Friday, February 25, 2011

Improved Behavior Tree + Unity

This week I used the feedback I had received to improve my behavior tree.


Here is the logic behind the new and improved tree:

  • Assumption: The agent is all-knowing (it knows where the enemies are, where the target is, etc.)
  • 4 basic actions: Walk, Run, Flee, Hide
    • Note that these are subtrees (since they are behavior trees themselves)
  • 2 agent states: Detected, Undetected
    • When Detected, Flee or Run until Undetected
    • When Undetected, continue to make progress toward goal, unless Detected
  • In any given state, the way an agent decides which action to take depends on its risk attitude
    • Each agent will have parameters r_loving and r_averse, which sum to 1.0
    • These parameters will be passed into stochastic selectors to determine which action to take next
    • (I also plan to incorporate enemy distance into the calculation when computing probabilities for stochastic selectors, but I haven't come up with a precise equation yet)
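The stochastic-selector idea above can be sketched quickly. This is a minimal illustration in Python (the project itself is in C#), and the class name, leaf actions, and weight values are all assumptions; the post does not specify how the selector is implemented:

```python
import random

class StochasticSelector:
    """Picks one child to run, weighted by probability.

    A sketch of the stochastic selector described above: each child
    action is paired with a weight (e.g. r_loving or r_averse), and
    the weights are assumed to sum to 1.0.
    """
    def __init__(self, weighted_children):
        self.weighted_children = weighted_children  # list of (weight, node)

    def tick(self, rng=random):
        actions = [node for _, node in self.weighted_children]
        weights = [w for w, _ in self.weighted_children]
        chosen = rng.choices(actions, weights=weights, k=1)[0]
        return chosen()

# Hypothetical leaf actions for an undetected agent:
run = lambda: "Run"
walk = lambda: "Walk"

# A risk-loving agent (r_loving = 0.8) runs most of the time.
selector = StochasticSelector([(0.8, run), (0.2, walk)])
```

Folding enemy distance into the weights would just mean computing the `(weight, node)` pairs from the current world state on each tick instead of fixing them at construction time.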
I also played around some in Unity.  I am now able to dynamically create an arbitrary map from a .txt config file in Unity.  The camera view is orthographic, since I am working in 2D.  I can also use the arrow keys to move my agent around for testing.
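Loading a map from a .txt config file can be sketched as below. The post doesn't show the actual file format, so the character conventions here ('#' for walls, '.' for open floor) are an assumption for illustration:

```python
def load_map(lines):
    """Parse a simple ASCII map into wall coordinates plus dimensions.

    Assumed format: '#' is a wall, '.' is open floor. In Unity, each
    wall coordinate would then be instantiated as a tile or cube at
    that (x, y) position under the orthographic camera.
    """
    walls = set()
    for y, row in enumerate(lines):
        for x, ch in enumerate(row.rstrip("\n")):
            if ch == "#":
                walls.add((x, y))
    height = len(lines)
    width = max((len(r.rstrip("\n")) for r in lines), default=0)
    return walls, width, height

# A 3x3 room: solid border, open center.
walls, w, h = load_map(["###", "#.#", "###"])
```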



Things to do next:
  • I'd like to add a visual marker on the agent to indicate what direction he's facing
  • Have the camera follow the target
  • A* pathfinding

Thursday, February 17, 2011

Learning Unity

This week, I did some more reading on behavior trees and have new ideas for how to structure my behavior tree.

To better understand behavior trees and why they're effective, it was helpful to first do some background reading on hierarchical finite state machines (HFSMs) and hierarchical task network (HTN) planners, two other commonly used methods for AI development.  An HFSM is simply a hierarchy of FSMs, which lets us take advantage of states that share common transitions.  The downside is that, while transitions can be reused, the states are not modular and so are not easily reusable.
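The transition-sharing trick in an HFSM can be shown with a tiny sketch (in Python; state and event names here are invented for illustration). A transition that several child states share is defined once on their common parent, and lookup walks up the hierarchy:

```python
class State:
    """A state that may defer unhandled events to a parent state."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.transitions = {}  # event -> target state name

    def next_state(self, event):
        # Look up the transition locally, then fall back to the
        # parent: shared transitions live once, on the parent.
        state = self
        while state is not None:
            if event in state.transitions:
                return state.transitions[event]
            state = state.parent
        return None

combat = State("Combat")
combat.transitions["low_health"] = "Flee"   # shared by all combat substates
melee = State("Melee", parent=combat)
ranged = State("Ranged", parent=combat)
```

Both `Melee` and `Ranged` now react to `low_health` without repeating the transition, which is the reuse the paragraph above describes; the states themselves, though, stay wired to this particular hierarchy, which is the modularity problem behavior trees address.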

HTNs on the other hand take in an initial state, desired goal, and set of possible tasks to produce a sequence of actions that lead from the initial state to the goal.  Constraints are represented in task networks, which can get very complex very quickly.

Behavior trees simplify these representations by distilling them into distinct behaviors.  Using selectors, sequences, and decorators, you can pretty much build any complex behavior from these simple operations.
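The two core composite nodes can be sketched in a few lines (in Python here; this is a generic textbook formulation, not the project's actual implementation). A sequence fails as soon as one child fails; a selector succeeds as soon as one child succeeds, which is what lets it fall through to a backup behavior:

```python
SUCCESS, FAILURE = "success", "failure"

class Sequence:
    """Runs children in order; fails on the first child that fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Tries children in order; succeeds on the first that succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

class Leaf:
    """Stand-in for a real action or condition node."""
    def __init__(self, result):
        self.result = result
    def tick(self):
        return self.result

# The selector skips a failing option and runs the next one:
tree = Selector(Leaf(FAILURE), Sequence(Leaf(SUCCESS), Leaf(SUCCESS)))
```

Decorators are just single-child nodes that transform their child's result (inverting it, repeating it, etc.), so they fit the same `tick` interface.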

The behavior tree approach is very different from the initial research and literature review I conducted on stealthy agents.  The methods used in my initial research, such as corridor maps and knowledge-based probability maps, rely on heavy preprocessing of environmental variables to calculate an optimal path.  This does not allow the agent to react as easily to a changing environment.  The ability for an agent to react to changes in its environment is a definite advantage of using behavior trees.

To improve my behavior tree, I plan to categorize behaviors into undetected and detected.  Undetected behaviors will be further broken down into risk-loving, risk-neutral, and risk-averse behaviors.  These attitudes toward risk will guide the agent's decision-making (such as whether he should run across an open space and risk being detected by an enemy unit).  Detected behaviors will include strategies that the agent should employ once it has been detected by an enemy, such as running away quickly, running and then hiding in a hiding place, etc.  Once the agent has evaded detection, he should resume undetected behaviors to get to the target location.  I'll need to break this down further, but that's the big-picture strategy for now.
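The big-picture split above reduces to a small top-level decision, sketched here in Python. The action names and the three-way risk mapping are placeholders, since the actual subtrees are still being designed:

```python
def choose_behavior(detected, risk_attitude):
    """Sketch of the planned top-level split: detected agents evade
    until they shake pursuit; undetected agents make progress toward
    the goal in a style set by their risk attitude."""
    if detected:
        return "Flee"  # evade first, resume goal-seeking once undetected
    return {"risk-loving": "Run",
            "risk-neutral": "Walk",
            "risk-averse": "Hide and creep"}[risk_attitude]
```

In the full tree, each returned action would be a subtree rather than a string, and the risk branch would be a stochastic choice rather than a fixed lookup.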

I also started playing around in Unity and attended the Unity tutorial on Sunday.  I'm working to get basic movement of the agent.  I have him moving, but have had some trouble getting the collision detection to work.

Thursday, February 10, 2011

Behavior Tree

This week, I developed a first draft of a behavior tree to model agent behavior.  Behaviors I considered included: walking, changing speed, hiding, fleeing.  These behaviors react to changing conditions in the environment, specifically with respect to the location of enemies.  It's still a rough model, so I welcome any feedback/ideas!



Thursday, February 3, 2011

Incorporating Feedback

While my initial research on the corridor map method (CMM) and knowledge-based probability maps was helpful in understanding the latest research developments in stealthy pathfinding, I realized this week that these approaches may not be the best starting point for my project.  Feedback from Joe and Norm suggested that I look into behavior trees as a way of representing the behavior of my stealth agents.

After meeting with Joe this week, I got a better sense of how to break down my project.  I will need to spend some time setting up my development framework and learning the tools (C# and Unity) I need for my project.  This week, I will play around with C# and Unity, try to create a 2D map in Unity, and draft a behavior tree for feedback.