
AI and Pathfinding


Presentation Transcript


  1. AI and Pathfinding CS 4730 – Computer Game Design Some slides courtesy Tiffany Barnes, NCSU

  2. AI Strategies • Reaction vs. Deliberation • When having the NPC make a decision, how much thought goes into the next move? • How is the AI different in: • Frozen Synapse • Kingdom Hearts • Civilization • Halo 2

  3. AI Strategies • Reaction-Based • Fast, but limited capabilities • Implementations • Finite-State Machines • Rule-Based Systems • Set Patterns
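A reaction-based approach like a finite-state machine can be sketched in a few lines. This is an illustrative sketch only: the guard states, distance thresholds, and transition rules are all invented for the example, not taken from the slides.

```python
# Minimal finite-state machine for a reactive guard NPC (illustrative sketch;
# states and thresholds are invented for this example).
class GuardFSM:
    def __init__(self):
        self.state = "patrol"

    def update(self, player_distance):
        # Transition rules: react only to how close the player is right now.
        if self.state == "patrol" and player_distance < 10:
            self.state = "chase"
        elif self.state == "chase":
            if player_distance < 2:
                self.state = "attack"
            elif player_distance > 15:
                self.state = "patrol"
        elif self.state == "attack" and player_distance >= 2:
            self.state = "chase"
        return self.state

guard = GuardFSM()
print(guard.update(20))  # patrol
print(guard.update(8))   # chase
print(guard.update(1))   # attack
```

Fast to evaluate every frame, but the NPC can only do what a hand-written transition table anticipates — which is exactly the "fast, but limited" trade-off.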

  4. AI Strategies • Deliberation-Based • Much slower, but more adaptable • Implementations • A* / Dijkstra • Roadmaps • Genetic Algorithms

  5. Deliberation • The goal is to emphasize making the best possible decision by searching across all possibilities • Thus, deliberation tends to use various search algorithms across data structures that contain the option space

  6. “Sense – Plan – Act” • Sense (or perceive) a sufficiently complete model of the world • Plan by searching over possible future situations that would result from taking various actions • Act by executing the best course of action • Each possible outcome is effectively scored by a “metric of success” that indicates whether the choice should be taken or not
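The Sense–Plan–Act loop might be sketched as below. Everything here is an assumption for illustration: the world model, the candidate actions, and the scoring function standing in for the “metric of success” are all invented.

```python
# Sense-Plan-Act sketch: each tick the NPC senses the world, scores every
# candidate action with a "metric of success", and executes the best one.
# The world fields, actions, and scoring formulas are invented.
def sense(world):
    # Build the NPC's (partial) model of the world.
    return {"npc_hp": world["npc_hp"], "dist_to_player": world["dist_to_player"]}

def plan(model):
    # Score each possible action; higher is better.
    def score(action):
        if action == "attack":
            return 10 - model["dist_to_player"]   # better when close
        if action == "retreat":
            return (50 - model["npc_hp"]) / 10    # better when badly hurt
        return 1.0                                # "wander" baseline
    return max(["attack", "retreat", "wander"], key=score)

def act(action):
    return f"executing {action}"

world = {"npc_hp": 12, "dist_to_player": 3}
print(act(plan(sense(world))))  # executing attack
```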

  7. Deliberative vs. Reactive • These are NOT mutually exclusive! • You can have reactive policies for immediate threats • Incoming PC fire • Environmental destruction • And you can have deliberative policies for long-term planning • Build orders and positioning

  8. Core Questions • How do you represent knowledge about the current task, environment, PC, etc.? • How do you find actions that allow the goal to be met?

  9. Knowledge Representation • A decision tree is a common way to represent knowledge • The root node is the current game state as seen by the NPC / AI • Thus, deliberative AI techniques are effectively versions of search and shortest-path algorithms!

  10. Tic-Tac-Toe • What is the decision tree for Tic-Tac-Toe?

  11. The Goal State • The objective of the decision tree model is to move from the initial game state (root of the tree) to the goal state of the NPC • What might a goal state be? • When searching through the decision tree space, which path do we take?

  12. Cost and Reward • Every choice has a cost • Ammo • Movement • Time • Increased vulnerability to attack • Every choice has a reward • Opportunity to hit the PC • Capture a strategic point • Gain new resources

  13. Minimax Algorithm • Find the path through the decision tree that yields the best outcome for one player, assuming the other player always makes a decision that would lead to the best outcome for themselves
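Minimax can be sketched compactly over a hand-built game tree. The tree and its leaf scores below are invented; in a real game the tree is generated from the game rules, with leaves scored from the maximizing player's point of view.

```python
# Minimax over a small explicit game tree (tree and leaf values invented).
# Leaves are numbers: outcome scores for the maximizing player.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):   # leaf: a scored outcome
        return node
    child_values = [minimax(child, not maximizing) for child in node]
    return max(child_values) if maximizing else min(child_values)

# Depth-2 tree: we pick a branch, then the opponent picks a leaf.
tree = [[3, 12], [2, 4], [14, 1]]
# Opponent minimizes each branch to 3, 2, 1 -> we take the branch worth 3.
print(minimax(tree, True))  # 3
```

Note the branch containing 14 looks tempting, but the opponent would answer with 1 — minimax plans around the opponent's best reply, not the best case.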

  14. Naïve Search Algorithms • Breadth-First Search • At each depth, explore every option at the next depth • Depth-First Search • Fully explore one possible path to its “conclusion”, then backtrack to check other options • What are the problems with these techniques in gaming?

  15. Breadth-First Search • Expand Root node • Expand all Root node’s children • Expand all Root node’s grandchildren • Problem: Memory size (diagram: the tree expanding one full level at a time)
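A sketch of breadth-first search over a small invented graph. The frontier holds whole paths, so it grows with the breadth of the tree — the memory problem noted on the slide.

```python
from collections import deque

# Breadth-first search over an explicit graph (adjacency lists invented).
# Returns the path from start to goal, or None if unreachable.
def bfs(graph, start, goal):
    frontier = deque([[start]])   # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

graph = {"Root": ["C1", "C2"], "C1": ["G1", "G2"], "C2": ["G3", "G4"]}
print(bfs(graph, "Root", "G3"))  # ['Root', 'C2', 'G3']
```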

  16. Uniform Cost Search • Modify Breadth-First by expanding cheapest nodes first • Minimize g(n), the cost of the path so far (diagram: grandchildren with path costs 9, 5, 3, 8)
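Uniform-cost search can be sketched by swapping BFS's queue for a priority queue ordered by g(n). The graph and edge costs below are invented to loosely mirror the slide's example.

```python
import heapq

# Uniform-cost search: always expand the cheapest path so far, i.e.
# minimize g(n). Graph and edge costs are invented for illustration.
def uniform_cost(graph, start, goal):
    frontier = [(0, start, [start])]      # (g(n), node, path)
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for neighbor, step in graph.get(node, []):
            new_cost = cost + step
            if new_cost < best_cost.get(neighbor, float("inf")):
                best_cost[neighbor] = new_cost
                heapq.heappush(frontier,
                               (new_cost, neighbor, path + [neighbor]))
    return None

graph = {
    "Root": [("C1", 1), ("C2", 2)],
    "C1": [("G1", 9), ("G2", 5)],
    "C2": [("G3", 3), ("G4", 8)],
}
print(uniform_cost(graph, "Root", "G3"))  # (5, ['Root', 'C2', 'G3'])
```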

  17. Depth First Search • Always expand the node that is deepest in the tree (diagram: one branch expanded to its leaf before backtracking)
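A recursive depth-first sketch over an invented graph. DFS uses little memory but commits fully to one branch at a time, which is why it can wander far from the goal in a game map.

```python
# Depth-first search: always expand the deepest node first.
# The graph is invented; cycles are avoided by checking the current path.
def dfs(graph, node, goal, path=None):
    path = (path or []) + [node]
    if node == goal:
        return path
    for neighbor in graph.get(node, []):
        if neighbor not in path:          # avoid revisiting along this branch
            result = dfs(graph, neighbor, goal, path)
            if result:
                return result
    return None

graph = {"Root": ["C1", "C2"], "C1": ["G1", "G2"], "C2": ["G3"]}
print(dfs(graph, "Root", "G3"))  # ['Root', 'C2', 'G3']
```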

  18. Adding a Heuristic • Simple definition: a heuristic is a “mental shortcut” to ignore non-useful states, limiting the search space and making the decision tree more reasonable • What metrics might we use to determine “the value” of a potential option? • What metrics might we use to determine “the cost” of a potential option?

  19. Adding a Heuristic • Creating an AI heuristic forms the basis of how the NPCs will behave • Will they ignore enemies that are farther than X away? • Will they avoid water? • Will they always move in the straightest path to the PC?

  20. Cheaper Distance First!

  21. Greedy Search • Expand the node that yields the minimum cost • Expand the node that is closest to the target • Tends to behave depth-first • Minimize h(n), the heuristic cost function
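Greedy best-first search might be sketched as below: expand purely by the heuristic h(n) (Manhattan distance here), ignoring the cost paid so far. The grid is invented (1 = wall, 4-way movement).

```python
import heapq

# Greedy best-first search on an invented grid: order the frontier only
# by h(n), the heuristic estimate to the goal, never by cost so far.
def greedy_search(grid, start, goal):
    def h(pos):  # Manhattan-distance heuristic
        return abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])
    frontier = [(h(start), start, [start])]
    visited = {start}
    while frontier:
        _, (r, c), path = heapq.heappop(frontier)
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in visited):
                visited.add((nr, nc))
                heapq.heappush(frontier,
                               (h((nr, nc)), (nr, nc), path + [(nr, nc)]))
    return None

grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(greedy_search(grid, (0, 0), (2, 2)))
```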

  22. Greedy Search (diagram)

  23. Greedy Search (diagram)

  24. Greedy Search • Greedy often gives us a sub-optimal path, but it’s really cheap to calculate! • How can we improve on this? • Add another aspect to the function – the cost of the node plus the heuristic distance

  25. A* • A best-first search (using heuristics) to find the least-cost path from the initial state to the goal state • The algorithm follows the path of lowest expected cost, keeping a priority queue of alternate path segments along the way

  26. A* Search • Minimize the sum of costs • g(n) + h(n) • Cost so far + heuristic estimate to the goal • Guaranteed to find the least-cost path • If h(n) does not overestimate the cost (i.e., h is admissible) • Example: Euclidean distance
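A sketch of A* on an invented grid (1 = wall, 4-way movement), ordering the priority queue by f(n) = g(n) + h(n) with an admissible Manhattan heuristic:

```python
import heapq

# A* search: expand by f(n) = g(n) + h(n). The grid is invented;
# the Manhattan heuristic never overestimates for 4-way movement,
# so the returned path cost is optimal.
def astar(grid, start, goal):
    def h(pos):
        return abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, (r, c), path = heapq.heappop(frontier)
        if (r, c) == goal:
            return g, path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0):
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng,
                                              (nr, nc), path + [(nr, nc)]))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
cost, path = astar(grid, (0, 0), (2, 0))
print(cost)  # 6: the only route detours around the walls
```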

  27. A* (diagram)

  28. A* (diagram)

  29. A* (diagram)

  30. Navigation Grid (diagram)

  31. Pathfinding in “real life” • These algorithms work great when the game is grid-based • Square grid • Hex grid • For more “open” games, like FPSs: • Path nodes are placed on the map that NPCs can reach • Navigation mesh layers are added over the terrain • Often done automatically in advanced engines

  32. Navigation Mesh • Instead of using discrete node locations, a node in this instance is a convex polygon • Every point inside a valid polygon can be considered “fair game” to move into • Navigation meshes can be auto-generated by the engine, so they are easier to manage than nodes • Can also handle different-sized NPCs by checking collisions

  33. Navigation Mesh (diagram)
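Because navmesh nodes are convex polygons, testing whether a point is "fair game" reduces to checking that it lies on the same side of every edge. A sketch (the polygon and point values are invented):

```python
# Point-in-convex-polygon test via cross products: for a convex polygon,
# an interior point is on the same side of every edge. Polygon invented.
def point_in_convex_polygon(point, polygon):
    px, py = point
    sign = 0
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # z-component of (edge vector) x (vector to the point)
        cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False    # point is on the wrong side of this edge
    return True                 # boundary points count as inside

quad = [(0, 0), (4, 0), (4, 3), (0, 3)]        # one convex walkable cell
print(point_in_convex_polygon((2, 1), quad))   # True
print(point_in_convex_polygon((5, 1), quad))   # False
```

This also suggests why convexity matters for navmeshes: any two points inside a convex cell can be joined by a straight walkable segment, so pathfinding only has to reason about cell-to-cell transitions.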

  34. Groups • Groups stay together • All units move at the same speed • All units follow the same general path • Units arrive at the same time (diagram: a group pathing around an obstruction to the goal)

  35. Groups • Need a hierarchical movement system • Group structure • Manages its own priorities • Resolves its own collisions • Elects a commander that traces paths, etc • Commander can be an explicit game feature

  36. Formations • Groups with unit layouts • Layouts designed in advance • Additional States • Forming • Formed • Broken • Only formed formations can move

  37. Formations • Schedule arrival into position • Start at the middle and work outwards • Move one unit at a time into position • Pick the next unit with • Least collisions • Least distance • Formed units have highest priority • Forming units medium priority • Unformed units lowest
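The "least distance" selection rule above might be sketched as a greedy nearest-pair assignment: repeatedly move the unit whose best remaining slot is closest. The unit and slot positions are invented, and real engines would also weigh collisions and priority.

```python
# Greedy "least distance" scheduling of units into formation slots:
# repeatedly pick the (unit, slot) pair with the smallest distance and
# move that unit into position next. Positions are invented.
def schedule_formation(units, slots):
    units, slots = dict(units), dict(slots)
    order = []
    while units:
        uid, sid = min(
            ((u, s) for u in units for s in slots),
            key=lambda pair: (units[pair[0]][0] - slots[pair[1]][0]) ** 2
                           + (units[pair[0]][1] - slots[pair[1]][1]) ** 2,
        )
        order.append((uid, sid))
        del units[uid], slots[sid]
    return order

units = {"a": (0, 0), "b": (5, 0), "c": (9, 0)}
slots = {"s1": (1, 0), "s2": (4, 0), "s3": (8, 0)}
print(schedule_formation(units, slots))
# [('a', 's1'), ('b', 's2'), ('c', 's3')]
```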

  38. Formations (diagram: two slot-assignment orders compared – “Not so good…” versus “Better…”)

  39. Formations: Wheeling • Only necessary for non-symmetric formations (diagram: break formation here, stop motion temporarily, set the re-formation point here)

  40. Formations: Obstacles • Scale the formation layout to fit through gaps • Subdivide the formation around small obstacles (diagram)

  41. Formations • Adopt a hierarchy of paths to simplify path-planning problems • High-level path considers only large obstacles • Perhaps at lower resolution • Solves problem of gross formation movement • Paths around major terrain features

  42. AI That Learns • Imagine a player in Madden calls a particular play on offense over and over and over • The heuristic values for certain states should change to reflect a more optimal strategy • Now, the adjustment of heuristic values represents long-term strategy (to a degree)
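One minimal sketch of this kind of heuristic adjustment, assuming invented play names and an invented weighting formula: count the player's play calls and scale a defensive heuristic weight toward whatever they repeat most.

```python
from collections import Counter

# Heuristic adjustment sketch: track how often the player calls each play
# and bias the defense's heuristic weight toward the most repeated one.
# Play names and the weight formula are invented for illustration.
class PlayPredictor:
    def __init__(self):
        self.history = Counter()

    def observe(self, play):
        self.history[play] += 1

    def defense_weight(self, play):
        # 1.0 baseline, plus up to +1.0 as a play dominates the history.
        seen = self.history[play]
        total = sum(self.history.values()) or 1
        return 1.0 + seen / total

predictor = PlayPredictor()
for _ in range(8):
    predictor.observe("HB screen")
predictor.observe("deep post")
print(round(predictor.defense_weight("HB screen"), 2))  # 1.89
print(round(predictor.defense_weight("deep post"), 2))  # 1.11
```

This is frequency counting, not true learning, but it illustrates the slide's point: shifting heuristic values over time is often enough to make the AI appear to adapt.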

  43. AI That Evolves • Neural networks add in a mutation mechanic • Bayesian networks can also learn and add inferences • How much processing power can we use for this? • Is it better to truly have a “learning” AI, or should we just adjust some game parameters?
