
CSCE 552 Fall 2012

Presentation Transcript


  1. CSCE 552 Fall 2012 AI By Jijun Tang

  2. Homework 3 • List of AI techniques in games you have played; • Select one game and discuss how AI enhances its game play or how its AI can be improved • Due Nov 28th

  3. Command Hierarchy • Strategy for dealing with decisions at different levels • From the general down to the foot soldier • Modeled after military hierarchies • General directs high-level strategy • Foot soldier concentrates on combat

  4. Dead Reckoning • Method for predicting an object’s future position from its current position, velocity, and acceleration • Works well since movement is generally close to a straight line over short time periods • Can also bound how far the object could have moved • Example: in a shooting game, estimating the leading distance for a shot
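
A minimal sketch of the extrapolation the slide describes (constant-acceleration prediction; names are illustrative):

```python
def dead_reckon(pos, vel, acc, dt):
    """Predict a future position from current position, velocity,
    and acceleration, assuming motion stays near a straight line."""
    return tuple(p + v * dt + 0.5 * a * dt * dt
                 for p, v, a in zip(pos, vel, acc))

# A target at the origin moving +x at 10 units/s with no acceleration:
predicted = dead_reckon((0.0, 0.0), (10.0, 0.0), (0.0, 0.0), 0.5)
# → (5.0, 0.0), so aim half a second's travel ahead of the target
```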

  5. Emergent Behavior • Behavior that wasn’t explicitly programmed • Emerges from the interaction of simpler behaviors or rules • Rules: seek food, avoid walls • Can result in unanticipated individual or group behavior
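
The classic illustration is boids-style flocking: each agent follows only simple local rules, yet flock behavior emerges. A loose sketch (rule weights are arbitrary choices for illustration):

```python
def flock_step(boids, dt=0.1):
    """One update of a boids-style flock: each boid applies three local
    rules (cohesion, separation, alignment); flocking emerges."""
    new = []
    for i, (pos, vel) in enumerate(boids):
        others = [b for j, b in enumerate(boids) if j != i]
        # Cohesion: steer gently toward the center of the other boids.
        cx = sum(p[0] for p, _ in others) / len(others)
        cy = sum(p[1] for p, _ in others) / len(others)
        ax, ay = (cx - pos[0]) * 0.01, (cy - pos[1]) * 0.01
        # Separation: push away from any boid that is too close.
        for p, _ in others:
            dx, dy = pos[0] - p[0], pos[1] - p[1]
            if dx * dx + dy * dy < 1.0:
                ax, ay = ax + dx, ay + dy
        # Alignment: drift toward the average velocity of the others.
        avx = sum(v[0] for _, v in others) / len(others)
        avy = sum(v[1] for _, v in others) / len(others)
        ax, ay = ax + (avx - vel[0]) * 0.05, ay + (avy - vel[1]) * 0.05
        vel = (vel[0] + ax, vel[1] + ay)
        new.append(((pos[0] + vel[0] * dt, pos[1] + vel[1] * dt), vel))
    return new

# Two distant, stationary boids start drifting together purely from the rules:
flock = [((0.0, 0.0), (0.0, 0.0)), ((10.0, 0.0), (0.0, 0.0))]
flock = flock_step(flock)
```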

  6. Flocking/Formation

  7. Mapping Example

  8. Level-of-Detail AI • Optimization technique like graphical LOD • Only perform AI computations if player will notice • For example • Only compute detailed paths for visible agents • Off-screen agents don’t think as often

  9. Manager Task Assignment • Manager organizes cooperation between agents • Manager may be invisible in game • Avoids complicated negotiation and communication between agents • Manager identifies important tasks and assigns them to agents • For example, a coach in an AI football team

  10. Example
  Amit [to Steve]: Hello, friend!
  Steve [nods to Bryan]: Welcome to CGDC.
  [Amit exits left.]

  Amit.turns_towards(Steve);
  Amit.walks_within(3);
  Amit.says_to(Steve, "Hello, friend!");
  Amit.waits(1);
  Steve.turns_towards(Bryan);
  Steve.walks_within(5);
  Steve.nods_to(Bryan);
  Steve.waits(1);
  Steve.says_to(Bryan, "Welcome to CGDC.");
  Amit.waits(3);
  Amit.face_direction(DIR_LEFT);
  Amit.exits();

  11. Example • The player escapes during combat: pop Combat off the stack and fall back to Search; if the player is not found, pop Search off and fall back to Patrol, …
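
This pop-off behavior is a stack-based (pushdown) state machine; a minimal sketch with the slide's states:

```python
class StateStack:
    """Stack-based FSM: push a state when it takes over, pop it to
    resume whatever the agent was doing before."""
    def __init__(self, base_state):
        self._stack = [base_state]

    def push(self, state):
        self._stack.append(state)

    def pop(self):
        if len(self._stack) > 1:   # never pop the base state
            self._stack.pop()

    @property
    def current(self):
        return self._stack[-1]

ai = StateStack("patrol")
ai.push("search")   # heard a noise: start searching
ai.push("combat")   # found the player: engage
ai.pop()            # player escapes: Combat popped, Search resumes
ai.pop()            # search fails: Search popped, back to Patrol
```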

  12. Example

  13. Bayesian Networks • Performs humanlike reasoning when faced with uncertainty • Potential for modeling what an AI should know about the player • Alternative to cheating • RTS example: the AI can infer the existence or nonexistence of units the player has built

  14. Example

  15. Bayesian Networks • Inferring unobserved variables • Parameter learning • Structure learning

  16. Blackboard Architecture • Complex problem is posted on a shared communication space • Agents propose solutions • Solutions scored and selected • Continues until problem is solved • Alternatively, use concept to facilitate communication and cooperation

  17. Decision Tree Learning • Constructs a decision tree based on observed measurements from game world • Best known game use: Black & White • Creature would learn and form “opinions” • Learned what to eat in the world based on feedback from the player and world

  18. Filtered Randomness • Filters randomness so that it still appears random to players over the short term • Removes undesirable events • Like a coin coming up heads 8 times in a row • Statistical randomness is largely preserved without gross peculiarities • Example: • In an FPS, opponents should randomly spawn from different locations (and never spawn from the same location more than 2 times in a row)
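
The spawn-point example might be sketched like this (names hypothetical):

```python
import random

def filtered_choice(options, history, max_repeats=2):
    """Pick an option at random, but never let the same option occur
    more than max_repeats times in a row."""
    candidates = list(options)
    if (len(history) >= max_repeats
            and len(set(history[-max_repeats:])) == 1):
        candidates = [o for o in candidates if o != history[-1]]
    pick = random.choice(candidates)
    history.append(pick)
    return pick

# Hypothetical FPS spawn points: no point is ever used 3 times in a row.
spawns = ["north", "south", "east"]
history = []
for _ in range(100):
    filtered_choice(spawns, history)
```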

  19. Genetic Algorithms • Technique for search and optimization that uses evolutionary principles • Good at finding a solution in complex or poorly understood search spaces • Typically done offline before game ships • Example: • Game may have many settings for the AI, but interaction between settings makes it hard to find an optimal combination
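
A toy sketch of the settings-tuning example, with a stand-in fitness function (in a real offline pipeline, fitness would come from playing the game with those settings):

```python
import random

def evolve(fitness, n_genes=4, pop_size=20, generations=50):
    """Toy GA: evolve a vector of AI settings in [0, 1] to maximize
    a fitness function, via selection, crossover, and mutation."""
    pop = [[random.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]              # selection: keep fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_genes)     # single-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(n_genes)          # mutate one gene slightly
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Stand-in fitness: pretend the AI plays best with every setting at 0.5.
best = evolve(lambda genes: -sum((x - 0.5) ** 2 for x in genes))
```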

  20. Flowchart

  21. N-Gram Statistical Prediction • Technique to predict the next value in a sequence • In the sequence 18181810181, it would predict 8 as the next value • Example • In a street-fighting game, the player just did Low Kick followed by Low Punch • Predict their next move and anticipate it
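
A small n-gram predictor over the digit sequence from the slide (here with context length 1, i.e. a bigram model):

```python
from collections import Counter, defaultdict

class NGramPredictor:
    """Predict the next symbol from the previous n-1 symbols."""
    def __init__(self, n=2):
        self.n = n
        self.counts = defaultdict(Counter)  # context -> next-symbol counts
        self.history = []

    def observe(self, symbol):
        if len(self.history) >= self.n - 1:
            context = tuple(self.history[-(self.n - 1):])
            self.counts[context][symbol] += 1
        self.history.append(symbol)

    def predict(self):
        context = tuple(self.history[-(self.n - 1):])
        seen = self.counts.get(context)
        return seen.most_common(1)[0][0] if seen else None

p = NGramPredictor(n=2)
for move in "18181810181":
    p.observe(move)
print(p.predict())  # → '8' (after a 1, an 8 followed 4 times out of 5)
```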

  22. Neural Networks • Complex non-linear functions that relate one or more inputs to an output • Must be trained with numerous examples • Training is computationally expensive, making them unsuited for in-game learning • Training can take place before the game ships • Once fixed, extremely cheap to compute
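
To show why evaluation is cheap once the network is fixed, here is a forward pass of a tiny two-layer network; the weights are made-up stand-ins for offline-trained values:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def evaluate(inputs, hidden_w, hidden_b, out_w, out_b):
    """Forward pass of a tiny fixed two-layer network: once the weights
    are trained offline, evaluation is a handful of multiply-adds."""
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
              for ws, b in zip(hidden_w, hidden_b)]
    return sigmoid(sum(w * h for w, h in zip(out_w, hidden)) + out_b)

# Made-up "trained" weights: inputs (health, enemy_distance) -> flee score.
flee = evaluate((0.2, 0.1),
                hidden_w=[(-4.0, -2.0), (3.0, -3.0)],
                hidden_b=[1.0, 0.5],
                out_w=(2.5, -1.5),
                out_b=-0.5)
# flee is a score in (0, 1) deciding whether the agent should run
```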

  23. Example

  24. Planning • Planning is a search to find a series of actions that change the current world state into a desired world state • Increasingly desirable as game worlds become more rich and complex • Requires • Good planning algorithm • Good world representation • Appropriate set of actions
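
A minimal planner meeting the three requirements above: breadth-first search as the algorithm, sets of facts as the world representation, and a hand-written action set (the door-and-key world is invented for illustration):

```python
from collections import deque

def plan(start, goal, actions):
    """Breadth-first search over world states: returns the shortest
    action sequence turning start into goal, or None if impossible.
    actions(state) yields (action_name, next_state) pairs."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, nxt in actions(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

# Toy world: states are frozensets of facts, actions add facts.
def actions(state):
    result = []
    if "has_key" not in state:
        result.append(("pick_up_key", state | {"has_key"}))
    if "has_key" in state and "door_open" not in state:
        result.append(("open_door", state | {"door_open"}))
    return result

print(plan(frozenset(), frozenset({"has_key", "door_open"}), actions))
# → ['pick_up_key', 'open_door']
```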

  25. Player Modeling • Build a profile of the player’s behavior • Continuously refine it during gameplay • Accumulate statistics and events • The player model is then used to adapt the AI • Make the game easier: if the player is poor with certain weapons, avoid exploiting that weakness • Make the game harder: exploit that weakness instead

  26. Production (Expert) Systems • Formal rule-based system • Database of rules • Database of facts • Inference engine to decide which rules trigger – resolves conflicts between rules • Example • Soar was used to experiment with Quake 2 bots • Upwards of 800 rules for a competent opponent
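
A miniature inference engine in this spirit (rules and facts invented; conflict resolution here is simply first-match-wins):

```python
def run_rules(facts, rules):
    """Tiny forward-chaining inference engine: repeatedly fire the first
    rule whose conditions hold and whose conclusion is not yet known."""
    facts = set(facts)
    fired = True
    while fired:
        fired = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                fired = True
                break              # conflict resolution: first match wins
    return facts

# Invented rule base in (conditions, conclusion) form:
rules = [
    ({"enemy_visible", "low_health"}, "retreat"),
    ({"enemy_visible", "has_ammo"}, "attack"),
    ({"retreat"}, "find_health_pack"),
]
facts = run_rules({"enemy_visible", "low_health"}, rules)
# facts now also contains "retreat" and "find_health_pack"
```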

  27. Reinforcement Learning • Machine learning technique • Discovers solutions through trial and error • Must reward and punish at appropriate times • Can solve difficult or complex problems like physical control problems • Useful when AI’s effects are uncertain or delayed
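
A tabular Q-learning sketch on a toy corridor world (environment and rewards invented for illustration); note how the delayed reward at the end propagates back to earlier states:

```python
import random

def q_learn(n_states, n_actions, step, episodes=500,
            alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning: learn action values by trial and error.
    step(state, action) -> (next_state, reward, done)."""
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if random.random() < epsilon:              # explore
                a = random.randrange(n_actions)
            else:                                      # exploit
                a = max(range(n_actions), key=lambda x: q[s][x])
            s2, reward, done = step(s, a)
            target = reward + (0.0 if done else gamma * max(q[s2]))
            q[s][a] += alpha * (target - q[s][a])      # reward/punish update
            s = s2
    return q

# Toy corridor 0-1-2-3: action 1 steps right, action 0 steps left;
# reaching state 3 ends the episode with reward 1.
def step(s, a):
    s2 = min(3, s + 1) if a == 1 else max(0, s - 1)
    return s2, (1.0 if s2 == 3 else 0.0), s2 == 3

q = q_learn(4, 2, step)
policy = [max(range(2), key=lambda a: q[s][a]) for s in range(3)]
# policy should be "move right" (1) in every non-terminal state
```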

  28. Reputation System • Models player’s reputation within the game world • Agents learn new facts by watching player or from gossip from other agents • Based on what an agent knows • Might be friendly toward player • Might be hostile toward player • Affords new gameplay opportunities • “Play nice OR make sure there are no witnesses”

  29. Smart Terrain • Put intelligence into inanimate objects • Agent asks the object how to use it: how to open the door, how to set the clock, etc. • Agents can use objects they weren’t originally programmed for • Allows for expansion packs or user-created objects, as in The Sims • Informed by Affordance Theory • Objects by their very design afford a specific type of interaction

  30. Speech Recognition • Players can speak into microphone to control some aspect of gameplay • Limited recognition means only simple commands possible • Problems with different accents, different genders, different ages (child vs adult)

  31. Text-to-Speech • Turns ordinary text into synthesized speech • Cheaper than hiring voice actors • Quality of speech is still a problem • Not particularly natural sounding • Intonation problems • Algorithms not good at “voice acting”: the mouth needs to be animated based on the text • Large disc capacities make recording human voices practical, so there is little need to resort to a worse-sounding solution

  32. Weakness Modification Learning • General strategy to keep the AI from losing to the player in the same way every time • Two main steps 1. Record a key gameplay state that precedes a failure 2. Recognize that state in the future and change something about the AI behavior • AI might not win more often or act more intelligently, but won’t lose in the same way every time • Keeps “history from repeating itself”

  33. Artificial Intelligence: Pathfinding

  34. PathPlannerApp Demo

  35. Representing the Search Space • Agents need to know where they can move • Search space should represent either • Clear routes that can be traversed • Or the entire walkable surface • Search space typically doesn’t represent: • Small obstacles or moving objects • Most common search space representations: • Grids • Waypoint graphs • Navigation meshes

  36. Grids • 2D grids – intuitive world representation • Works well for many games including some 3D games such as Warcraft III • Each cell is flagged • Passable or impassable • Each object in the world can occupy one or more cells

  37. Characteristics of Grids • Fast look-up • Easy access to neighboring cells • Complete representation of the level
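
The grid characteristics above can be sketched with a minimal hypothetical class: constant-time passability look-up and easy access to neighboring cells:

```python
class Grid:
    """Grid search space: cells flagged passable/impassable, with
    constant-time look-up and easy neighbor access."""
    def __init__(self, rows, cols, blocked=()):
        self.rows, self.cols = rows, cols
        self.blocked = set(blocked)       # impassable cells

    def passable(self, cell):
        r, c = cell
        return (0 <= r < self.rows and 0 <= c < self.cols
                and cell not in self.blocked)

    def neighbors(self, cell):
        r, c = cell
        candidates = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
        return [n for n in candidates if self.passable(n)]

g = Grid(3, 3, blocked={(1, 1)})
print(g.neighbors((0, 1)))  # → [(0, 0), (0, 2)]  (up is off-grid, down is blocked)
```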

  38. Waypoint Graph • A waypoint graph specifies lines/routes that are “safe” for traversing • Each line (or link) connects exactly two waypoints

  39. Characteristics of Waypoint Graphs • Waypoint node can be connected to any number of other waypoint nodes • Waypoint graph can easily represent arbitrary 3D levels • Can incorporate auxiliary information • Such as ladders and jump pads • Radius of the path

  40. Navigation Meshes • Combination of grids and waypoint graphs • Every node of a navigation mesh represents a convex polygon (or area) • As opposed to a single position in a waypoint node • Advantage of convex polygon • Any two points inside can be connected without crossing an edge of the polygon • Navigation mesh can be thought of as a walkable surface

  41. Navigation Meshes (continued)

  42. Computational Geometry • CGAL (Computational Geometry Algorithms Library) • Find the closest phone • Find the route from point A to B • Convex hull

  43. Example—No Rotation

  44. Space Split

  45. Resulting Path

  46. Improvement

  47. Example 2—With Rotation

  48. Example 3—Visibility Graph

  49. Random Trace • Simple algorithm • Agent moves towards goal • If goal reached, then done • If obstacle • Trace around the obstacle clockwise or counter-clockwise (pick randomly) until free path towards goal • Repeat procedure until goal reached
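
A loose grid-based sketch of the procedure (a simplification close to the classic "bug" algorithms: on hitting an obstacle it rotates the step direction in a randomly chosen sense until a passable cell is found, rather than fully tracing the obstacle boundary):

```python
import random

def random_trace(start, goal, passable, max_steps=500):
    """Head straight for the goal; on hitting an obstacle, rotate the
    step direction clockwise or counter-clockwise (picked randomly)
    until a passable cell is found, then continue toward the goal.
    Returns the path, or None if out of steps. Assumes the agent is
    never fully enclosed by obstacles."""
    dirs = [(0, 1), (1, 0), (0, -1), (-1, 0)]
    pos, path = start, [start]
    for _ in range(max_steps):
        if pos == goal:
            return path
        dx = (goal[0] > pos[0]) - (goal[0] < pos[0])
        dy = (goal[1] > pos[1]) - (goal[1] < pos[1])
        step = (dx, 0) if dx else (0, dy)
        if not passable((pos[0] + step[0], pos[1] + step[1])):
            spin = random.choice([1, -1])        # pick a trace direction
            i = dirs.index(step)
            while not passable((pos[0] + dirs[i][0], pos[1] + dirs[i][1])):
                i = (i + spin) % 4
            step = dirs[i]
        pos = (pos[0] + step[0], pos[1] + step[1])
        path.append(pos)
    return None

# A vertical wall at x == 2 stands between start and goal:
def open_cell(cell):
    x, y = cell
    return not (x == 2 and 0 <= y <= 2)

path = random_trace((0, 0), (4, 0), open_cell)
```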

  50. Random Trace (continued) • How will Random Trace do on the following maps?
