
For Friday

Learn about genetic algorithms and game playing in artificial intelligence. Understand the minimax algorithm, evaluation functions, cutting off search, alpha-beta pruning, and dealing with imperfect knowledge.


Presentation Transcript


  1. For Friday • Finish reading chapter 7 • Homework: • Chapter 6, exercises 1 (all) and 3 (a-c only)

  2. Program 1 • Any questions?

  3. Genetic Algorithms • Have a population of k states (or individuals) • Have a fitness function that evaluates the states • Create new individuals by randomly selecting pairs and mating them using a randomly selected crossover point. • More fit individuals are selected with higher probability. • Apply random mutation. • Keep top k individuals for next generation.
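
The slide's loop can be sketched in a few lines of Python. This is a minimal illustration under assumed details (bit-string individuals, made-up mutation rate and population size), not code from the course.

    import random

    def genetic_algorithm(population, fitness, generations=100, mutation_rate=0.05):
        """Evolve a population of bit-string individuals (minimal sketch)."""
        k = len(population)
        for _ in range(generations):
            # Fitness-proportional selection: fitter individuals are picked more often.
            weights = [fitness(ind) for ind in population]
            children = []
            for _ in range(k):
                mom, dad = random.choices(population, weights=weights, k=2)
                cut = random.randrange(1, len(mom))           # random crossover point
                child = mom[:cut] + dad[cut:]
                child = [bit ^ 1 if random.random() < mutation_rate else bit
                         for bit in child]                     # random mutation
                children.append(child)
            # Keep the top k individuals for the next generation.
            population = sorted(population + children, key=fitness, reverse=True)[:k]
        return max(population, key=fitness)

    # Example: maximize the number of 1s in a 20-bit string.
    pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(10)]
    best = genetic_algorithm(pop, fitness=sum)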

  4. Other Issues • What issues arise from continuous spaces? • What issues do online search and unknown environments create?

  5. Game Playing in AI • Long history • Games are well-defined problems usually considered to require intelligence to play well • Introduces uncertainty (can’t know opponent’s moves in advance)

  6. Games and Search • Search spaces can be very large: • Chess • Branching factor: 35 • Depth: 50 moves per player • Search tree: 35^100 nodes (~10^40 legal positions) • Humans don’t seem to do much explicit search • Good test domain for search methods and pruning methods

  7. Game Playing Problem • Instance of general search problem • States where game has ended are terminal states • A utility function (or payoff function) determines the value of the terminal states • In 2 player games, MAX tries to maximize the payoff and MIN tries to minimize the payoff • In the search tree, the first layer is a move by MAX and the next a move by MIN, etc. • Each layer is called a ply

  8. Minimax Algorithm • Method for determining the optimal move • Generate the entire search tree • Compute the utility of each node moving upward in the tree as follows: • At each MAX node, pick the move with maximum utility • At each MIN node, pick the move with minimum utility (assume opponent plays optimally) • At the root, the optimal move is determined

  9. Recursive Minimax Algorithm

      function Minimax-Decision(game) returns an operator
        for each op in Operators[game] do
          Value[op] <- Minimax-Value(Apply(op, game), game)
        end
        return the op with the highest Value[op]

      function Minimax-Value(state, game) returns a utility value
        if Terminal-Test[game](state) then return Utility[game](state)
        else if MAX is to move in state then return highest Minimax-Value of Successors(state)
        else return lowest Minimax-Value of Successors(state)
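
The same procedure in runnable Python, assuming a hypothetical game object that exposes actions(state), result(state, action), is_terminal(state), utility(state), and to_move(state); a sketch of the algorithm, not code from the text.

    def minimax_decision(game, state):
        """Return the move for MAX with the highest backed-up minimax value."""
        return max(game.actions(state),
                   key=lambda a: minimax_value(game, game.result(state, a)))

    def minimax_value(game, state):
        """Back up utilities: MAX picks the maximum, MIN the minimum."""
        if game.is_terminal(state):
            return game.utility(state)
        values = [minimax_value(game, game.result(state, a))
                  for a in game.actions(state)]
        return max(values) if game.to_move(state) == 'MAX' else min(values)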

  10. Making Imperfect Decisions • Generating the complete game tree is intractable for most games • Alternative: • Cut off search • Apply some heuristic evaluation function to determine the quality of the nodes at the cutoff

  11. Evaluation Functions • Evaluation function needs to • Agree with the utility function on terminal states • Be quick to evaluate • Accurately reflect chances of winning • Example: material value of chess pieces • Evaluation functions are usually weighted linear functions
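
A weighted linear evaluation function just sums weights times feature values. The sketch below uses the standard chess material values as assumed weights; position.count(piece, side) is a hypothetical accessor.

    # Assumed material weights (pawn=1, knight=3, bishop=3, rook=5, queen=9).
    WEIGHTS = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9}

    def material_eval(position):
        """Weighted linear evaluation: sum of w_i * f_i(position).

        Each feature f_i is (my piece count - opponent piece count)
        for piece type i.
        """
        return sum(w * (position.count(p, 'me') - position.count(p, 'opponent'))
                   for p, w in WEIGHTS.items())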

  12. Cutting Off Search • Search to uniform depth • Use iterative deepening to search as deep as time allows (anytime algorithm) • Issues • quiescence needed • horizon problem
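
One way to realize the anytime behavior described above: run depth-limited search repeatedly with an increasing depth bound until the time budget runs out, keeping the last completed answer. The depth_limited_decision argument is a hypothetical stand-in for minimax with a cutoff test and an evaluation function at the frontier.

    import time

    def iterative_deepening_decision(game, state, time_budget, depth_limited_decision):
        """Anytime search: deepen until time runs out, return the last finished move."""
        deadline = time.monotonic() + time_budget
        best_move, depth = None, 1
        while time.monotonic() < deadline:
            # A deeper search may overrun the deadline slightly; a real engine
            # would also check the clock inside the search.
            best_move = depth_limited_decision(game, state, depth)
            depth += 1
        return best_move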

  13. Alpha-Beta Pruning • Concept: Avoid looking at subtrees that won’t affect the outcome • Once a subtree is known to be worse than the current best option, don’t consider it further

  14. General Principle • If a node has value n, but the player considering moving to that node has a better choice either at the node’s parent or at some higher node in the tree, that node will never be chosen. • Keep track of MAX’s best choice (α) and MIN’s best choice (β) and prune any subtree as soon as it is known to be worse than the current α or β value

  15. function Max-Value(state, game, α, β) returns the minimax value of state
        if Cutoff-Test(state) then return Eval(state)
        for each s in Successors(state) do
          α <- Max(α, Min-Value(s, game, α, β))
          if α >= β then return β
        end
        return α

      function Min-Value(state, game, α, β) returns the minimax value of state
        if Cutoff-Test(state) then return Eval(state)
        for each s in Successors(state) do
          β <- Min(β, Max-Value(s, game, α, β))
          if β <= α then return α
        end
        return β
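
The same two functions in Python, keeping the structure above; the cutoff test, evaluation function, and successor generator are assumed to be supplied by a hypothetical game object.

    def max_value(state, game, alpha, beta):
        """MAX's best obtainable value; prune once alpha >= beta."""
        if game.cutoff_test(state):
            return game.eval(state)
        for s in game.successors(state):
            alpha = max(alpha, min_value(s, game, alpha, beta))
            if alpha >= beta:
                return beta          # this subtree cannot affect the outcome
        return alpha

    def min_value(state, game, alpha, beta):
        """MIN's best obtainable value; prune once beta <= alpha."""
        if game.cutoff_test(state):
            return game.eval(state)
        for s in game.successors(state):
            beta = min(beta, max_value(s, game, alpha, beta))
            if beta <= alpha:
                return alpha
        return beta

    # Typical call at the root:
    # value = max_value(start, game, float('-inf'), float('inf'))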

  16. Effectiveness • Depends on the order in which siblings are considered • Optimal ordering would reduce nodes considered from O(b^d) to O(b^(d/2))--but that requires perfect knowledge • Simple ordering heuristics can help quite a bit

  17. Chance • What if we don’t know what the options are? • Expectiminimax uses the expected value for any node where chance is involved. • Pruning with chance is more difficult. Why?
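
A sketch of the extra case expectiminimax adds: at a chance node, back up the probability-weighted average of the children instead of a max or min. The interface (node_type, chance_outcomes returning probability/state pairs) is an assumption for illustration.

    def expectiminimax(game, state):
        """Like minimax, but chance nodes return an expected value."""
        if game.is_terminal(state):
            return game.utility(state)
        kind = game.node_type(state)          # 'MAX', 'MIN', or 'CHANCE'
        if kind == 'CHANCE':
            # Weight each possible outcome (e.g. a dice roll) by its probability.
            return sum(p * expectiminimax(game, s)
                       for p, s in game.chance_outcomes(state))
        values = [expectiminimax(game, game.result(state, a))
                  for a in game.actions(state)]
        return max(values) if kind == 'MAX' else min(values)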

  18. Imperfect Knowledge • What issues arise when we don’t know everything (as in standard card games)?

  19. State of the Art • Chess – Deep Blue and Fritz • Checkers – Chinook • Othello – Logistello • Backgammon – TD-Gammon (learning) • Go – Computers are very bad • Bridge

  20. What about the games we play?

  21. Knowledge • Knowledge Base • Inference mechanism (domain-independent) • Information (domain-dependent) • Knowledge Representation Language • Sentences (which are not quite like English sentences) • The KRL determines what the agent can “know” • It also affects what kind of reasoning is possible • Tell and Ask
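
The Tell/Ask interface can be pictured as a tiny class: Tell stores domain-dependent sentences, Ask runs the domain-independent inference mechanism over them. This is an illustrative skeleton, not the textbook agent.

    class KnowledgeBase:
        """Minimal Tell/Ask skeleton: sentences plus an inference mechanism."""

        def __init__(self):
            self.sentences = []          # domain-dependent information

        def tell(self, sentence):
            """Add a sentence (written in the KRL) to the knowledge base."""
            self.sentences.append(sentence)

        def ask(self, query):
            """Return True if the query follows from the KB.

            Placeholder inference: a simple membership check; a real agent
            would use a domain-independent inference procedure here.
            """
            return query in self.sentences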

  22. Getting Knowledge • We can TELL the agent everything it needs to know • We can create an agent that can “learn” new information to store in its knowledge base

  23. The Wumpus World • Simple computer game • Good testbed for an agent • A world in which an agent with knowledge should be able to perform well • The world contains a single wumpus (which cannot move), pits, and gold

  24. Wumpus Percepts • The wumpus’s square and squares adjacent to it smell bad. • Squares adjacent to a pit are breezy. • When standing in a square with gold, the agent will perceive a glitter. • The agent can hear a scream from anywhere in the cave when the wumpus dies. • The agent will perceive a bump if it walks into a wall. • The agent doesn’t know where it is.
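
These percepts are often packaged as a five-element vector (stench, breeze, glitter, bump, scream); the snippet below is one assumed way to represent it, not the book's code.

    from collections import namedtuple

    # One percept per time step: each field is True or False.
    Percept = namedtuple('Percept', ['stench', 'breeze', 'glitter', 'bump', 'scream'])

    # Example: the agent smells the wumpus and feels a breeze, nothing else.
    p = Percept(stench=True, breeze=True, glitter=False, bump=False, scream=False)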

  25. Wumpus Actions • Go forward • Turn left • Turn right • Grab (picks up gold in that square) • Shoot (fires an arrow forward--only once) • If the wumpus is in front of the agent, it dies. • Climb (leave the cavern--only good at the start square)

  26. Consequences • Entering a square containing a live wumpus is deadly • Entering a square containing a pit is deadly • Getting out of the cave with the gold is worth 1,000 points. • Getting killed costs 10,000 points • Each action costs 1 point

  27. Possible Wumpus Environment

  28. Knowledge Representation • Two sets of rules: • Syntax: determines what atomic symbols exist in the language and how to combine them into sentences • Semantics: Relationship between the sentences and “the world”--needed to determine truth or falsehood of the sentences
