
Presentation Transcript


  1. Introduction to Artificial Intelligence, Lecture 10: Machine Evolution II • Teaching Assistant: Henry Lo • henryzlo@gmail.com

  2. Genetic Programming • Instead of just varying a number of parameters, we can evolve complete programs (genetic programming). • Let us evolve a wall-following robot in a grid-space world. • The robot’s behavior is determined by a LISP function. • We use four primitive functions: • AND(x, y) = 0 if x = 0; else y • OR(x, y) = 1 if x = 1; else y • NOT(x) = 0 if x = 1; else 1 • IF(x, y, z) = y if x = 1; else z
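The lecture's programs are LISP expressions, but the semantics of the four primitives carry over directly to other languages. A minimal sketch in Python, transcribing the definitions above exactly:

```python
# The four GP primitives, transcribed from the slide's definitions.
# AND, OR, and IF deliberately return their second or third argument
# as-is, so they remain defined even when that argument is not 0 or 1.

def AND(x, y):
    return 0 if x == 0 else y

def OR(x, y):
    return 1 if x == 1 else y

def NOT(x):
    return 0 if x == 1 else 1

def IF(x, y, z):
    return y if x == 1 else z
```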

  3. Genetic Programming • The robot receives sensory inputs n, ne, e, se, s, sw, w, and nw. These inputs are 0 whenever the corresponding cell is free, otherwise they are 1. • The robot can move either north, east, south, or west. • In genetic programming, we must make sure that all syntactically possible expressions in a program are actually defined and do not crash our system.
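One way to meet that requirement is to encode each program as a nested tuple and evaluate it recursively: leaves are either sensor names (read from the current percept) or move strings, and every operator is total, so no expression can crash. This encoding is my own sketch, not the lecture's LISP setup:

```python
SENSORS = ('n', 'ne', 'e', 'se', 's', 'sw', 'w', 'nw')
MOVES = ('north', 'east', 'south', 'west')

def evaluate(tree, percept):
    """Evaluate a tuple-encoded program against a dict of 0/1 sensor readings."""
    if isinstance(tree, str):
        # Leaves: sensor names evaluate to the percept value, moves to themselves.
        return percept[tree] if tree in SENSORS else tree
    op = tree[0]
    if op == 'AND':
        x = evaluate(tree[1], percept)
        return 0 if x == 0 else evaluate(tree[2], percept)
    if op == 'OR':
        x = evaluate(tree[1], percept)
        return 1 if x == 1 else evaluate(tree[2], percept)
    if op == 'NOT':
        return 0 if evaluate(tree[1], percept) == 1 else 1
    if op == 'IF':
        cond = evaluate(tree[1], percept)
        return evaluate(tree[2] if cond == 1 else tree[3], percept)

# Example: go east when the northern cell is blocked, otherwise go north.
program = ('IF', 'n', 'east', 'north')
print(evaluate(program, {s: 0 for s in SENSORS}))   # -> 'north'
```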

  4. Genetic Programming • We start with a population of 5000 randomly created programs and let them perform. • We let the robot start ten times, each time starting in a different position. • Each time, we let the robot perform 60 steps and count the number of different cells adjacent to a wall that the robot visits. • There are 32 such cells, so our fitness measure ranges from 0 (lowest fitness) to 320 (highest fitness).
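This fitness measure translates almost line by line into code. In the sketch below, `sense`, `move_robot`, and `wall_adjacent` are hypothetical grid-world helpers, and `evaluate` and `MOVES` come from the sketch above:

```python
def fitness(program, world, starts, steps=60):
    """Sum over ten runs of the number of distinct wall-adjacent cells
    visited within 60 steps; each run contributes at most 32 cells,
    so the score ranges from 0 to 320."""
    total = 0
    for start in starts:                  # ten different starting positions
        pos, visited = start, set()
        for _ in range(steps):
            result = evaluate(program, sense(world, pos))
            if result in MOVES:           # non-move results leave the robot in place
                pos = move_robot(world, pos, result)
            if wall_adjacent(world, pos):
                visited.add(pos)
        total += len(visited)
    return total
```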

  5. Genetic Programming • Example of a perfect wall-following robot program in LISP:

  6. Genetic Programming • The best-performing program among the 5000 randomly generated ones:

  7. Genetic Programming • In generation i + 1, • 500 individuals are directly copied from generation i, and • 4500 are created by crossover operations between two parents chosen from the 500 winners. • In about 50 cases, mutation is performed.
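The slide leaves open exactly how parents and mutation targets are picked; assuming the 500 survivors are the fittest individuals and parents are drawn from them uniformly at random, one generation step might look like this (`crossover` and `mutate` are sketched after the next slide):

```python
import random

def next_generation(population, fitnesses):
    """500 copied winners, 4500 crossover children, roughly 50 mutations."""
    ranked = sorted(zip(population, fitnesses), key=lambda pf: pf[1], reverse=True)
    winners = [p for p, _ in ranked[:500]]
    children = [crossover(random.choice(winners), random.choice(winners))
                for _ in range(4500)]
    new_pop = winners + children
    for i in random.sample(range(len(new_pop)), 50):   # "about 50 cases"
        new_pop[i] = mutate(new_pop[i])
    return new_pop
```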

  8. Genetic Programming • Example of a crossover operation. • Mutation is performed by replacing a subtree of a program with a randomly created subtree.
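On the tuple encoding used above, crossover can swap a randomly chosen subtree of one parent for a randomly chosen subtree of the other, and mutation can replace a random subtree with a freshly generated one, exactly as described on the slide. A sketch, where `random_tree` stands for a hypothetical generator of random programs:

```python
import random

def subtree_paths(tree, path=()):
    """Yield every node position in a tuple-encoded tree as a path of child indices."""
    yield path
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], start=1):
            yield from subtree_paths(child, path + (i,))

def get_subtree(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def replace_subtree(tree, path, new):
    if not path:
        return new
    i = path[0]
    return tree[:i] + (replace_subtree(tree[i], path[1:], new),) + tree[i + 1:]

def crossover(a, b):
    """Child = parent a with a random subtree replaced by a random subtree of b."""
    pa = random.choice(list(subtree_paths(a)))
    pb = random.choice(list(subtree_paths(b)))
    return replace_subtree(a, pa, get_subtree(b, pb))

def mutate(tree):
    """Replace a random subtree with a randomly created subtree."""
    path = random.choice(list(subtree_paths(tree)))
    return replace_subtree(tree, path, random_tree())   # hypothetical generator
```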

  9. Genetic Programming • After six generations, the best program behaves like this:

  10. Genetic Programming • And after ten generations, we already have a perfect program (fitness 320):

  11. Genetic Programming • Here is a diagram showing the maximum fitness as a function of the generation number:

  12. Game Player Evolution • You could simulate an evolutionary process to improve your Isola playing algorithm. • The easiest way to do this would be to use evolutionary learning to find the optimal weight vector in your static evaluation function, i.e., optimal weighting for each evaluation feature that you compute. • For example, assume that you are using the features f1 (number of neighboring squares) and f2 (number of reachable squares). • In each case, you actually use the difference between the value for yourself and the value for your opponent.

  13. Game Player Evolution • Then you could use weights w1 and w2 to compute your evaluation function: • e(p) = w1·f1 + w2·f2 • So the performance of your algorithm will depend on the weights w1 and w2. • This corresponds to the example of the computer vision algorithm with two free parameters. • Thus you could use an evolutionary process to find the best values for w1 and w2, just like in that example.
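A sketch of that evaluation function, generalized to any number of weighted features; `features` stands for your own feature functions (each already returning the you-minus-opponent difference):

```python
def evaluate_position(p, weights, features):
    """Static evaluation e(p) = w1*f1(p) + w2*f2(p) + ..."""
    return sum(w * f(p) for w, f in zip(weights, features))
```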

  14. Game Player Evolution • But how can you determine which individuals survive and procreate? • Well, one possibility would be to hold a tournament in which all individuals compete (or many smaller tournaments), and only the best n individuals will reach the next generation, i.e., the next tournament. • The other individuals are deleted and replaced with new individuals that use weights similar to those of the winners. • This way you will evolve algorithms of better and better performance; in other words, you will approach the best values for w1 and w2.
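A hypothetical sketch of that loop: `tournament_scores` would play the individuals against one another and return one score per weight vector, and the deleted individuals are replaced with Gaussian perturbations of randomly chosen winners:

```python
import random

def evolve(population, generations=100, n_survivors=10, sigma=0.1):
    """Tournament selection over weight vectors, refilled with mutated winners."""
    for _ in range(generations):
        scores = tournament_scores(population)        # hypothetical helper
        ranked = sorted(zip(population, scores), key=lambda ws: ws[1], reverse=True)
        winners = [w for w, _ in ranked[:n_survivors]]
        offspring = [[wi + random.gauss(0, sigma) for wi in random.choice(winners)]
                     for _ in range(len(population) - n_survivors)]
        population = winners + offspring
    return population[0]    # best individual from the final tournament
```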

  15. Game Player Evolution • You could slightly modify the game package to implement this principle of evolution. • When you have obtained the best values for w1 and w2 (or in your case maybe w1, w2, …, w37), just transfer these values into your original program in the original game package. • Your program should now play significantly better than it did prior to its evolutionary improvement. • Try it out!

  16. Back to “Serious” Topics… • Knowledge Representation and Reasoning

  17. Knowledge Representation & Reasoning • Knowledge representation is the study of how knowledge about the world can be represented and what kinds of reasoning can be done with that knowledge. • We will discuss two different systems that are commonly used to represent knowledge in machines and perform algorithmic reasoning: • Propositional calculus • Predicate calculus

  18. Propositional Calculus • In propositional calculus, • features of the world are represented by propositions, • relationships between features (constraints) are represented by connectives. • Example: • LECTURE_BORING ∧ TIME_LATE → SLEEP • This expression in propositional calculus represents the fact that for some agent in our world, if the features LECTURE_BORING and TIME_LATE are both true, the feature SLEEP is also true.

  19. Propositional Calculus • You see that the language of propositional calculus can be used to represent aspects of the world. • When there are • a language, as defined by a syntax, • inference rules for manipulating sentences in that language, and • semantics for associating elements of the language with elements of the world, • then we have a system called logic.

  20. The Language • Atoms: • The atoms T and F and all strings that begin with a capital letter, for instance, P, Q, LECTURE_BORING, and so on. • Connectives: • ∨ “or” • ∧ “and” • → “implies” or “if-then” • ¬ “not”

  21. The Language • Syntax of well-formed formulas (wffs): • Any atom is a wff. • If ω1 and ω2 are wffs, so are • ω1 ∧ ω2 (conjunction) • ω1 ∨ ω2 (disjunction) • ω1 → ω2 (implication) • ¬ω1 (negation) • There are no other wffs.

  22. The Language • Atoms and negated atoms are called literals. • In ω1 → ω2, ω1 is called the antecedent, and ω2 is called the consequent of the implication. • Examples of wffs (sentences): • (P ∧ Q) → ¬P • P → ¬P • P ∨ ¬P → P • (P → Q) → (¬Q → ¬P) • ¬P • The precedence order of the above operators is ¬, ∧, ∨, →. For example, ¬P ∨ Q → R means ((¬P) ∨ Q) → R.

  23. Rules of Inference • We use rules of inference to generate new wffs from existing ones. • One important rule is called modus ponens, or the law of detachment. It is based on the tautology (P ∧ (P → Q)) → Q. We write it in the following way: • P • P → Q • _____ • ∴ Q • The two hypotheses P and P → Q are written in a column, and the conclusion below a bar, where ∴ means “therefore”.

  24. Rules of Inference • Modus tollens: ¬Q, P → Q ∴ ¬P • Addition: P ∴ P ∨ Q • Hypothetical syllogism: P → Q, Q → R ∴ P → R • Simplification: P ∧ Q ∴ P • Conjunction: P, Q ∴ P ∧ Q • Disjunctive syllogism: P ∨ Q, ¬P ∴ Q
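Each rule is sound because its hypotheses, conjoined, tautologically imply its conclusion, and this can be checked mechanically by enumerating all truth assignments. A small Python check (the encoding is mine, not part of the lecture):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def tautology(f, n_vars):
    """True iff f holds under every assignment of truth values."""
    return all(f(*vals) for vals in product([False, True], repeat=n_vars))

# Modus ponens: (P and (P -> Q)) -> Q
assert tautology(lambda p, q: implies(p and implies(p, q), q), 2)
# Modus tollens: ((not Q) and (P -> Q)) -> not P
assert tautology(lambda p, q: implies((not q) and implies(p, q), not p), 2)
# Addition: P -> (P or Q); Simplification: (P and Q) -> P
assert tautology(lambda p, q: implies(p, p or q), 2)
assert tautology(lambda p, q: implies(p and q, p), 2)
# Hypothetical syllogism: ((P -> Q) and (Q -> R)) -> (P -> R)
assert tautology(lambda p, q, r: implies(implies(p, q) and implies(q, r),
                                         implies(p, r)), 3)
# Disjunctive syllogism: ((P or Q) and (not P)) -> Q
assert tautology(lambda p, q: implies((p or q) and not p, q), 2)
```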

  25. Proofs • The sequence of wffs {ω1, ω2, …, ωn} is called a proof (or a deduction) of ωn from a set of wffs Δ iff (if and only if) each ωi in the sequence is either in Δ or can be inferred from one or more wffs earlier in the sequence by using one of the rules of inference. • If there is a proof of ωn from Δ, we say that ωn is a theorem of the set Δ. We use the following notation: • Δ ⊢ ωn • In this notation, we can also indicate the set of inference rules R that we use: • Δ ⊢R ωn

  26. Proofs • Example: • Given a set of wffs Δ = {P, R, P → Q}, the following sequence is a proof of Q ∧ R given the inference rules that we discussed earlier: • {P, P → Q, Q, R, Q ∧ R} • Tree representation (figure): P and P → Q yield Q by modus ponens; Q and R yield Q ∧ R by conjunction.
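This proof can also be found mechanically. In the toy backward-chaining checker below (my own encoding: implications as ('->', a, b), conjunctions as ('and', a, b)), a goal is derivable if it is a hypothesis, if both of its conjuncts are derivable (conjunction rule), or if it is the consequent of a hypothesis implication whose antecedent is derivable (modus ponens); cyclic implication chains would need extra loop protection:

```python
def derivable(goal, known):
    """Backward-chaining check using modus ponens and the conjunction rule."""
    if goal in known:
        return True
    if isinstance(goal, tuple) and goal[0] == 'and':
        return derivable(goal[1], known) and derivable(goal[2], known)
    return any(isinstance(w, tuple) and w[0] == '->' and w[2] == goal
               and derivable(w[1], known)
               for w in known)

delta = {'P', 'R', ('->', 'P', 'Q')}
assert derivable(('and', 'Q', 'R'), delta)   # Q and R is a theorem of delta
```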

  27. Semantics • In propositional logic, we associate atoms with propositions about the world. • We thereby specify the semantics of our logic, giving it a “meaning”. • Such an association of atoms with propositions is called an interpretation. • In a given interpretation, the proposition associated with an atom is called the denotation of that atom. • Under a given interpretation, atoms have values – True or False. We are willing to accept this idealization (otherwise: fuzzy logic).

  28. Semantics • Example: • “Gary is either intelligent or a good actor. • If Gary is intelligent, then he can count from 1 to 10. • Gary can only count from 1 to 2. • Therefore, Gary is a good actor.” • Propositions: • I: “Gary is intelligent.” • A: “Gary is a good actor.” • C: “Gary can count from 1 to 10.”

  29. Semantics • I: “Gary is intelligent.” A: “Gary is a good actor.” C: “Gary can count from 1 to 10.” • Step 1: ¬C (hypothesis) • Step 2: I → C (hypothesis) • Step 3: ¬I (modus tollens, steps 1 & 2) • Step 4: A ∨ I (hypothesis) • Step 5: A (disjunctive syllogism, steps 3 & 4) • Conclusion: A (“Gary is a good actor.”)
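The same argument can be checked semantically: enumerate all eight interpretations of I, A, and C and verify that A is true in every one that satisfies the three hypotheses:

```python
from itertools import product

# Hypotheses: not C;  I -> C (i.e. (not I) or C);  A or I.
for I, A, C in product([False, True], repeat=3):
    if (not C) and ((not I) or C) and (A or I):
        assert A   # every model of the hypotheses makes A true
print("Valid: A ('Gary is a good actor') follows.")
```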
