
From Reflex to Reason


Presentation Transcript


  1. From Reflex to Reason Rich Sutton AT&T Labs with thanks to Satinder Singh, Doina Precup, and Andy Barto

  2. Overall Goal A computational understanding of a broad span of the mind’s activities • what it computes • why it computes it At a high level, without • specifics of sensory and motor systems • specific representations and algorithms • neural implementations • language What does the mind do? Is there an overall, simple answer? (Marr’s 3 levels)

  3. Main Claims Prediction Semantics • Mind is about predictions • making predictions • discovering what predictions can be made • Knowledge is predictions • action-contingent and temporally-flexible predictions • agent-centric, grounded in experience from the bottom up • The mind’s ultimate goal is to make reward-maximizing decisions • but most of its effort is devoted to subgoal of prediction • A few simple mechanisms enable working flexibly with predictions • TD learning and Bellman backups

  4. Prediction Semantics [Figure: a new link from Y to the prediction of X joins the existing link from X to the Response] • A prediction is a signal with meaning • Knowing that one signal is a prediction of another enables it to do useful work for you • When something new predicts X, you know what to do • Prediction semantics constrains in two directions

  5. Outline/Steps • Reflexes and their conditioning • Learning to get reward • Planning, by mental simulation • Knowledge, as temporally flexible predictions • Reason, as flexible use of knowledge These together are much of what the mind does Can we explain them all in a uniform way?

  6. Pavlovian Conditioning, the Conditioning of Reflexes Almost any reflex can be conditioned: salivation, orienting, heart rate, blood pressure, gill withdrawal, nausea/taste aversion, fear, secondary reinforcers, CER (freezing, suppression) [Figure: a neutral stimulus (CS, a tone) precedes the US (eyeshock); before learning the eyeblink (UR) follows the US; after learning the eyeblink (CR) occurs to the CS alone, with no US] • Animal can be viewed as learning that the CS predicts the US • And then responding in anticipation • But why? Why should a prediction of the US produce the same response as the US?

  7. (Inadequate) Comp. Theories of CC • Instrumental theories -- the CR makes the US feel better • Works well for eyeblink, salivation, not for 2ndary reinforcers • Does not explain the similarity of CR and UR • Does not explain apparent conflict of CC and instrumental • Anticipation theories -- whatever you are going to do, CC causes you to do it earlier • Why earlier? Earlier is not always better! • How much earlier? CR tends to occur at time of US • Prediction theories -- CC is learning to predict the US • Works for fear, CER, 2ndary reinforcers • Does not explain response to CR or to UR • Explains “What” but not “Why”

  8. Pred Rep’n Theory of Conditioning [Figure: The reflex is not US→Response with a learnable CS→Response link. Rather, the reflex is Prediction-of-US→Response, with learnable links from both the CS and the US to the prediction (so USs could habituate!)]
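
To make the prediction-representation idea concrete, here is a minimal sketch (not Sutton's actual model; the names and parameter values are illustrative assumptions) in which the response is driven by a learned prediction of the US rather than by the US itself, with the CS-to-prediction link trained by a simple delta rule.

def conditioning_trials(n_trials=50, alpha=0.3):
    """Delta-rule sketch: a learnable CS -> prediction-of-US weight drives the CR."""
    w = 0.0                                  # CS -> prediction-of-US weight (starts neutral)
    for _ in range(n_trials):
        cs, us = 1.0, 1.0                    # CS presented, then followed by the US
        prediction = w * cs                  # prediction of the US, available before the US arrives
        cr = prediction                      # the conditioned response is generated from the prediction
        w += alpha * (us - prediction) * cs  # move the prediction toward the US (Rescorla-Wagner style)
    return w

print(conditioning_trials())                 # approaches 1.0: the CS alone comes to evoke the response

After training, presenting the CS alone yields a CR even with no US, which is the defining conditioning result; because the US's own link to the prediction is also learnable in the theory, URs could habituate as well.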

  9. Pred Rep’n Theory of Conditioning (2) [Figure: the US serves as both supervisory signal and cue; CS1, CS2, and the US all feed a prediction of the supervisory US, which drives the Response] • Consider an innate, learnable association US→Response • represents an innate guess, e.g., that a shock now is a good predictor of a shock coming up • but could be wrong • Predicts URs could habituate, change over time depending on their relationship to themselves • Long USs predict themselves; short USs are poor self-predictors

  10. Pred Rep’n Theory of Conditioning (3) • Implications for response topography/generation • predicts maximal CR at time of US onset (correct) • predicts CR onset only so early as to enable this • predicts threshold phenomena in CR production • predicts interaction of threshold with relative effectiveness of reinforced and unreinforced trials [Figure: response topography, with the CR rising to its maximum at the time of US onset]

  11. Outline/Steps • Reflexes and their conditioning • Learning to get reward • Planning, by mental simulation • Knowledge, as temporally flexible predictions • Reason, as flexible use of knowledge

  12. The Reward Hypothesis That purposes can be adequately represented as maximization of the cumulative sum of a scalar reward signal received from the environment • Is this reasonable? • Is it demeaning? • Is there no other choice? • It seems to be adequate and perhaps completely satisfactory

  13. Reinforcement Learning Theory: What to Compute and Why • Policies π: States → Pr(Actions) • Value Functions $V^\pi(s) = E\{\sum_{t=1}^{\infty} \gamma^{t-1}\, \mathrm{reward}_t \mid s_0 = s, \text{follow } \pi\}$ • 1-Step Models ... Predictions!
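
As a concrete reading of the value-function definition above, this minimal sketch (variable names are assumptions) computes the discounted return whose expectation, starting in s and following π, is V^π(s).

def discounted_return(rewards, gamma=0.9):
    """rewards[0] is reward_1, the reward received one step after the start state."""
    return sum(gamma**t * r for t, r in enumerate(rewards))

# Example: a single reward of 1 arriving three steps after the start, discounted twice.
print(discounted_return([0.0, 0.0, 1.0], gamma=0.9))   # 0.81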

  14. Honeybee Brain & VUM Neuron Hammer, Menzel

  15. The Acrobot Problem e.g., DeJong & Spong, 1994; Sutton, 1995 [Figure: a two-link pendulum with joint angles θ1 and θ2, a fixed base, and torque applied at the second joint; goal: raise the tip above the line] Minimum-Time-to-Goal: • 4 state variables: 2 joint angles, 2 angular velocities • CMAC of 48 layers • RL same as Mountain Car • Reward = -1 per time step

  16. Prediction Semantics of RL [Figure: representations of state and action feed, through learned links, a value (a prediction of reward); action selection picks the highest valued action through a fixed link] An action that predicts reward in a state... should to that extent be favored in that state

  17. Examples of Reinforcement Learning • Robocup Soccer Teams Stone & Veloso, Riedmiller et al. • World’s best player of simulated soccer, 1999; Runner-up 2000 • Inventory Management Van Roy, Bertsekas, Lee & Tsitsiklis • 10-15% improvement over industry standard methods • Dynamic Channel Assignment Singh & Bertsekas, Nie & Haykin • World's best assigner of radio channels to mobile telephone calls • Elevator Control Crites & Barto • (Probably) world's best down-peak elevator controller • Many Robots • navigation, bi-pedal walking, grasping, switching between skills... • TD-Gammon and Jellyfish Tesauro, Dahl • World's best backgammon player

  18. TD-Gammon Tesauro, 1992-1995 [Figure: a backgammon position feeds a neural network that outputs a value V; the TD error V_{t+1} - V_t trains the network; action selection is by 2-3 ply search] Start with a random network Play millions of games against itself Learn a value function from this simulated experience This produces arguably the best player in the world

  19. Prediction Semantics in TD-Gammon • A prediction of winning can substitute for winning • the central idea of Temporal-Difference (TD) learning • learning a prediction from a prediction! • also key idea of dynamic programming • and all heuristic search • In lookahead search, predictions are composed to produce longer-term predictions • key to all state-space planning • suggests prediction semantics is key element of reasoning
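
"Learning a prediction from a prediction" can be shown with a minimal TD(0) sketch on a toy corridor; the environment, names, and parameter values are illustrative assumptions, not TD-Gammon itself. Each state's value is moved toward the reward plus the value of the next state.

def td0_corridor(n_states=5, episodes=1000, alpha=0.1, gamma=1.0):
    """TD(0) on a corridor that always steps right; reward 1 only on reaching the end."""
    V = [0.0] * (n_states + 1)                # V[n_states] is the terminal state, value 0
    for _ in range(episodes):
        s = 0
        while s < n_states:
            s_next = s + 1
            r = 1.0 if s_next == n_states else 0.0
            V[s] += alpha * (r + gamma * V[s_next] - V[s])   # TD error: a prediction learns from a prediction
            s = s_next
    return V[:n_states]

print(td0_corridor())    # every state's value approaches 1: each predicts the eventual outcome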

  20. Outline/Steps • Reflexes and their conditioning • Learning to get reward • Planning, by mental simulation • Knowledge, as temporally flexible predictions • Reason, as flexible use of knowledge

  21. Planning as RL over Mental Simulation I.e., learning on model-generated experience: 1. Learn a model of the world’s transition dynamics: transition probabilities, expected immediate rewards (a “1-step model” of the world) 2. Use model to generate imaginary experiences: internal thought trials, mental simulation (Craik, 1943) 3. Apply RL as if experience had really happened [Diagram: Reward, Policy, Value Function, 1-Step Model]

  22. Dyna Algorithm [Diagram: experience feeds direct RL and model learning; the model feeds planning; both update the value/policy used for acting] 1. s ← current state 2. Choose an action, a, and take it 3. Receive next state, s’, and reward, r 4. Apply RL backup to s, a, s’, r (e.g., Q-learning update) 5. Update Model(s, a) with s’, r 6. Repeat k times: select a previously seen state-action pair s, a; s’, r ← Model(s, a); apply RL backup to s, a, s’, r 7. Go to 1
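
Below is a minimal Dyna-Q sketch of the algorithm above on a toy one-dimensional corridor; the corridor environment, the epsilon-greedy action choice, and all parameter values are illustrative assumptions.

import random
from collections import defaultdict

GOAL = 7                                   # states 0..7, goal at the right end
ACTIONS = (-1, +1)                         # step left or right

def step(s, a):
    s2 = min(max(s + a, 0), GOAL)
    return s2, (1.0 if s2 == GOAL else 0.0)

Q = defaultdict(float)                     # action values
model = {}                                 # Model(s, a) -> (s', r)
alpha, gamma, eps, k = 0.1, 0.95, 0.1, 10

for episode in range(200):
    s = 0
    while s != GOAL:
        # 1-3. choose an action (epsilon-greedy here), take it, receive s' and r
        a = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda a_: Q[(s, a_)])
        s2, r = step(s, a)
        # 4. direct RL: Q-learning backup on the real experience
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, a_)] for a_ in ACTIONS) - Q[(s, a)])
        # 5. model learning
        model[(s, a)] = (s2, r)
        # 6. planning: k backups on model-generated (imagined) experience
        for _ in range(k):
            (ps, pa), (ps2, pr) = random.choice(list(model.items()))
            Q[(ps, pa)] += alpha * (pr + gamma * max(Q[(ps2, a_)] for a_ in ACTIONS) - Q[(ps, pa)])
        s = s2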

  23. State-Space Search is based on a Prediction Semantics [Figure: a search tree; in seeking to evaluate this state, we use predictions from these]

  24. Prediction Semantics in Planning is just like in TD-Gammon • Predictions substitute for path outcomes • Predictions are composed to predict consequences of arbitrary sequences of action

  25. Naïve RL Theory of Reason Reason is RL on model-generated experience • Pro: • Very simple, uniform, general • Sufficient to reproduce, e.g., latent learning • Con: • Seems too low-level • Represents only a limited kind of knowledge [Diagram: Reward, Policy, Value Function, 1-Step Model]

  26. Outline/Steps • Reflexes and their conditioning • Learning to get reward • Planning, by mental simulation • Knowledge, as temporally flexible predictions • Reason, as flexible use of knowledge

  27. Experience A mind interacts with its world [Figure: the Agent sends actions to the World; the World returns observations] To produce two time series: Actions: $\ldots, a_{t-3}, a_{t-2}, a_{t-1}, a_t, ?$ Observations: $\ldots, o_{t-3}, o_{t-2}, o_{t-1}, o_t, ?$ Experience is the data; it is all we really know Experience provides something for knowledge to be about
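
A minimal sketch of experience as two interleaved time series; the random-acting agent and the trivial world below are placeholder assumptions standing in for any agent-world pair.

import random

def interact(steps=5):
    """Run an agent-world loop and return the two time series that make up experience."""
    actions, observations = [], []
    obs = 0                                # the most recent observation
    for _ in range(steps):
        a = random.choice([0, 1])          # the agent picks an action (here, at random)
        obs = obs + a                      # the world responds with the next observation
        actions.append(a)
        observations.append(obs)
    return actions, observations           # ..., a_{t-1}, a_t   and   ..., o_{t-1}, o_t

print(interact())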

  28. World Knowledge  Predictions • The world is a black box, known only by its I/O behavior (observations in response to actions) • Therefore, all meaningful statements about the world are statements about the observations it generates • The only observations worth talking about are future ones The only meaningful things to say about the world are predictions Therefore: Predictions = statements about the joint distribution of future observations and actions

  29. Non-predictive “Knowledge” • Mathematical knowledge, theorems and proofs • always true, but tell us nothing about the world • not world knowledge • Uninterpreted signals, e.g., useful representations • real and useful, but not by themselves world knowledge, only an aid to acquiring it • Knowledge of the past • Policies • could be viewed as predictions of value • but by themselves are more like uninterpreted signals Predictions capture “regular”, descriptive world knowledge

  30. Every Prediction must be Grounded in Two Directions [Figure: a prediction such as “if I do action 1, then obs 12 will be 0 for three steps” is grounded in two directions: recognition grounding (“symbol grounding”) in the history of actions & observations, and prediction grounding (“prediction semantics”) in the experience it predicts]

  31. Both Recognition and Prediction Grounding are Needed • “Classical” AI systems omit recognition grounding • e.g., “Tweety is a bird”, “John loves Mary” • sometimes called the “symbol grounding problem” • Modern AI systems tend to skimp on prediction grounding • supervised learning, Bayes nets, robotics… • It is not OK to leave prediction grounding to external, human observers • the information is just not in the machine • we don’t understand it; we haven’t done our job! • Yet this is such an appealing shortcut that we have almost always done it

  32. Sutton, Precup & Singh, AIJ 1999 Prediction Semantics formalized as Macro-Actions Let π: States → Pr(Actions) be an arbitrary policy Let β: States → Pr({0,1}) be a termination condition Then macro-action <π, β> is a kind of experiment – do π until β says “stop” – measure something about the resulting experience Suppose we measure – the state at the end of the experiment – the total reward during the experiment Then the macro prediction for <π, β> would predict Pr(end-state), E{total reward} given start-state Predictions of this form can represent a lot... ...possibly all world knowledge
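
A minimal sketch of a macro-action <π, β> as an experiment: run π until β says stop, then report the two measured quantities (end state and total reward). The environment interface (a step function returning the next state and reward) and all names are assumptions made for illustration.

import random
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class MacroAction:
    pi: Callable[[int], int]        # policy: state -> action
    beta: Callable[[int], float]    # termination condition: state -> probability of stopping

def run_experiment(macro: MacroAction,
                   step: Callable[[int, int], Tuple[int, float]],
                   s: int) -> Tuple[int, float]:
    """Do pi until beta says stop; measure the end state and the total reward."""
    total_reward = 0.0
    while True:
        a = macro.pi(s)
        s, r = step(s, a)
        total_reward += r
        if random.random() < macro.beta(s):
            return s, total_reward   # the macro prediction estimates the distribution of these

Repeated experiments from the same start state give samples of Pr(end-state) and E{total reward}, the two quantities the macro prediction is defined to predict.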

  33. Sutton, Precup, & Singh, 1999 Rooms Example [Figure: a four-room gridworld with hallways and goals G1, G2] 4 stochastic primitive actions: up, down, left, right (fail 33% of the time) 8 multi-step macro-actions (to each room's 2 hallways) [Figure: the policy of one macro-action, leading to a hallway]

  34. Planning with Macro-Predictions [Figure: planning backups using macro-actions]

  35. Learning Path-to-Goal with and without Hallway Macro-Actions

  36. Illustration: Reconnaissance Mission Planning (Problem) [Figure: mission map; options can be chosen from any state (~10^6) or from the sites only (6)] • Mission: Fly over (observe) most valuable sites and return to base • Stochastic weather affects observability (cloudy or clear) of sites • Limited fuel • Intractable with classical optimal control methods • Temporal scales: • Actions: which direction to fly now • Options: which site to head for • Options compress space and time • Reduce steps from ~600 to ~6 • Reduce states from ~10^11 to ~10^6

  37. Illustration: Reconnaissance Mission Planning (Results) • SMDP planner: • Assumes options followed to completion • Plans optimal SMDP solution • SMDP planner with re-evaluation: • Plans as if options must be followed to completion • But actually takes them for only one step • Re-picks a new option on every step • Static planner: • Assumes weather will not change • Plans optimal tour among clear sites • Re-plans whenever weather changes [Bar chart: Expected Reward/Mission, under High Fuel and Low Fuel, for the SMDP planner, the SMDP planner with re-evaluation of options on each step, and the static re-planner] Temporal abstraction finds better approximation than static planner, with little more computation than SMDP planner

  38. Outline/Steps • Reflexes and their conditioning • Learning to get reward • Planning, by mental simulation • Knowledge, as temporally flexible predictions • Reason, as flexible use of knowledge

  39. Reason Combining knowledge to obtain new knowledge, flexibly and generally We must be able to reason about any event as a possible (sub)goal, not just about rewards This is the final step

  40. Subgoals • Many natural macro-actions are goal-oriented • E.g., drive-to-work, open-the-door • So replicate planning in miniature for each subgoal • Macros can then be learned to achieve each subgoal • Many can be learned at once, independently • Solves classic problem of subgoal credit assignment • Solves psychological puzzle of goal-oriented action • Models of such macros are goal-oriented recognizers • correspond to classical “concepts” • e.g., a “chair” state is one where sitting is predicted to work (rooms example)
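
A minimal sketch of learning many subgoal value functions at once from the same stream of random behavior, using one Q-learning update per subgoal with a pseudo-reward of 1 for reaching that subgoal. The corridor world, the choice of Q-learning, and all parameter values are assumptions made for illustration, not the rooms-example machinery itself.

import random
from collections import defaultdict

N = 10                                        # corridor states 0..9
ACTIONS = (-1, +1)
SUBGOALS = (0, 4, 9)                          # several subgoals, each learned independently
alpha, gamma = 0.1, 0.95

Q = {g: defaultdict(float) for g in SUBGOALS} # one action-value table per subgoal

s = 5
for t in range(50000):
    a = random.choice(ACTIONS)                # behavior is selected totally at random
    s2 = min(max(s + a, 0), N - 1)
    for g in SUBGOALS:                        # one independent, off-policy update per subgoal
        if s2 == g:
            target = 1.0                      # pseudo-reward of 1 for reaching this subgoal
        else:
            target = gamma * max(Q[g][(s2, b)] for b in ACTIONS)
        Q[g][(s, a)] += alpha * (target - Q[g][(s, a)])
    s = s2

# max over a of Q[g][(s, a)] approaches gamma^(distance from s to g - 1) for each subgoal g,
# even though no behavior was ever directed at any subgoal.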

  41. Rooms Example: Independent learning of all 8 Subgoals [Plots: RMS error in subgoal values vs. time steps (up to 100,000), for the upper-hallway and lower-hallway subgoals (learned vs. ideal values) and for two subgoal state values; errors fall toward zero] All 8 hallway macros and predictions are learned accurately and efficiently while actions are selected totally at random

  42. Co-Existence of Hedonism and Exploration/Constructivism • The ultimate goal is still reward • Still one primary policy and set of values • But many other policies, values, and predictions are learned not directly in service of reward • Most time is spent in exploration and discovery, gaining knowledge rather than reward: • What possibilities does the world afford? • How can I control and predict it in a variety of ways? • What concepts can be learned that might help later? • From hedonism to curiosity and constructivism

  43. Main Claims Prediction Semantics • Mind is about predictions • making predictions • discovering what predictions can be made • Knowledge is predictions • action-contingent and temporally-flexible predictions • agent-centric, grounded in experience from the bottom up • The mind’s ultimate goal is to make reward-maximizing decisions • but most of its effort is devoted to subgoal of prediction • A few simple mechanisms enable working flexibly with predictions • TD learning and Bellman backups

  44. What is New? • The formalization of macro-actions • provide temporal abstraction • as well as action contingency (experiments) • mesh seamlessly with learning and planning methods • Using the goal-oriented machinery of RL • for knowledge construction • for perceptual concepts • Taking the discipline of predictive knowledge seriously • speaking only in terms of the subjective, experiential data

  45. Should Knowledge be Experiential? Allowing only Predictions in terms of Data? Loses: • Expressiveness • can’t talk about objects, space, people; no “is-a” or “part-of” • External (human) coherence • verbal labels, interpretability, explainability, calibration • the “shortcut” of entering knowledge directly into the agent Gains: • The knowledge will have meaning to the machine • It can be mechanically learned/verified/extended • It will be suited for general reasoning processes • composition and backup of predictions to yield new predictions
