
Toward Grounding Knowledge in Prediction for a Computational Theory of Artificial Intelligence

This talk explores the need for grounding knowledge in prediction to develop a comprehensive computational theory of artificial intelligence. It discusses the challenges of building large AI systems, the three levels of understanding proposed by Marr, and the role of experience and macro-predictions in shaping knowledge. The importance of both prior and posterior grounding in AI systems is emphasized.



Presentation Transcript


  1. Toward Grounding Knowledge in Prediction, or, Toward a Computational Theory of Artificial Intelligence
     Rich Sutton, AT&T Labs, with thanks to Satinder Singh and Doina Precup

  2. It’s Hard to Build Large AI Systems
     • Brittleness
     • Unforeseen interactions
     • Scaling
     • Requires too much manual complexity management
       • people must understand, intervene, patch and tune
       • like programming
     • Need more autonomy
       • learning, verification
       • internal coherence of knowledge and experience

  3. Marr’s Three Levels of Understanding
     • Marr proposed three levels at which any information-processing machine must be understood:
       • Computational Theory Level: what is computed and why
       • Representation and Algorithm Level
       • Hardware Implementation Level
     • We have little computational theory for Intelligence
       • Many methods for knowledge representation, but no theory of knowledge
       • No clear problem definition
       • Logic

  4. Reinforcement Learning provides a little Computational Theory
     • Policies (controllers): States → Pr(Actions)
     • Value Functions
     • 1-Step Models
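
As a concrete anchor for these three objects, here is a minimal tabular sketch in Python (the names and setting are illustrative assumptions; the talk itself contains no code):

```python
import random
from collections import defaultdict

# Policy: States -> Pr(Actions). Here, a uniform-random controller.
def uniform_policy(actions):
    def policy(state):
        return {a: 1.0 / len(actions) for a in actions}
    return policy

def sample_action(policy, state):
    dist = policy(state)
    return random.choices(list(dist), weights=list(dist.values()))[0]

# Value function: V(s), an estimate of expected future reward from state s.
V = defaultdict(float)

def td0_update(s, r, s_next, alpha=0.1, gamma=0.95):
    """One TD(0) backup: move V(s) toward r + gamma * V(s')."""
    V[s] += alpha * (r + gamma * V[s_next] - V[s])

# 1-step model: predicts (reward, next state) for each (state, action) pair.
model = {}   # (s, a) -> (r, s')
```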

  5. Outline of Talk
     • Experience
     • Knowledge = Prediction
     • Macro-Predictions
     • Mental Simulation
     …offering a coherent candidate computational theory of intelligence

  6. Experience
     • AI agent should be embedded in an ongoing interaction with a world
       (figure: World and Agent in a loop; actions flow from agent to world, observations from world to agent)
     • Experience = these two time series
     • Enables a clear definition of the AI problem (cf. textbook definitions):
       • Let {reward_t} be a function of {observation_t}
       • Choose actions to maximize total reward
     • Experience provides something for knowledge to be about
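
A minimal sketch of that interaction loop, with stub classes standing in for the real agent and world (the step/act interfaces are assumptions for illustration):

```python
import random

class World:
    """Stub black-box world: replies to each action with an observation."""
    def step(self, action):
        return random.randint(0, 1)          # o_{t+1}

class Agent:
    """Stub agent: here it just acts at random."""
    def act(self, obs):
        return random.choice([0, 1])         # a_t

def run(agent, world, reward_of, steps=1000):
    """Experience is exactly these two time series: actions and observations."""
    obs, total, history = 0, 0.0, []
    for t in range(steps):
        action = agent.act(obs)              # a_t, chosen from experience so far
        obs = world.step(action)             # o_{t+1}, the world's reply
        total += reward_of(obs)              # reward_t as a function of observation_t
        history.append((action, obs))
    return history, total

history, total = run(Agent(), World(), reward_of=float)
```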

  7. What is Knowledge?
     • Deny the physical world
     • Deny existence of objects, people, space…
     • Deny all non-answers, correspondence theories
     • All we really know about is our experience
     • Knowledge must be in terms of experience

  8. Grounded Knowledge
     “A is always followed by B”  (writing o_t for the observation and a_t for the action at time t)
     • A, B observations: if o_t = A then o_{t+1} = B
     • A, B predicates: if A(o_t) then B(o_{t+1})
     • Action conditioning: if A(o_t) and C(a_t) then B(o_{t+1})
     All of these are predictions
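
To make "all of these are predictions" concrete: each such claim can be scored directly against logged experience. A sketch, where the transition format and predicate names are assumptions:

```python
def prediction_accuracy(transitions, A, C, B):
    """Score the claim: if A(o_t) and C(a_t), then B(o_{t+1}).

    transitions is a list of (o_t, a_t, o_next) triples; A and B are
    predicates on observations, C is a predicate on actions.
    """
    hits = trials = 0
    for o, a, o_next in transitions:
        if A(o) and C(a):                      # the claim's condition fired
            trials += 1
            hits += B(o_next)                  # did the predicted outcome occur?
    return hits / trials if trials else None   # None: the claim was never tested

# Hypothetical usage: "a bright light plus a lever press is followed by food"
# acc = prediction_accuracy(log, A=is_bright, C=is_press, B=is_food)
```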

  9. World Knowledge = Predictions
     • The world is a black box, known only by its I/O behavior (observations in response to actions)
     • Therefore, all meaningful statements about the world are statements about the observations it generates
     • The only observations worth talking about are future ones
     Therefore: the only meaningful things to say about the world are predictions

  10. Non-predictive “Knowledge”
     • Mathematical knowledge, theorems and proofs
       • always true, but tell us nothing about the world
       • not world knowledge
     • Uninterpreted signals, e.g., useful representations
       • real and useful, but not by themselves world knowledge, only an aid to acquiring it
     • Knowledge of the past
     • Policies
       • could be viewed as predictions of value
       • but by themselves are more like uninterpreted signals
     Predictions capture “regular”, descriptive world knowledge

  11. Grounded Knowledge
     “A is always followed by B”
     • A, B observations: if o_t = A then o_{t+1} = B
     • A, B predicates: if A(o_t) then B(o_{t+1})
     • Action conditioning: if A(o_t) and C(a_t) then B(o_{t+1})
     These are all 1-step predictions. Still a pretty limited kind of knowledge: we can’t say anything beyond one step!

  12. Grounded Knowledge
     “A is always followed by B”
     • 1-step predictions, as before:
       • A, B observations: if o_t = A then o_{t+1} = B
       • A, B predicates: if A(o_t) then B(o_{t+1})
       • Action conditioning: if A(o_t) and C(a_t) then B(o_{t+1})
     • Macro-prediction: if A(o_t) and <arbitrary experiment> then B(<outcome>)
       • the experiment may be many steps long, and the outcome many steps later
       • the condition A( ) supplies the prior grounding; the tested outcome B( ) supplies the posterior grounding
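
One way to read this slide in code: a macro-prediction carries a condition (prior grounding), an experiment to run, and an outcome test on what actually happened (posterior grounding). A hedged sketch, reusing the assumed world.step interface from slide 6:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MacroPrediction:
    """if condition(o_t) and <experiment> then outcome(<result>)."""
    condition: Callable    # predicate on the starting observation
    experiment: Callable   # behavior: observation -> action, or None to stop
    outcome: Callable      # predicate on the experience the experiment produced

def check(pred, world, obs):
    """Returns None if the condition never fired, else whether the outcome held."""
    if not pred.condition(obs):       # prior grounding: the claim is tied to input
        return None
    trace = []
    while (action := pred.experiment(obs)) is not None:
        obs = world.step(action)      # may run for many steps
        trace.append((action, obs))
    return pred.outcome(trace)        # posterior grounding: tied to outcome data
```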

  13. Both Prior and Posterior Grounding are Needed
     • “Classical” AI systems omit prior grounding
       • e.g., “Tweety is a bird”, “John loves Mary”
       • sometimes called the “symbol grounding problem”
     • Modern AI systems tend to skimp on the posterior grounding
       • supervised learning, Bayes nets, robotics…
     • It is not OK to leave posterior grounding to external, human observers
       • the information is just not in the machine
       • we don’t understand it; we haven’t done our job!
     • Yet this is such an appealing shortcut that we have almost always taken it

  14. Outline of Talk
     • Experience
     • Knowledge = Prediction
     • Macro-Predictions
     • Mental Simulation
     …offering a coherent candidate computational theory of intelligence

  15. Macro-Predictions (Options), à la Sutton, Precup & Singh, 1999, et al.
     • Let π: States → Pr(Actions) be an arbitrary policy
     • Let β: States → Pr({0,1}) be a termination condition
     • Then <π, β> is a kind of experiment:
       • do π until β = 1
       • measure something about the resulting experience
     • Suppose we measure as the outcome:
       • the state at the end of the experiment
       • the total reward during the experiment
     • Then the macro-prediction for <π, β> would predict Pr(end-state) and E{total reward}, given the start-state
     • This is a very general, expressive form of prediction
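
Such a macro-prediction can in principle be estimated by simply running the experiment many times. A minimal Monte Carlo sketch (the env.step interface and tabular setting are assumptions; the 1999 paper also develops incremental ways of learning these models):

```python
import random
from collections import Counter

def run_option(env, state, pi, beta):
    """One experiment <pi, beta>: follow pi until beta terminates the option."""
    total = 0.0
    while True:
        state, reward = env.step(state, pi(state))   # assumed environment interface
        total += reward
        if random.random() < beta(state):            # terminate w.p. beta(state)
            return state, total

def option_model(env, start, pi, beta, n=10_000):
    """Macro-prediction for <pi, beta>: Pr(end-state) and E{total reward}."""
    ends, reward_sum = Counter(), 0.0
    for _ in range(n):
        end, total = run_option(env, start, pi, beta)
        ends[end] += 1
        reward_sum += total
    return {s: c / n for s, c in ends.items()}, reward_sum / n
```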

  16. Rooms Example (Sutton, Precup & Singh, 1999)
     (figure: gridworld of rooms, showing the policy of one option)

  17. Planning with Macro-Predictions

  18. Learning Path-to-Goal with and without Hallway Macros (Options)

  19. Mental Simulation
     • Knowledge can be gained from experience
       • by actually performing experiments
     • But knowledge can also be gained without overt experience
       • we call this thinking, reasoning, planning, cognition…
     • This can be done through “thought experiments”
       • internal simulation of experience
       • generated from predictive knowledge
       • subject to learning methods as before
     • Much thought can be achieved this way... (a sketch follows below)
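
This is the idea behind Sutton's Dyna architecture: apply the same learning rule to model-generated experience as to real experience. A minimal tabular sketch (the deterministic one-step model and all names are illustrative assumptions):

```python
import random
from collections import defaultdict

Q = defaultdict(float)    # action values Q(s, a)
model = {}                # learned 1-step model: (s, a) -> (reward, next state)

def learn(s, a, r, s2, actions, alpha=0.1, gamma=0.95):
    """One Q-learning backup; applied identically to real and simulated steps."""
    best = max(Q[(s2, b)] for b in actions)
    Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])

def think(n, actions):
    """Thought experiments: replay model-predicted transitions, no world contact."""
    for _ in range(n):
        s, a = random.choice(list(model))   # revisit a remembered situation
        r, s2 = model[(s, a)]               # the model's prediction, not a real step
        learn(s, a, r, s2, actions)
```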

  20. Illustration: Dynamic Mission Planning for UAVs
     • Mission: fly over (observe) the most valuable sites and return to base
     • Stochastic weather affects observability (cloudy or clear) of sites
     • Limited fuel
     • Intractable with classical optimal control methods
     • Temporal scales:
       • Tactics: which way to fly now
       • Strategies: which site to head for
     • Strategies compress space and time
       • Reduce the number of states from ~10^11 to ~10^6
       • Reduce tour length from ~600 to ~6
     • Reinforcement Learning with strategies and real-time control outperforms an optimal tour planner that assumes static weather
     (figures: mission map with per-site rewards and base; bar chart of expected reward per mission, at high and low fuel, comparing a static replanner, RL planning with strategies, and RL planning with strategies and real-time control)
     Barto, Sutton, and Moll, Adaptive Networks Laboratory, University of Massachusetts

  21. What to compute and Why
     (diagram: Reward, Policy, Value Functions, Knowledge/Predictions)
     The ultimate goal is reward, but our AI spends most of its time with knowledge

  22. A Candidate Computational Theory of Artificial Intelligence
     • The AI agent should be focused on finding general macro-predictions of experience
       • especially seeking predictions that enable rapid computation of values and optimal actions
     • Predictions and their associated experiments are the coin of the realm
       • they have a clear semantics, can be tested & learned
       • can be combined to produce other predictions, e.g. values
     • Mental Simulation (plus learning)
       • makes new predictions from old
       • the start of a computational theory of knowledge use

  23. Conclusions
     • World knowledge must be expressed in terms of the data
     • Such posterior grounding is challenging:
       • we lose expressiveness in the short term
       • we lose external (human) coherence and explainability
     • But it can be done step by step
     • And it brings palpable benefits:
       • autonomous learning/verification/extension of knowledge
       • autonomous complexity management due to internal coherence
       • knowledge suited to a general reasoning process: mental simulation
     • We must provide this grounding!
