Welcome

Presentation Transcript


  1. Welcome • Welcome to the Dagstuhl seminar on Plan Recognition • Please upload titles for the talks you want to give • We would like everyone to have an opportunity to give a short talk • We have some panel ideas, but these are open to reconsideration – contact me • We will be scheduling incrementally • Scheduled through tomorrow… • Schedules will be re-posted as updated…

  2. Panel ideas • Should there be a plan recognition competition? • Rational versus fallible agents • Activity recognition, behavior recognition, plan recognition, goal recognition • Oh, my! • Full and partial observability • Generative versus plan library approaches

  3. Schedule: Monday • AM: Welcome and survey • PM: • Jerry Hobbs: discourse and plan recognition • Short talks • George Ferguson • Matthew Stone • Chris Baker: plan recognition and psychology • Panel: a plan recognition competition? • Evening: get acquainted event

  4. Schedule: Tuesday • AM: • Kathy Laskey: probabilistic methods for PR • Short talks • Froduald Kabanza • Francis Bisson • Gita Sukthankar • PM: • Tom Dietterich: learning and plan recognition • Short talks • David Pattison • Nate Blaylock • Panel: Rational versus fallible agents?

  5. Plan Recognition: Historical Survey • Henry Kautz, University of Rochester • Robert P. Goldman, SIFT, LLC • Old school plan recognition… • Dagstuhl, April 2011

  6. Outline • Dimensions of the plan recognition problem • Historical survey of methods • Challenges

  7. Dimensions of Plan Recognition

  8. Keyhole, intended and adversarial plan recognition • Keyhole • Observer non-intrusively watches the agent • Determine how an agent’s actions contribute to achieving possible or stipulated goals • Model • World • Agent’s beliefs

  9. Keyhole, intended and adversarial plan recognition • Intended recognition • Agent acts in order to signal his beliefs and desires to other agents • Speech acts – inform, request, … • Discourse conventions • “The 3:15 train to Windsor?” • “Gate 10” [Allen & Perrault] • Symbolic actions • The Statue of Liberty • 9/11? • The agent may require a model of the observer.

  10. Keyhole, intended and adversarial plan recognition • Adversarial • Agent acts in order to manipulate the observer • Deception, bluffing, misdirection, etc. • Agent and observer will need sophisticated models of each other’s inferences

  11. Ideal versus fallible agents • Mistaken beliefs • John drives to Reagan, but flight leaves from Dulles. • The doctor bleeds the patient to cure disease. • Cognitive errors • Distracted by the radio, John drives past the exit. • Jill schedules a doctor’s appointment during her office hours. • Irrationality • John furiously blows his horn at the car in front of him.

  12. Output of plan recognition • Activity recognition • Simply identify a known behavior pattern • Goals • Recognize the objective, but not the specific recipes used • Plans • Next action the agent will take? • Best action to aid or counter the agent?

  13. Output of plan recognition: likelihood • Likelihood… • Most likely interpretation? • Distribution over plans and goals? • The above have subtly different strengths and weaknesses… • Most critical plan or goal?

  14. Richness of plans • Are actions atomic? • Or do they have parameters? • Structure (e.g., cases)? • Do plans have structure and parameters? • Coreference? • The patient of the plan will be the destination of step one and the patient of step two… • Are there plan libraries at all?

  15. Other dimensions • Reliable versus unreliable observations • “There’s an 80% chance John drove to Dulles.” • Open versus closed worlds • Fixed plan library? • Fixed set of goals? • Fixed set of entities? • Metric versus non-metric time • John enters a restaurant and leaves 1 hour later. • John enters a restaurant and leaves 5 minutes later. • Single versus multiple ongoing plans • “White knights” • Static versus evolving set of intentions • Abandoning goals: I was going to drive to the store, but the weather was too bad. • Reacting to opportunities: I was going by the playroom on the way from the laundry, so I picked up the toys.

  16. Dimensions

  17. METHODS

  18. Earliest work • Generally in service of language understanding • Often narrative understanding • Understanding indirect speech acts • Allen & Perrault, “Analyzing Intention in Utterances,” AI, 1980 • Rich vein of work using plan recognition in dialog understanding and IUI • Will be hearing more from George Ferguson later today! • Methodologically: Mostly shared early enthusiasm for rule-based systems

  19. Hypothesize & Revise • The Plan Recognition Problem, C. Schmidt, 1978 • Related work from Yale AI Lab: Cullingford’s Script Applier Mechanism, Wilensky’s PAM, etc., 1978 • Charniak, Ms. Malaprop, 1978: frame-based and used a TMS • Based on psychological theories of human narrative understanding • Mention of objects suggests hypotheses • Pursue a single hypothesis until matching fails

  20. Closed-world reasoning • A Formal Theory of Plan Recognition and its Implementation, Henry Kautz, 1991 • Infers the minimum set(s) of independent plans that entail the observations • Observations may be incomplete • Infallible agent • Complete plan library • Limited to pasta preparation
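
To make the minimum-cover idea concrete, here is a toy sketch (not Kautz's circumscriptive formalization): given a small, assumed-complete plan library in which each plan entails a set of observable steps, brute-force search returns the smallest sets of plans that jointly explain all observations. All plan and step names below are invented.

```python
from itertools import combinations

# Hypothetical toy plan library: each top-level plan entails a set of observable steps.
PLAN_LIBRARY = {
    "make_pasta":    {"boil_water", "add_noodles", "make_sauce"},
    "make_tea":      {"boil_water", "steep_tea"},
    "clean_kitchen": {"wash_pots", "wipe_counter"},
}

def minimal_explanations(observations):
    """Return all minimum-cardinality sets of plans whose steps jointly cover
    the observations (closed world: the library is assumed complete)."""
    plans = list(PLAN_LIBRARY)
    for size in range(1, len(plans) + 1):
        covers = [
            set(combo)
            for combo in combinations(plans, size)
            if observations <= set().union(*(PLAN_LIBRARY[p] for p in combo))
        ]
        if covers:          # stop at the smallest size that explains everything
            return covers
    return []               # no combination of known plans entails the observations

print(minimal_explanations({"boil_water", "steep_tea"}))   # [{'make_tea'}]
print(minimal_explanations({"boil_water", "wash_pots"}))   # two two-plan explanations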

  21. Parsing • Vilain (1990) used parsing results to characterize the computational complexity of plan recognition • There were earlier attempts to parse plans • Parsing techniques closely related to closed-world reasoning (built on Kautz and Allen) • Find an explanation that covers all of the observations • Parsing techniques deal poorly with partial ordering, worse with interleaving • Leads to: • Later work on stochastic parsing (Pynadath and Wellman) • Attempts to exploit exotic parsing techniques (Geib)
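
A minimal sketch of the grammar view, assuming a toy library in which every goal expands into totally ordered action sequences: a goal stays a candidate as long as the observed actions form a prefix of one of its expansions. It deliberately ignores partial ordering and interleaving, which is exactly where parsing approaches struggle.

```python
# Toy "grammar": each goal expands into one or more totally ordered action sequences.
GRAMMAR = {
    "MakeTea":   [["boil_water", "steep_tea", "pour_tea"]],
    "MakePasta": [["boil_water", "add_noodles", "drain_pot"]],
}

def goals_consistent_with(observed):
    """Return goals with at least one expansion whose prefix matches the observations."""
    return [
        goal
        for goal, expansions in GRAMMAR.items()
        if any(seq[:len(observed)] == observed for seq in expansions)
    ]

print(goals_consistent_with(["boil_water"]))               # ['MakeTea', 'MakePasta']
print(goals_consistent_with(["boil_water", "steep_tea"]))  # ['MakeTea']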

  22. Abduction • Reason from effect to cause (C.S. Peirce) • Explanation • Diagnosis • People: • Charniak • Hobbs et al., TACITUS • Leads to interest in Bayes nets

  23. Bayes Nets • DAG-structured models of probability distributions • Came to the fore for diagnostic applications • Challenge: static Bayes nets for complex domains can be extremely large • [Slide figure: example network with Raining and Sprinkler as parents of Grass wet]
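
For the network pictured on the slide, a worked example with made-up probabilities shows the kind of diagnostic query a static Bayes net answers, here P(Raining | Grass wet) by brute-force enumeration of the joint distribution.

```python
# Made-up CPTs for the Raining/Sprinkler/Grass-wet example on the slide.
P_RAIN = 0.2
P_SPRINKLER = 0.3
P_WET = {                      # P(grass wet | raining, sprinkler on)
    (True, True): 0.99, (True, False): 0.90,
    (False, True): 0.80, (False, False): 0.0,
}

def p_rain_given_wet():
    """P(Raining | Grass wet) by enumerating the full joint distribution."""
    joint = {
        (rain, spr): (P_RAIN if rain else 1 - P_RAIN)
                     * (P_SPRINKLER if spr else 1 - P_SPRINKLER)
                     * P_WET[(rain, spr)]
        for rain in (True, False) for spr in (True, False)
    }
    p_wet = sum(joint.values())
    return (joint[(True, True)] + joint[(True, False)]) / p_wet

print(f"P(Raining | Grass wet) = {p_rain_given_wet():.3f}")   # about 0.491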

  24. Bayes Nets • Knowledge Based Model Construction: dynamically build Bayes nets showing how plans explain actions • Multiple goals • Abstraction hierarchies • Equality reasoning for coreference • Poor treatment of time • Example: “Jack went to the liquor store.” Was he shopping? • “A Bayesian Theory of Plan Recognition,” Charniak and Goldman, AIJ, 1993 • “Interpretation as Abduction,” Hobbs, Stickel, Martin & Edwards, Proc. ACL, 1988
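
A minimal sketch of the model-construction step only, using an invented "explains" relation in the spirit of the liquor-store example: each observation pulls in the plan hypotheses that could explain it, yielding a network skeleton built on demand (CPTs and inference are omitted).

```python
# Toy "explains" relation: observed action -> plans that could explain it (invented names).
EXPLAINS = {
    "go_to_liquor_store": ["shopping", "robbing"],
    "point_gun_at_clerk": ["robbing"],
}

def build_network(observations):
    """Construct a Bayes-net skeleton on demand: a node for each observation
    and for each plan that could explain it, with plan -> action edges."""
    nodes, edges = set(), set()
    for obs in observations:
        nodes.add(obs)
        for plan in EXPLAINS.get(obs, []):
            nodes.add(plan)
            edges.add((plan, obs))
    return nodes, edges

nodes, edges = build_network(["go_to_liquor_store", "point_gun_at_clerk"])
print(sorted(nodes))   # both plan hypotheses and both observed actions appear
print(sorted(edges))   # 'robbing' ends up explaining both observations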

  25. More on Bayes net methods • Laskey and her colleagues have worked on military domains • Further developed KBMC techniques (e.g. query completeness); coreference, identity uncertainty • Many related techniques • E.g., Hobbs et al. cost-based abduction • ATMSes (d’Ambrosio, Provan, Charniak & Goldman) • Horn logic (Poole)

  26. Pending sets • Explicitly models the agent’s “plan agenda” using Poole’s “probabilistic Horn abduction” rules • Bridge between Bayes net and HMM frameworks • Handles multiple concurrent interleaved plans & negative evidence • Number of different possible pending sets can grow exponentially • “A New Model of Plan Recognition,” Goldman, Geib, and Miller, 1999 • “A probabilistic plan recognition algorithm based on plan tree grammars,” Geib and Goldman, AIJ, 2009 • Pending(P’,T+1) ← Pending(P,T), Leaves(L), Progress(L, P, P’, T+1) • Happen(X,T+1) ← Pending(P,T), X in P, Pick(X,P,T+1)
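
The rules above can be read as a generate-and-advance process. Below is a toy simulation of the pending-set idea, assuming totally ordered plans (the real model works over plan tree grammars): at each step the pending set holds the next enabled action of every active plan, and the agent executes one of them, so concurrent plans interleave.

```python
import random

# Toy, totally ordered plans; names are invented for illustration only.
PLANS = {
    "make_tea":   ["boil_water", "steep_tea", "pour_tea"],
    "do_laundry": ["load_washer", "run_washer", "fold_clothes"],
}

def simulate(active_plans, steps=4, seed=0):
    """At each step, compute the pending set (next enabled action of every
    active plan) and let the agent pick one of the pending actions."""
    rng = random.Random(seed)
    progress = {p: 0 for p in active_plans}          # index of next step per plan
    for _ in range(steps):
        pending = {p: PLANS[p][i] for p, i in progress.items() if i < len(PLANS[p])}
        if not pending:
            break
        plan = rng.choice(sorted(pending))           # agent picks one pending action
        print(f"happen: {pending[plan]:<13} pending set was {sorted(pending.values())}")
        progress[plan] += 1

simulate(["make_tea", "do_laundry"])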

  27. Version Space Algebra • A sound and fast goal recognizer Lesh & Etzioni, IJCAI 1995 • Programming by Demonstration Using Version Space Algebra Lau, Wolfman, Domingos, Weld. • Related to later work on plan-recognition through planning • Recognizes novel plans • Complete observations
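
A minimal sketch in the spirit of pruning-based goal recognition, not Lesh & Etzioni's actual version-space construction: start with all candidate goals and intersect away any goal the latest observed action could not serve. Goal names and the relevance relation are invented.

```python
# Toy candidate goals and an action-relevance relation (both invented for illustration).
CANDIDATE_GOALS = {"buy_milk", "mail_letter", "rob_bank"}
RELEVANT = {                              # action -> goals that action could serve
    "drive_to_store":      {"buy_milk", "rob_bank"},
    "pick_up_milk_carton": {"buy_milk"},
}

def recognize(observations):
    """Keep only the goals that every observed action is relevant to."""
    consistent = set(CANDIDATE_GOALS)
    for action in observations:
        consistent &= RELEVANT.get(action, set())
    return consistent

print(recognize(["drive_to_store"]))                         # buy_milk and rob_bank remain
print(recognize(["drive_to_store", "pick_up_milk_carton"]))  # only buy_milk survives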

  28. CHALLENGES

  29. Evaluation • Ground truth • Difficult to get labeled data • Epistemic question: do our proposed labelings correspond to any real ground truth? • Prediction tasks • Next action? • Future action? • Good choice of assistive action? Countermeasure? • Can prediction act as proxy for ground truth?

  30. Epistemic question • What is the status of the recipes that we postulate as explanations for actions? • Are they taken as being real in some sense? • Corresponding to mental contents? • Identified regularities that really exist in the world? • Data structures that just exist for our convenience?

  31. Computational difficulties • Computational complexity • Theoretical results • Practical results • Challenges from domains • Some domains inherently ambiguous • Adversarial reasoning • Do we need game-theoretic reasoning? • Cooperative as well as adversarial

  32. Plan libraries • Engineered? • Learned? • Something in between? • Learned ones often seem impoverished • Engineering seems impossible!

  33. Learning • Structural learning • Learn the contents of plan libraries (in one form or another) • Parameter learning • Adjust parameters of known libraries • Both offer challenges related to those of evaluation • Plan recognition may be done in service of learning, as well as the other way around. • Infer goals to learn novel recipes

  34. Imperfections • Imperfect agents • Imperfect information • Imperfect reasoning • Imperfect task performance • Challenging for non-empirical algorithms • Imperfect observations • Imperfect models • Including seemingly-irrelevant actions

  35. User models • In many domains, the behaviors exhibited are not just a function of the actions, goals, and plans, but of agent characteristics as well. • Developing clean ways to combine agent-dependent and agent-independent information is a challenge going forward. • Often per-agent training is unacceptable.

  36. Sensing • In many cases it is difficult to sense the agents’ actions: • Labeling actions in primitive sensor data • Vision • Network packets • Linguistic utterances • Hardware/software hybrid systems • E.g., an oil refinery: the user can go out and use a wrench unobserved • Conventional software • Even Horvitz et al. report difficulties “seeing” actions of Microsoft Office users • Mixed streams • Individual actions in network packet streams

  37. Coreference and quantification • In some domains, object identity, object permanence, and the number of agents are not simply handed to us. • Story understanding • Military situation interpretation • Identity hypotheses enter into plan recognition

  38. Anomaly detection • Often appealed to as a solution for detecting some phenomenon that is difficult to model: • Intrusion behavior in computer security • Terrorist behavior in tracking and camera data • Dementia-induced behavior in tracking elderly subjects • Accuracy requires deep understanding of the models’ properties • Stationarity (often violated in computer security) • “Size” and “shape” of normal behaviors • As always, it’s hard to get something for nothing.
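
A minimal sketch of the simplest kind of anomaly detector, a threshold over one behavioral feature with made-up numbers; it illustrates why the approach needs the understanding listed above, since the flag carries no explanation and breaks if the baseline is non-stationary.

```python
import statistics

# Made-up baseline for a single behavioral feature (e.g., logins per hour).
baseline = [12, 15, 11, 14, 13, 16, 12, 15]
mu, sigma = statistics.mean(baseline), statistics.pstdev(baseline)

def is_anomalous(value, k=3.0):
    """Flag values more than k standard deviations from the baseline mean."""
    return abs(value - mu) > k * sigma

print(is_anomalous(14))   # False: within the normal range
print(is_anomalous(60))   # True: flagged, but the flag says nothing about *why*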

  39. The Role of State • Many (but not all) plan recognition systems represent only the state of the planning agent. • The state of the environment is modeled implicitly, if at all.

  40. Groups • Teamwork • Friendly: recognize teammates’ intentions to coordinate and aid • Hostile: recognize opponents’ intentions to hinder and obstruct • Role recognition

  41. Hypothesis retrieval • Some early work assumed that there were enough candidate hypotheses that retrieval could be an issue

  42. Predictive and explanatory inference • A lot of concern in early work about combining top-down and bottom-up inference

  43. Actions with weak diagnostic power • E.g., computer security • We would desperately like to know the attacker’s motivations • But what do we do with • Get access to the target • Gain administrator privileges on the target…

  44. Coffee and then Henry's turn…
