
Thinking about Evaluation and Corpora for Plan Recognition


Presentation Transcript


1. Thinking about Evaluation and Corpora for Plan Recognition
Nate Blaylock
Florida Institute for Human and Machine Cognition (IHMC), Ocala, Florida
blaylock@ihmc.us

2. Plan Recognition Evaluation
• Extrinsic (Tom Dietterich's comment)
• Online
  • prediction after each observation
  • precision/recall
  • ability to predict "don't know"
• Offline
  • predict the right answer for the session
  • convergence
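A minimal sketch of how the online metrics above might be scored, assuming one prediction (or an explicit "don't know") per observation; the function and goal names are illustrative and not part of the talk:

# Hypothetical scoring sketch: precision is measured over the predictions the
# recognizer actually committed to, recall over all observations in the session.

DONT_KNOW = None

def online_precision_recall(predictions, true_goal):
    """predictions: one predicted goal (or DONT_KNOW) per observation."""
    attempted = [p for p in predictions if p is not DONT_KNOW]
    correct = sum(1 for p in attempted if p == true_goal)
    precision = correct / len(attempted) if attempted else 0.0
    recall = correct / len(predictions) if predictions else 0.0
    return precision, recall

# Example session: six observations, two abstentions, three correct predictions.
preds = [DONT_KNOW, "other-goal", DONT_KNOW,
         "target-goal", "target-goal", "target-goal"]
print(online_precision_recall(preds, "target-goal"))  # (0.75, 0.5)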

3. More Evaluation
• How early in the session do we get it?
  • convergence point (Lesh: "work saved")
• Partial results (often enough)
  • lower subgoals in the HTN plan
  • more abstract (subsuming) goals
  • schema only, or only some parameters
• N-best prediction
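A sketch of a convergence-point and "work saved" calculation under one plausible reading of Lesh's measure (the exact definitions used in the talk may differ); all names are illustrative:

# Hypothetical sketch: the convergence point is the observation after which the
# recognizer's N-best prediction contains the true goal and never loses it again;
# "work saved" is then the fraction of the session still remaining at that point.

def convergence_point(nbest_predictions, true_goal):
    """nbest_predictions: one N-best list of goals per observation.
    Returns the 1-based observation index of convergence, or None."""
    cp = None
    for i, nbest in enumerate(nbest_predictions, start=1):
        if true_goal in nbest:
            if cp is None:
                cp = i                 # candidate convergence point
        else:
            cp = None                  # prediction lost the goal; reset
    return cp

def work_saved(nbest_predictions, true_goal):
    cp = convergence_point(nbest_predictions, true_goal)
    if cp is None:
        return 0.0
    return (len(nbest_predictions) - cp) / len(nbest_predictions)

nbest = [["goal-A", "goal-B"], ["goal-B", "goal-C"], ["goal-B"], ["goal-B"]]
print(convergence_point(nbest, "goal-B"))  # 1
print(work_saved(nbest, "goal-B"))         # 0.75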

4. Example: Results on Monroe
[results chart comparing 1-best and 2-best prediction; figure not reproduced in the transcript]

5. Plan Corpora: Types
• Unlabeled: sequence of actions taken, e.g.,
  • Unix commands (Davidson and Hirsh '98)
  • also GPS data (e.g., Patterson et al. 2003)
• Goal-labeled: actions + top-level goal(s), e.g.,
  • MUD domain (Albrecht et al. '98)
  • Unix/Linux (Lesh '98; Blaylock and Allen 2004)
    • Linux Plan Corpus available online

6. Plan Corpora: Types (2)
• Plan-labeled: actions + hierarchical plan
  • Monroe Plan Corpus (Blaylock and Allen 2005), available online
• (future?) Problem-solving labeled
  • action failure, replanning, goal abandonment, ...
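One possible way to represent the three corpus types as simple records, purely for illustration; the field names here are assumptions and this is not the actual file format of the Linux or Monroe Plan Corpus:

# Hypothetical data layout: unlabeled (actions only), goal-labeled (actions plus
# top-level goals), and plan-labeled (actions plus a hierarchical plan tree).

from dataclasses import dataclass, field

@dataclass
class PlanNode:
    """A node in an HTN-style plan tree: a goal or subgoal with its expansion."""
    schema: str
    children: list = field(default_factory=list)   # empty for primitive actions

@dataclass
class UnlabeledSession:
    actions: list                                  # observed action sequence

@dataclass
class GoalLabeledSession(UnlabeledSession):
    goals: list = field(default_factory=list)      # top-level goal schema(s)

@dataclass
class PlanLabeledSession(GoalLabeledSession):
    plan: PlanNode = None                          # full hierarchical plan

session = PlanLabeledSession(
    actions=["cd /tmp", "tar xzf pkg.tgz", "make"],
    goals=["install-package"],
    plan=PlanNode("install-package",
                  [PlanNode("unpack", [PlanNode("cd /tmp"), PlanNode("tar xzf pkg.tgz")]),
                   PlanNode("build", [PlanNode("make")])]))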

7. Creating Plan Corpora (from humans)
Human annotation of everything, OR:
• Action sequence: record observations directly
• Top-level goal(s):
  • idea 1: environment where goal achievement is observable (e.g., MUD)
  • idea 2: controlled environment where goal is known a priori (e.g., Unix/Linux)
• Plan-labeled:
  • annotate with an existing plan recognizer (Bauer '96)
  • may not apply to all domains

8. Generating Artificial Corpora (Blaylock and Allen, 2005)
• Randomized AI planner (SHOP2)
• Model domain for planner (HTN)
• For each desired plan session:
  • stochastically generate goal(s)
  • stochastically generate start state
  • find plan using planner
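A sketch of the generation loop described on this slide; sample_goals, sample_start_state, and htn_plan are hypothetical placeholders standing in for the randomized SHOP2 planner and the hand-built HTN domain model used in the actual work:

# Hypothetical corpus-generation loop: stochastically pick goals and a start
# state, call the planner, and keep the resulting plan session if it is solvable.

import random

def generate_corpus(domain, num_sessions, sample_goals, sample_start_state,
                    htn_plan, seed=0):
    rng = random.Random(seed)
    corpus = []
    for _ in range(num_sessions):
        goals = sample_goals(domain, rng)          # stochastically generate goal(s)
        state = sample_start_state(domain, rng)    # stochastically generate start state
        plan = htn_plan(domain, state, goals)      # find a plan using the planner
        if plan is not None:                       # keep only solvable sessions
            corpus.append({"goals": goals, "start_state": state, "plan": plan})
    return corpus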

9. Using the Method: The Monroe Corpus
• Emergency planning domain
  • 10 top-level goal schemas
  • 46 methods (recipes)
  • 30 operators (subgoals/actions)
  • average depth to action: 3.8
• 5000 plan sessions generated in less than 10 minutes (a plan-labeled corpus)
• Download at http://cs.rochester.edu/research/speech/monroe-plan/

10. Future Directions
• Problem-solving labeled corpus
  • similar method to Monroe
  • build a stochastic agent to do problem solving in the domain, with plan monitoring, replanning, goal abandonment, etc.
  • label steps where problem-solving behavior happened
• cf. (Rosario, Oliver, and Pentland, 1999)
