
Softbot Planning






Presentation Transcript


  1. Softbot Planning Christophe Bisciglia 5/16/2003

  2. Agenda • Issues with Softbot Planning • Representation (KPC & SADL) • PUCCINI (Observation/Verification Links) • What’s Next?

  3. Challenges to Softbots • Can traditional planners be used? • How do you represent information? • Is it complete? • Uncertainty • Can it be modeled Probabilistically? • Should it be? • What are “actions” and “goals” for Softbots? • How do you ensure sanity (reasonable plans)?

  4. Motivating Examples • Personal Agents • Find me a few recent articles on “The Matrix Reloaded” • Find all .java files under my home directory, compile them, and make a JAR • Larger Scale Agents • Comparison Shopping Agents • Intelligent Spiders

  5. A Softbot’s World • Can you make Open/Closed world assumptions? • How can you describe information? • Bounded? • Complete? • Correct?

  6. Given unbounded, incomplete information • How do you figure out “the truth”? • What assumptions can be made about quality? • Is assuming truth reasonable? • How does assuming incomplete, but correct information limit domains? • Good/Bad examples

  7. Local Closed World Information (LCW) • Make the Closed World Assumption about local areas of the world • How can this idea be used to determine “Ф is F”? • When is something still unknown?

  8. LCW - Formally • Set of Ground Literals DM • LCW Formulas DF • Of the form LCW(Ф) • E.g.: LCW(parent.dir(f, /tex)) • Means DM contains all files in /tex • Ф(θ) is the LCW formula Ф with the variable substitution θ applied • Ф(θ) ∈ DM → Truth-Value(Ф(θ)) ∈ {T, F} • Ф(θ) ∉ DM → Truth-Value(Ф(θ)) ∈ {F, U} • Given Ф(θ) ∉ DM: LCW(Ф) ∈ DF → Ф(θ) is F • LCW(Ф) ∉ DF → Ф(θ) is U
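The truth-value rules on slide 8 can be sketched in Python. Everything here is my own illustration, not the softbot's actual implementation: literals are encoded as tuples, an LCW formula is a tuple pattern with `None` standing in for a variable, and the function names are invented.

```python
def lcw_covers(pattern, literal):
    """A pattern covers a literal if each non-None element matches
    (None acts as an unbound variable)."""
    return (len(pattern) == len(literal) and
            all(p is None or p == l for p, l in zip(pattern, literal)))

def truth_value(ground_literal, DM, DF, covers):
    """Return 'T', 'F', or 'U' for a ground literal under LCW semantics."""
    if ground_literal in DM:
        return 'T'   # literal is in the agent's database: known true
    if any(covers(phi, ground_literal) for phi in DF):
        return 'F'   # locally closed world: absence from DM means false
    return 'U'       # no LCW covers it: unknown

DM = {('parent.dir', 'aips.tex', '/papers')}
DF = {('parent.dir', None, '/papers')}   # LCW over files in /papers
```

With this database, `aips.tex` in `/papers` is T, an unlisted file in `/papers` is F (the directory is locally closed), and anything in `/memos` is U.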

  9. LCW Example • Consider the following • DM = {aips.tex = (parent.dir(aips.tex, /papers), 241 b)} • DF = {} • What do we know about the world? • What happens if the planner executes: ls –a /papers • Results: aips.tex 241 b, TPSreport.tex 187 b • New state: • DM = {aips.tex = (parent.dir(aips.tex, /papers), 241 b), TPSreport.tex = (parent.dir(TPSreport.tex, /papers), 187 b)} • DF = {parent.dir(f, /papers) ∧ length(f, l)}

  10. LCW Example Continued • State: • DM = {aips.tex = (parent.dir(aips.tex, /papers), 241 b), TPSreport.tex = (parent.dir(TPSreport.tex, /papers), 187 b)} • DF = {parent.dir(f, /papers) ∧ length(f, l)} • How do we conclude: • parent.dir(aips.tex, /papers) ∧ length(aips.tex, 241 b) • parent.dir(AAAI.tex, /papers) ∧ length(AAAI.tex, 921 b) • parent.dir(memo.tex, /memos) ∧ length(memo.tex, 71 b)
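The belief update that `ls -a /papers` triggers in slides 9–10 can be sketched as below. The tuple encoding and the `sense_ls` name are illustrative assumptions, not the planner's API: every listed file becomes ground literals in DM, and the agent gains an LCW formula over the directory.

```python
def sense_ls(directory, listing, DM, DF):
    """Simulate the belief update after `ls -a <directory>`."""
    for name, size in listing:
        DM.add(('parent.dir', name, directory))
        DM.add(('length', name, size))
    # The listing is exhaustive, so the directory is now locally closed.
    DF.add(('parent.dir', None, directory))

DM, DF = set(), set()
sense_ls('/papers', [('aips.tex', '241 b'), ('TPSreport.tex', '187 b')], DM, DF)

# aips.tex is in DM                      -> its facts are True
# AAAI.tex is absent but /papers is closed -> its facts are False
# memo.tex lives in /memos, no LCW there  -> its facts are Unknown
```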

  11. LCW and Universal Quantification (briefly) • How could LCW be used for universally quantified effects? • Example: compress all files in /papers • ∀ !f parent.dir(!f, /papers) ⇒ satisfy(compressed(!f)) • Plan: • Obtain LCW(parent.dir(f, /papers)) • For each f in DM where parent.dir(f, /papers) is true, compress f • How do we know this works?
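The two-step plan on slide 11 can be sketched as follows, under my own assumed encoding (tuples for literals, callables for the sensing and compressing actions). The point of the sketch is the answer to "how do we know this works?": once LCW over the directory holds, DM provably lists every matching file, so iterating over DM satisfies the universally quantified goal.

```python
def compress_all(directory, sense, compress, DM, DF):
    # Step 1: sensing (e.g. ls) yields LCW(parent.dir(f, directory))
    for f in sense(directory):
        DM.add(('parent.dir', f, directory))
    DF.add(('parent.dir', None, directory))
    # Step 2: LCW guarantees DM now contains *every* file in the
    # directory, so this loop covers the whole universal goal.
    for pred, f, d in list(DM):
        if pred == 'parent.dir' and d == directory:
            compress(f)

compressed = []
DM, DF = set(), set()
compress_all('/papers', lambda d: ['aips.tex', 'TPSreport.tex'],
             compressed.append, DM, DF)
```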

  12. LCW Pros & Cons • Pros • Allows agent to make local conclusions • Prevents redundant sensing – how? • Universal quantification • Others? • Cons • What about “mv aips.tex foo” when we have LCW on foo – do we need to re-sense? • Bookkeeping • Others? • Conclusion: Overall, LCW is great for Softbot Planning
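The `mv` question on slide 12 can be made concrete with a sketch (my own encoding and one possible design choice, not the softbot's actual bookkeeping): because `mv`'s effect is fully predictable, the agent can update DM in place and keep its LCW formulas on both source and destination valid, so no re-sensing is needed.

```python
def apply_mv(f, src, dst, DM, DF):
    """Bookkeeping for `mv f src -> dst` under LCW (illustrative)."""
    DM.discard(('parent.dir', f, src))
    DM.add(('parent.dir', f, dst))
    # Deterministic effect: LCW formulas over src and dst both remain
    # valid, so DF is left untouched and no re-sensing is required.

DM = {('parent.dir', 'aips.tex', '/papers')}
DF = {('parent.dir', None, '/papers'), ('parent.dir', None, '/backup')}
apply_mv('aips.tex', '/papers', '/backup', DM, DF)
```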

  13. Knowledge Representation • Classical Knowledge Pre-Conditions • Requires a priori knowledge that an action causes some effect • Can’t really “check if X is the case” – Consider the safe combination example • Problems with KPCs • “Representational Handcuffs” for sensing • How do you represent “I may or may not see X” – and then plan accordingly? • Why not just build contingent plans? • Would POMDPs work?

  14. SADL = UWL + ADL • Designed to represent sensing actions and information goals • Eliminates Knowledge Pre-Conditions • Generalizes causal links • Categorizes effects and goals • Runtime variables (preceded by !)

  15. SADL Actions • Causal Actions • Actions that change the world • E.g.: ? • Observational Actions • Actions that report the state of the world • E.g.: ?

  16. SADL Goals • Satisfy Goals • Traditional goals • Satisfy by any means possible • Initially Goals • Similar, but refers to when the goal was given to the agent, not when the goal is achieved • Initially(p, !tv) means that by the time the plan is complete, the agent should know whether p was true when it started • What do initially goals allow?

  17. SADL Goals continued… • Hands-Off Goals • Prohibit the agent from modifying the fluents involved • What does this do for us? • Consider this example • Goal: Delete core file • Plan: • mv TPS-report.tex core • rm core • Remember, agents are very resourceful
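A toy check (my own encoding, not SADL syntax) shows why the hands-off goal matters for the example on slide 17: the "resourceful" plan satisfies the delete-core goal only by clobbering TPS-report.tex, and a hands-off goal on that fluent rules the plan out.

```python
def violates_hands_off(plan, protected):
    """A plan step is an (action, fluent) pair; reject any plan that
    modifies a fluent named in a hands-off goal."""
    return any(fluent in protected for _action, fluent in plan)

sneaky = [('mv', 'TPS-report.tex'), ('rm', 'core')]   # achieves the goal, badly
honest = [('rm', 'core')]
```

With `hands-off(TPS-report.tex)`, the sneaky plan is rejected while the honest one passes.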

  18. General Causal Links • Two types discussed in the PUCCINI paper • Observational Links: Ae —(e, p)→ Ap • The effect e from Ae is an observational effect needed to establish p for Ap • Verification Links: Ap ←(p, e)— Ae • Action Ap needs p to be verified by the effect e from Ae • What’s the difference? • What happens if we remove the ordering constraint, as the paper suggests? • What does this do to the search space?
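The two link types on slide 18 can be recorded as a small data structure so a planner could track which effect supports which precondition. The class and field names are my own sketch, not PUCCINI's internals; the distinction it captures is only the direction/ordering: an observation link forces Ae before Ap, a verification link need not.

```python
from dataclasses import dataclass

@dataclass
class Link:
    producer: str    # action Ae whose effect e is used
    effect: str      # the (observational) effect e
    condition: str   # the proposition p being supported
    consumer: str    # action Ap that needs p
    kind: str        # 'observation' (Ae before Ap) or 'verification'

# Observation link: Ae's observe-effect tells Ap whether p holds.
obs = Link('ls /papers', 'observe(parent.dir)',
           'parent.dir(f, /papers)', 'compress f', 'observation')
# Verification link: Ap needs p verified by Ae's effect; dropping the
# ordering constraint lets Ae come after Ap, enlarging the search space.
ver = Link('ls /papers', 'observe(parent.dir)',
           'parent.dir(f, /papers)', 'compress f', 'verification')
```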

  19. What next? • This planner is a few years old, what new technologies might be used? • What assumptions could we relax?

  20. In Conclusion… • LCW is a compromise between the open and closed world assumptions • LCW prevents redundant sensing • LCW facilitates universal quantification • SADL is great when you need to describe sensing and ensure reasonable plans • Generalizing causal links gives the planner more options without greatly increasing complexity
