Softbot Planning Christophe Bisciglia 5/16/2003
Agenda • Issues with Softbot Planning • Representation (KPC & SADL) • PUCCINI (Observation/Verification Links) • What’s Next?
Challenges to Softbots • Can traditional planners be used? • How do you represent information? • Is it complete? • Uncertainty • Can it be modeled Probabilistically? • Should it be? • What are “actions” and “goals” for Softbots? • How do you ensure sanity (reasonable plans)?
Motivating Examples • Personal Agents • Find me a few recent articles on “The Matrix Reloaded” • Find all .java files under my home directory, compile them, and make a JAR • Larger Scale Agents • Comparison Shopping Agents • Intelligent Spiders
A Softbot’s World • Can you make Open/Closed world assumptions? • How can you describe information? • Bounded? • Complete? • Correct?
Given unbounded, incomplete information • How do you figure out “the truth”? • What assumptions can be made about quality? • Is assuming truth reasonable? • How does assuming incomplete but correct information limit domains? • Good/bad examples
Local Closed World Information (LCW) • Make the Closed World Assumption about local areas of the world • How can this idea be used to determine “Φ is F”? • When is something still unknown?
LCW - Formally • D_M: a set of ground literals (the agent’s model of the world) • D_F: a set of LCW formulas, each of the form LCW(Φ) • E.g., LCW(parent.dir(f, /tex)) means D_M contains all files in /tex • Φ(θ) is the formula Φ with its variables substituted by θ • Φ(θ) ∈ D_M ⟹ Truth-Value(Φ(θ)) ∈ {T, F} • Φ(θ) ∉ D_M ⟹ Truth-Value(Φ(θ)) ∈ {F, U} • For Φ(θ) ∉ D_M: LCW(Φ) ∈ D_F ⟹ Φ(θ) is F • For Φ(θ) ∉ D_M: LCW(Φ) ∉ D_F ⟹ Φ(θ) is U
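A minimal sketch of these truth-value rules, under simplifying assumptions that are not in the slides: D_M stores only positive ground literals as (predicate, args) tuples, and each LCW formula is a single-predicate pattern with None marking a variable (real LCW formulas may be conjunctions). All names are illustrative, not taken from PUCCINI.

```python
# Toy encoding: a ground literal is ("parent.dir", ("aips.tex", "/papers"));
# LCW(parent.dir(f, /tex)) becomes the pattern ("parent.dir", (None, "/tex")).

def covers(pattern, literal):
    """True if an LCW pattern matches a ground literal."""
    p_pred, p_args = pattern
    l_pred, l_args = literal
    return (p_pred == l_pred and len(p_args) == len(l_args)
            and all(p is None or p == a for p, a in zip(p_args, l_args)))

def truth_value(literal, d_m, d_f):
    """Return 'T', 'F', or 'U' for a ground literal under LCW."""
    if literal in d_m:                        # Phi(theta) in D_M: known
        return "T"
    if any(covers(phi, literal) for phi in d_f):
        return "F"                            # locally closed: absence means false
    return "U"                                # otherwise open: absence means unknown
```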
LCW Example • Consider the following • D_M = {parent.dir(aips.tex, /papers), length(aips.tex, 241 b)} • D_F = {} • What do we know about the world? • What happens if the planner executes: ls -la /papers • Results: 241 b aips.tex, 187 b TPSreport.tex • New state: • D_M = {parent.dir(aips.tex, /papers), length(aips.tex, 241 b), parent.dir(TPSreport.tex, /papers), length(TPSreport.tex, 187 b)} • D_F = {LCW(parent.dir(f, /papers) ∧ length(f, l))} • (A toy version of this update is sketched below.)
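A hedged sketch of how the ls results could be folded into (D_M, D_F), continuing the toy encoding above; ls_results stands in for the parsed command output.

```python
ls_results = [("aips.tex", 241), ("TPSreport.tex", 187)]  # parsed `ls -la /papers`

d_m, d_f = set(), set()
for name, size in ls_results:
    d_m.add(("parent.dir", (name, "/papers")))
    d_m.add(("length", (name, size)))

# The listing was exhaustive, so the agent may close the directory.
# (The slide's formula is the conjunction LCW(parent.dir(f, /papers) ∧
# length(f, l)); the toy single-predicate patterns can't express that,
# so only the parent.dir part is recorded here.)
d_f.add(("parent.dir", (None, "/papers")))
```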
LCW Example Continued • State: • D_M = {parent.dir(aips.tex, /papers), length(aips.tex, 241 b), parent.dir(TPSreport.tex, /papers), length(TPSreport.tex, 187 b)} • D_F = {LCW(parent.dir(f, /papers) ∧ length(f, l))} • What do we conclude about each of the following? • parent.dir(aips.tex, /papers) ∧ length(aips.tex, 241 b) • parent.dir(AAAI.tex, /papers) ∧ length(AAAI.tex, 921 b) • parent.dir(memo.tex, /memos) ∧ length(memo.tex, 71 b)
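Running the toy query function over the slide’s three cases (restricted to the parent.dir conjunct, since the encoding above can’t evaluate conjunctions directly) gives the expected answers:

```python
print(truth_value(("parent.dir", ("aips.tex", "/papers")), d_m, d_f))  # 'T': in D_M
print(truth_value(("parent.dir", ("AAAI.tex", "/papers")), d_m, d_f))  # 'F': LCW closes /papers
print(truth_value(("parent.dir", ("memo.tex", "/memos")), d_m, d_f))   # 'U': no LCW on /memos

# For the full conjunctive queries: a conjunction is F if any conjunct
# is F, U if any conjunct is U, and T only when every conjunct is T.
```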
LCW and Universal Quantification (briefly) • How could LCW be used for universally quantified effects? • Example: compress all files in /papers • ∀ !f parent.dir(!f, /papers) ⟹ satisfy(compressed(!f)) • Plan: • Obtain LCW(parent.dir(f, /papers)) • For each f in D_M where parent.dir(f, /papers) is true, compress f • How do we know this works? (See the sketch below.)
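A hedged sketch of that two-step plan in the toy encoding; sense_directory and compress are hypothetical helpers passed in by the caller, not PUCCINI operators. The point is the second step: once LCW holds, iterating over D_M provably reaches every file.

```python
def compress_all(path, d_m, d_f, sense_directory, compress):
    """Compress every file in `path`, using LCW to guarantee completeness."""
    if ("parent.dir", (None, path)) not in d_f:
        sense_directory(path, d_m, d_f)       # e.g. run ls, record results, assert LCW
    for pred, args in list(d_m):
        if pred == "parent.dir" and args[1] == path:
            compress(args[0])                 # LCW: no file can be missed
```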
LCW Pros & Cons • Pros: • Allows the agent to make local conclusions • Prevents redundant sensing – how? • Enables universal quantification • Others? • Cons: • What about “mv aips.tex foo” when we have LCW on foo – do we need to re-sense? • Bookkeeping • Others? • Conclusion: overall, LCW is great for softbot planning
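On the “mv aips.tex foo” question, one hedged answer in the toy encoding: because the agent observes its own effects, it can update D_M in place and keep LCW on foo sound without re-sensing; only unpredictable effects force it to retract LCW formulas.

```python
def apply_mv(src, dest_dir, d_m, d_f):
    """Fold the effects of `mv src dest_dir` into the agent's model."""
    # Retract the stale parent.dir fact and record the new one. Since
    # the agent performed the move itself, D_M still lists every file
    # in dest_dir, so LCW(parent.dir(f, dest_dir)) stays sound.
    d_m -= {lit for lit in d_m
            if lit[0] == "parent.dir" and lit[1][0] == src}
    d_m.add(("parent.dir", (src, dest_dir)))
    # Had the effect been unpredictable (say, an unknown script), the
    # safe bookkeeping move would be retracting the LCW instead:
    #   d_f.discard(("parent.dir", (None, dest_dir)))
```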
Knowledge Representation • Classical Knowledge Pre-Conditions • Requires a priori knowledge that an action causes some effect • Can’t really “check if X is the case” – Consider the safe combination example • Problems with KPCs • “Representational Handcuffs” for sensing • How do you represent “I may or may not see X” – and then plan accordingly? • Why not just build contingent plans? • Would POMDPs work?
SADL = UWL + ADL • Designed to represent sensing actions and information goals • Eliminates Knowledge Pre-Conditions • Generalizes causal links • Categorizes effects and goals • Runtime variables (preceded by !)
SADL Actions • Causal Actions • Actions that change the world • E.g.: ? • Observational Actions • Actions that report the state of the world • E.g.: ? • (Plausible fills for both prompts are sketched below.)
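The slide leaves both examples as prompts; here are plausible fills, offered as guesses and encoded as toy Python dicts rather than real SADL syntax. cause(...) marks a world-changing effect, observe(...) a knowledge-only effect, and !-variables are bound at run time.

```python
mv_action = {                     # causal: changes the world
    "name": "mv ?f ?d",
    "precond": ["parent.dir(?f, ?d0)"],
    "effects": ["cause(parent.dir(?f, ?d))",
                "cause(not parent.dir(?f, ?d0))"],
}

ls_action = {                     # observational: reports the world
    "name": "ls -la ?d",
    "effects": ["observe(parent.dir(!f, ?d))",   # !f bound per file found
                "observe(length(!f, !l))"],
}
```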
SADL Goals • Satisfy Goals • Traditional goals • Satisfy by any means possible • Initially Goals • Similar, but refer to when the goal was given to the agent, not when the goal is achieved • initially(p, !tv) means that by the time the plan is complete, the agent should know whether p was true when it started • What do initially goals allow?
SADL Goals continued… • Hands-Off Goals • Prohibits agent from modifying fluents involved • What does this do for us? • Consider this example • Goal: Delete core file • Plan: • mv TPS-report.tex core • rm core • Remember, agents are very resourceful
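A hedged sketch of how a hands-off goal would block that shortcut: a toy validator that rejects any step whose cause(...) effects mention a protected term. The step encoding follows the toy dicts above; none of this is PUCCINI’s actual machinery.

```python
def violates_hands_off(plan, protected):
    """Return the name of the first step whose causal effects touch a protected term."""
    for step in plan:
        for eff in step.get("effects", []):
            if eff.startswith("cause(") and any(p in eff for p in protected):
                return step["name"]
    return None

bad_plan = [
    {"name": "mv TPS-report.tex core",
     "effects": ["cause(not exists(TPS-report.tex))",  # clobbers a real file!
                 "cause(exists(core))"]},
    {"name": "rm core",
     "effects": ["cause(not exists(core))"]},
]

# hands-off(TPS-report.tex) rules the shortcut out:
print(violates_hands_off(bad_plan, {"TPS-report.tex"}))  # -> 'mv TPS-report.tex core'
```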
General Causal Links • Two types discussed in the PUCCINI paper • Observational Links: A_e –e,p→ A_p • The observe effect e of A_e supports the proposition p needed by A_p • Verification Links: A_p ←p,e– A_e • Action A_p needs p to be verified by the effect e from A_e • What’s the difference? • What happens if we remove the ordering constraint, as the paper suggests? • What does this do to the search space?
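A hedged sketch of the two link types as plain data, to make the ordering difference concrete; the field names are illustrative, not PUCCINI’s. One reading of the slide’s last question: relaxing the ordering lets the planner interleave sensing and acting more freely, at the cost of a larger search space.

```python
from dataclasses import dataclass

@dataclass
class Link:
    producer: str   # A_e: the action supplying observe effect e
    effect: str     # e
    prop: str       # p: the proposition the consumer needs
    consumer: str   # A_p: the action that needs p
    kind: str       # "observation" or "verification"

    def ordering(self):
        """Return the (earlier, later) pair implied by the link type."""
        if self.kind == "observation":          # sense p before it is needed
            return (self.producer, self.consumer)
        return (self.consumer, self.producer)   # verify p after the fact
```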
What next? • This planner is a few years old, what new technologies might be used? • What assumptions could we relax?
In Conclusion… • LCW is a compromise between the open and closed world assumptions • LCW prevents redundant sensing • LCW facilitates universal quantification • SADL is great when you need to describe sensing and ensure reasonable plans • Generalizing causal links gives the planner more options without greatly increasing complexity