GraphPlan • (you don't have to know anything about this, but I put it here to point out that there are many clever approaches for finding plans more efficiently; another would be OBDDs, ordered binary decision diagrams)
SatPlan • Encode the planning problem as propositional SAT • Add a time index to every fluent and action • Initial state: At(p1,SFO)0 & At(p2,JFK)0 • Axioms: one for each plane X city X time • Actions: Fly(p1,SFO,JFK)1, Fly(p1,JFK,SFO)1, Fly(p1,JFK)2... • Guess a time t and assert the goal: At(p1,JFK)t & At(p2,SFO)t • Try for t = 0..Tmax • Use DPLL to find a model • Read the plan off the truth values of the action variables • Pre-cond axioms, effect/successor-state axioms • Exclusion axioms: -Fly(p1,JFK,SFO)0 v -Fly(p1,JFK,LAX)0 (one destination at a time), -Fly(p1,a,b)0 v -Fly(p2,c,d)0 (one action at a time?) • The sentence is satisfiable iff there is a plan achieving the goal within t steps
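To make the encoding concrete, here is a minimal sketch in Python for a cut-down instance (one plane, two airports). The problem instance, variable numbering, simplified frame axioms, and the tiny DPLL procedure are all illustrative assumptions; a real SatPlan system would use a full successor-state encoding and an industrial SAT solver.

```python
# A toy SatPlan-style encoding: one plane, two airports. Illustrates the
# time-indexed propositions, the "guess a horizon t = 0..Tmax" loop, and a
# tiny DPLL search. The instance, variable numbering, simplified frame
# axioms, and the DPLL code are illustrative assumptions only.

AIRPORTS = ["SFO", "JFK"]

def encode(T):
    """Return (variable map, CNF clauses as lists of signed ints) for horizon T."""
    names = [("At", "p1", a, t) for t in range(T + 1) for a in AIRPORTS]
    names += [("Fly", "p1", a, b, t) for t in range(T)
              for a in AIRPORTS for b in AIRPORTS if a != b]
    var = {name: i + 1 for i, name in enumerate(names)}
    At = lambda a, t: var[("At", "p1", a, t)]
    Fly = lambda a, b, t: var[("Fly", "p1", a, b, t)]

    clauses = [[At("SFO", 0)], [-At("JFK", 0)],    # initial state at time 0
               [At("JFK", T)]]                     # goal asserted at time T
    for t in range(T):
        for a in AIRPORTS:
            for b in AIRPORTS:
                if a == b:
                    continue
                clauses.append([-Fly(a, b, t), At(a, t)])       # pre-cond axiom
                clauses.append([-Fly(a, b, t), At(b, t + 1)])   # effect axiom
                clauses.append([-Fly(a, b, t), -At(a, t + 1)])  # effect axiom
            # frame axiom: only at `a` at t+1 if already there or flown there
            clauses.append([-At(a, t + 1), At(a, t)] +
                           [Fly(b, a, t) for b in AIRPORTS if b != a])
    # (exclusion axioms omitted: with one plane the pre-conds already forbid
    #  two simultaneous Fly actions)
    return var, clauses

def dpll(clauses, model):
    """Minimal DPLL: simplify, unit-propagate, split; returns a model or None."""
    simplified = []
    for clause in clauses:
        if any(model.get(abs(l)) == (l > 0) for l in clause):
            continue                               # clause already satisfied
        rest = [l for l in clause if abs(l) not in model]
        if not rest:
            return None                            # empty clause: conflict
        simplified.append(rest)
    if not simplified:
        return model                               # every clause satisfied
    units = [c[0] for c in simplified if len(c) == 1]
    if units:                                      # forced (unit) assignment
        return dpll(simplified, {**model, abs(units[0]): units[0] > 0})
    lit = simplified[0][0]                         # otherwise split on a variable
    for value in (True, False):
        result = dpll(simplified, {**model, abs(lit): value})
        if result is not None:
            return result
    return None

Tmax = 3
for T in range(Tmax + 1):                          # guess increasing horizons
    var, clauses = encode(T)
    model = dpll(clauses, {})
    if model is not None:
        plan = sorted((name for name, v in var.items()
                       if name[0] == "Fly" and model.get(v)),
                      key=lambda name: name[-1])
        print(f"plan found at horizon {T}: {plan}")
        break
```

At t = 0 the formula is unsatisfiable (the goal contradicts the initial state); at t = 1 DPLL finds a model whose single true Fly variable gives the one-step plan.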
Scheduling • Find plan that minimizes total time • Assume each operator has a known duration • Strategy • Construct partial-order plan first • Then find an optimal linearization • CPM – critical path method • identify max-duration sub-sequence; schedule actions without any delay/slack • More difficult with resource constraints • e.g. 1 drill press in shop, 1 inspector, 1 data-bus
CPM • ES: earliest start time, LS: latest start time, slack = LS - ES • Forward pass: ES(B) = max over A<B of [ES(A) + duration(A)] • Backward pass: LS(A) = min over A<B of [LS(B) - duration(A)]
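A minimal sketch of the forward and backward passes on a made-up partial-order plan; the action names, durations, and ordering constraints below are invented for illustration.

```python
# Minimal CPM sketch on a made-up partial-order plan: the action names,
# durations, and "A < B" ordering constraints below are invented examples.
durations = {"Start": 0, "AddEngine": 30, "AddWheels": 15, "Inspect": 10, "Finish": 0}
preceded_by = {                 # preceded_by[B] = all A with A < B
    "Start": [],
    "AddEngine": ["Start"],
    "AddWheels": ["Start"],
    "Inspect": ["AddEngine", "AddWheels"],
    "Finish": ["Inspect"],
}
order = ["Start", "AddEngine", "AddWheels", "Inspect", "Finish"]  # a topological order

# forward pass: ES(B) = max over A<B of ES(A) + duration(A)
ES = {}
for b in order:
    ES[b] = max((ES[a] + durations[a] for a in preceded_by[b]), default=0)

# backward pass: LS(A) = min over A<B of LS(B) - duration(A)
LS = {a: ES[order[-1]] for a in order}            # start everything at the makespan
for b in reversed(order):
    for a in preceded_by[b]:
        LS[a] = min(LS[a], LS[b] - durations[a])

for a in order:
    slack = LS[a] - ES[a]
    print(f"{a:10s} ES={ES[a]:3d} LS={LS[a]:3d} slack={slack:3d}"
          + ("  <-- critical" if slack == 0 else ""))
```

Actions with zero slack form the critical path; any delay on them delays the whole plan.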
Hierarchical Task Networks • Plan libraries • Make pizza={roll dough, apply sauce, add toppings, put in oven} • Standard operating procedures, alternative methods • Abstraction (leave out pre-conditions) • Example: fly to destination city, take “transport” to hotel, expand it later • Opportunities for interleaving/sharing? • Example: building a house • Can use POP to expand nodes in the graph with sub-graphs
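A toy sketch of a plan library and recursive task expansion, assuming a dictionary that maps each abstract task to a list of alternative methods; the task names are made up, and a real HTN planner would also check pre-conditions and backtrack over the alternatives.

```python
# A toy HTN-style plan library: each abstract task maps to a list of alternative
# methods (sub-task sequences). Task names are made-up illustrations.
plan_library = {
    "make_pizza": [["roll_dough", "apply_sauce", "add_toppings", "put_in_oven"]],
    "travel_to_hotel": [["fly_to_destination_city", "take_transport_to_hotel"]],
    "take_transport_to_hotel": [["take_taxi"], ["take_train"]],  # alternative methods
}

def expand(task):
    """Recursively expand a task; tasks with no methods are primitive actions."""
    if task not in plan_library:
        return [task]
    first_method = plan_library[task][0]          # naive: always pick method #1
    plan = []
    for subtask in first_method:
        plan.extend(expand(subtask))
    return plan

print(expand("travel_to_hotel"))
# -> ['fly_to_destination_city', 'take_taxi']
```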
Conditional Planning • non-determinism – actions that could fail; external events could change the state of the world • uncertainty, partial observability • can’t see in rooms not yet visited (e.g. Wumpus) • don’t know what resources will be available, or used • approaches • build “robust” plans that will work regardless • use sensors and conditional/contingent actions • keep chopping the tree “until it falls” • the effect of a sensing action is to change the “belief state” • use probabilities to find the plan most likely to succeed • Markov Decision Problems (MDPs) • maximize reward accumulated over time, subject to a probability distribution over successor states
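For the MDP bullet, a minimal value-iteration sketch showing "maximize expected discounted reward under a probability distribution over successor states"; the two-state problem, transition probabilities, and rewards below are invented purely for illustration.

```python
# Minimal value iteration for a tiny MDP. The states, actions, transition
# probabilities, and rewards are made-up example data.
gamma = 0.9
states = ["dry", "wet"]
actions = ["mow", "wait"]
# P[s][a] = list of (probability, next_state, reward) triples
P = {
    "dry": {"mow":  [(0.8, "dry", 10), (0.2, "wet", 0)],
            "wait": [(1.0, "dry", 0)]},
    "wet": {"mow":  [(1.0, "wet", -5)],
            "wait": [(0.5, "dry", 0), (0.5, "wet", 0)]},
}

V = {s: 0.0 for s in states}
for _ in range(100):                                  # iterate to (near) convergence
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in actions)
         for s in states}

policy = {s: max(actions,
                 key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]))
          for s in states}
print(V, policy)
```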
Plan Monitoring and Repair • Non-determinism: what if actions fail, or unanticipated external events happen? • Check the pre-conds of each step • If not satisfied, re-plan from the new current state to the goal? • Or make a plan to get back to the nearest state in the original plan? • Serendipity – check the pre-conds of every step; an unexpected change may have already satisfied some of them, letting steps be skipped • Futility – re-executing the same action might not help (e.g. Prometheus)
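A sketch of the monitor-and-repair loop; `preconditions`, `execute`, and `replan` are hypothetical stand-ins for the domain-specific pieces, with states and pre-condition sets assumed to be sets of ground fluents.

```python
# Sketch of the execute/monitor/repair loop. `preconditions`, `execute`, and
# `replan` are hypothetical stand-ins; `state` is a set of ground fluents.
def execute_and_monitor(plan, state, goal, preconditions, execute, replan):
    while plan:
        step = plan[0]
        if not preconditions(step) <= state:   # a pre-cond no longer holds
            plan = replan(state, goal)         # re-plan from the current state
            continue
        state = execute(step, state)           # actual outcome may differ from prediction
        plan = plan[1:]
    return state
```

Nothing here guards against futility: if replanning keeps producing the same failing step, the loop never terminates.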
Additional topics • Plan optimization – the goal is not only to find any plan, but to find one that minimizes cost, maximizes probability of success, ... • Dealing with uncertainty • Probability, Markov Decision Problems, reward • Partial observability – keep track of belief states • Multi-agent planning – e.g. 2 cooperating agents in BlocksWorld • Reactive planning – build a state/action table (quick rules to evaluate) that handles all situations and leads to the goal
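For the reactive-planning bullet, the simplest realization is a lookup table from the current state to the next action; the BlocksWorld fluents and table entries below are a made-up illustration for the goal on(A,B).

```python
# Reactive planning as a precomputed state -> action table. The fluent names
# and the three table entries are made-up illustrations.
policy = {
    frozenset({"on_table(A)", "clear(A)", "clear(B)", "hand_empty"}): "pickup(A)",
    frozenset({"holding(A)", "clear(B)"}): "stack(A,B)",
    frozenset({"on(A,B)", "hand_empty"}): "noop",   # goal reached
}

state = frozenset({"holding(A)", "clear(B)"})
print(policy[state])   # -> stack(A,B)
```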