
Approximation Techniques for Automated Reasoning


Presentation Transcript


  1. Approximation Techniques for Automated Reasoning Irina Rish IBM T.J.Watson Research Center rish@us.ibm.com Rina Dechter University of California, Irvine dechter@ics.uci.edu

  2. Outline • Introduction • Reasoning tasks • Reasoning approaches: elimination and conditioning • CSPs: exact inference and approximations • Belief networks: exact inference and approximations • MDPs: decision-theoretic planning • Conclusions SP2

  3. Automated reasoning tasks • Propositional satisfiability • Constraint satisfaction • Planning and scheduling • Probabilistic inference • Decision-theoretic planning • Etc. Reasoning is NP-hard => approximations

  4. Graphical Frameworks • Our focus - graphical frameworks: constraint and belief networks • Nodes = variables • Edges = dependencies (constraints, probabilities, utilities) • Reasoning = graph transformations

  5. Propositional Satisfiability Example: party problem • If Alex goes, then Becky goes: A => B • If Chris goes, then Alex goes: C => A • Query: Is it possible that Chris goes to the party but Becky does not?
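The party-problem query above is small enough to check by brute force over the three propositions; a minimal sketch (the function and variable names are mine, not from the slides):

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q."""
    return (not p) or q

def party_query():
    """Is there a model where Chris goes but Becky does not?"""
    for a, b, c in product([False, True], repeat=3):
        # A=Alex, B=Becky, C=Chris; the two party constraints:
        if implies(a, b) and implies(c, a) and c and not b:
            return True
    return False

print(party_query())  # False: C forces A, which forces B
```

The query is unsatisfiable: the chain C => A => B makes "Chris goes but Becky does not" impossible.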

  6. Constraint Satisfaction Example: map coloring Variables - countries (A, B, C, etc.) Values - colors (e.g., red, green, yellow) Constraints: adjacent countries must be colored differently

  7. Constrained Optimization Example: power plant scheduling

  8. Probabilistic Inference Example: medical diagnosis Variables: visit to Asia (V), smoking (S), tuberculosis (T), lung cancer (C), bronchitis (B), abnormality in lungs (A), X-ray (X), dyspnoea, i.e. shortness of breath (D) Query: P(T = yes | S = no, D = yes) = ?

  9. Decision-Theoretic Planning Example: robot navigation • State = {X, Y, Battery_Level} • Actions = {Go_North, Go_South, Go_West, Go_East} • Probability of success = P • Task: reach the goal location ASAP

  10. Reasoning Methods • Our focus - conditioning and elimination • Conditioning (“guessing” assignments, reasoning by assumptions) • Branch-and-bound (optimization) • Backtracking search (CSPs) • Cycle-cutset (CSPs, belief nets) • Variable elimination (inference, “propagation” of constraints, probabilities, cost functions) • Dynamic programming (optimization) • Adaptive consistency (CSPs) • Join-tree propagation (CSPs, belief nets)

  11. Conditioning: Backtracking Search

  12. Bucket Elimination / Adaptive Consistency (Dechter &amp; Pearl, 1987) Example with constraints E ≠ D, E ≠ C, D ≠ A, C ≠ B, B ≠ A: Bucket E: E ≠ D, E ≠ C Bucket D: D ≠ A Bucket C: C ≠ B Bucket B: B ≠ A Bucket A: (relations derived while processing the higher buckets land here; an empty relation means a contradiction)

  13. Bucket-elimination and conditioning: a uniform framework • Unifying approach to different reasoning tasks • Understanding: commonality and differences • “Technology transfer” • Ease of implementation • Extensions to hybrids: conditioning+elimination • Approximations

  14. Exact CSP techniques: complexity

  15. Approximations • Exact approaches can be intractable • Approximate conditioning • Local search, gradient descent (optimization, CSPs, SAT) • Stochastic simulations (belief nets) • Approximate elimination • Local consistency enforcing (CSPs), local probability propagation (belief nets) • Bounded resolution (SAT) • Mini-bucket approach (belief nets) • Hybrids (conditioning+elimination) • Other approximations (e.g., variational)

  16. “Road map” • CSPs: complete algorithms • Variable Elimination • Conditioning (Search) • CSPs: approximations • Belief nets: complete algorithms • Belief nets: approximations • MDPs

  17. Constraint Satisfaction Applications: • Planning and scheduling • Configuration and design problems • Circuit diagnosis • Scene labeling • Temporal reasoning • Natural language processing

  18. Constraint Satisfaction Example: map coloring Variables - countries (A, B, C, D, E, F, G) Values - colors (e.g., red, green, yellow) Constraints: adjacent countries get different colors; e.g., the constraint between A and B allows the pairs (red, green), (red, yellow), (green, red), (green, yellow), (yellow, green), (yellow, red)
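A plain backtracking solver makes the map-coloring formulation concrete. The adjacency below is an assumption for illustration, not the exact map on the slide:

```python
# Hypothetical adjacency for a small map; colors as on the slide.
NEIGHBORS = {
    'A': ['B', 'D'], 'B': ['A', 'C', 'D'], 'C': ['B'],
    'D': ['A', 'B', 'E'], 'E': ['D'],
}
COLORS = ['red', 'green', 'yellow']

def color_map(assignment=None, order=None):
    """Backtracking search: assign countries one by one, undo on deadends."""
    assignment = {} if assignment is None else assignment
    order = sorted(NEIGHBORS) if order is None else order
    if len(assignment) == len(order):
        return assignment
    var = order[len(assignment)]
    for color in COLORS:
        # only try values consistent with already-colored neighbors
        if all(assignment.get(n) != color for n in NEIGHBORS[var]):
            assignment[var] = color
            result = color_map(assignment, order)
            if result is not None:
                return result
            del assignment[var]  # backtrack
    return None

print(color_map())
```

With three colors this map is easily colorable; shrinking COLORS to one color makes the search return None.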

  19. Constraint Networks

  20. The Idea of Elimination Eliminating variable E records a new constraint (e.g., R_DBC) on E’s neighbors; value assignment then proceeds on the reduced problem

  21. Variable Elimination Eliminate variables one by one: “constraint propagation” Solution generation after elimination is backtrack-free

  22. Elimination Operation: join followed by projection The join operation over A finds all solutions satisfying the constraints that involve A
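The join-then-project operator can be sketched on extensionally stored relations. This is a minimal illustration (helper names are mine); relations are sets of tuples paired with a list of scope variables:

```python
def join(r1, scope1, r2, scope2):
    """Natural join of two relations given their variable scopes."""
    scope = scope1 + [v for v in scope2 if v not in scope1]
    out = set()
    for t1 in r1:
        row1 = dict(zip(scope1, t1))
        for t2 in r2:
            row2 = dict(zip(scope2, t2))
            # rows combine iff they agree on the shared variables
            if all(row1[v] == row2[v] for v in scope1 if v in scope2):
                merged = {**row1, **row2}
                out.add(tuple(merged[v] for v in scope))
    return out, scope

def project_out(rel, scope, var):
    """Project the relation onto scope minus var."""
    keep = [v for v in scope if v != var]
    idx = [scope.index(v) for v in keep]
    return {tuple(t[i] for i in idx) for t in rel}, keep

# Example: R_AB = {A != B}, R_AC = {A != C} over the domain {1, 2}.
R_AB = {(a, b) for a in (1, 2) for b in (1, 2) if a != b}
R_AC = {(a, c) for a in (1, 2) for c in (1, 2) if a != c}
joined, scope = join(R_AB, ['A', 'B'], R_AC, ['A', 'C'])
R_BC, scope_bc = project_out(joined, scope, 'A')
print(sorted(R_BC))  # the (B, C) pairs supported by some value of A
```

Eliminating A records the induced constraint R_BC = {(1, 1), (2, 2)}: over a two-value domain, B and C must be equal whenever both differ from A.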

  23. Bucket Elimination / Adaptive Consistency (Dechter and Pearl, 1987) Ordering: A, B, C, D, E. Processing the buckets from E down to A: bucket E (containing R_CBE, R_DBE, R_E) records R_DCB; the later buckets record R_ACB, R_AB, and finally R_A

  24. Induced Width Width along ordering d: the maximum number of a node’s previous neighbors (“parents”) Induced width w*(d): the width of the ordered induced graph, obtained by recursively connecting the “parents” of each node, from i = n down to i = 1
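The induced-width definition translates directly into code: walk the ordering from last node to first, count each node's earlier neighbors, and connect them. A sketch (names are mine):

```python
def induced_width(adj, order):
    """adj: dict node -> set of neighbors; order: variable list, first-to-last."""
    pos = {v: i for i, v in enumerate(order)}
    adj = {v: set(ns) for v, ns in adj.items()}  # copy: we add fill-in edges
    width = 0
    for v in reversed(order):
        parents = {u for u in adj[v] if pos[u] < pos[v]}
        width = max(width, len(parents))
        # connect the parents of v (fill-in edges of the induced graph)
        for u in parents:
            adj[u] |= parents - {u}
    return width

# A chain A-B-C has induced width 1 under its natural ordering;
# a triangle has induced width 2 under any ordering.
chain = {'A': {'B'}, 'B': {'A', 'C'}, 'C': {'B'}}
print(induced_width(chain, ['A', 'B', 'C']))  # 1
```

Since bucket elimination is exponential in this quantity, computing w*(d) up front predicts whether elimination along d is feasible.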

  25. Induced width (continued) • Finding a minimum-w* ordering is NP-complete (Arnborg, 1985) • Greedy ordering heuristics: min-width, min-degree, max-cardinality (Bertele and Brioschi, 1972; Freuder 1982) • Tractable classes: trees have w* = 1 • w*(d) of a given ordering is computed in O(n) time, i.e. the complexity of elimination is easy to predict
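One of the greedy heuristics above, min-degree, can be sketched in a few lines: repeatedly eliminate a lowest-degree node, connecting its remaining neighbors. This is a hedged sketch, not the authors' implementation:

```python
def min_degree_order(adj):
    """Greedy min-degree heuristic; returns an elimination ordering."""
    adj = {v: set(ns) for v, ns in adj.items()}  # working copy
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # pick a min-degree node
        for u in adj[v]:
            # connect v's remaining neighbors, then remove v
            adj[u] |= adj[v] - {u}
            adj[u].discard(v)
        order.append(v)
        del adj[v]
    return order

chain = {'A': {'B'}, 'B': {'A', 'C'}, 'C': {'B'}}
print(min_degree_order(chain))
```

On trees this heuristic always finds a width-1 ordering; in general it gives no optimality guarantee, but works well in practice.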

  26. Example: crossword puzzle

  27. Crossword Puzzle: Adaptive Consistency

  28. Adaptive Consistency as “bucket-elimination” Initialize: partition the constraints into buckets, placing each constraint in the bucket of its latest variable For i = n down to 1 // process buckets in the reverse order join all relations in bucket_i and “project out” variable X_i If the resulting relation is not empty, add it to bucket_k, where X_k is the largest-index variable in its scope Else the problem is unsatisfiable Return the set of all relations (old and new) in the buckets
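The bucket scheme above can be sketched for extensionally represented constraints. Rows are frozensets of (variable, value) pairs; all helper names and the row encoding are mine, a sketch rather than the authors' implementation:

```python
from itertools import product

def rows(scope, pred, domains):
    """Enumerate the rows of a constraint from its scope and a predicate."""
    vs = sorted(scope)
    return {frozenset(zip(vs, vals))
            for vals in product(*(domains[v] for v in vs))
            if pred(dict(zip(vs, vals)))}

def adaptive_consistency(order, constraints):
    """constraints: list of (scope, rows) pairs; returns False iff inconsistent."""
    pos = {v: i for i, v in enumerate(order)}
    buckets = {v: [] for v in order}
    for scope, rel in constraints:
        buckets[max(scope, key=pos.get)].append((scope, rel))
    for v in reversed(order):            # process buckets in reverse order
        if not buckets[v]:
            continue
        scope = set().union(*(s for s, _ in buckets[v]))
        joined = [frozenset()]           # join all relations in the bucket
        for _, rel in buckets[v]:
            joined = [r1 | r2 for r1 in joined for r2 in rel
                      if all(dict(r1).get(k, val) == val for k, val in r2)]
        if not joined:
            return False                 # empty join: unsatisfiable
        rest = scope - {v}               # project out v
        if rest:
            projected = {frozenset(p for p in r if p[0] != v) for r in joined}
            buckets[max(rest, key=pos.get)].append((rest, projected))
    return True

# Example: 2-coloring; a triangle is inconsistent, a chain is not.
doms = {v: [1, 2] for v in 'ABC'}
ne = lambda r: len(set(r.values())) == 2   # binary "not-equal" constraint
tri = [(set(p), rows(set(p), ne, doms)) for p in ['AB', 'BC', 'AC']]
print(adaptive_consistency(['A', 'B', 'C'], tri))  # False
```

Processing bucket C joins B ≠ C with A ≠ C and records the induced relation over {A, B}; joining it with A ≠ B in bucket B is then empty, exposing the contradiction without search.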

  29. Solving Trees (Mackworth and Freuder, 1985) Adaptive consistency is linear for trees and equivalent to enforcing directional arc-consistency (recording only unary constraints)

  30. Properties of bucket-elimination (adaptive consistency) • Adaptive consistency generates a constraint network that is backtrack-free (can be solved without deadends). • The time and space complexity of adaptive consistency along ordering d is exponential in the induced width w*(d). • Therefore, problems having bounded induced width are tractable (solved in polynomial time). • Examples of tractable problem classes: trees (w* = 1), series-parallel networks (w* = 2), and in general k-trees (w* = k).

  31. “Road map” • CSPs: complete algorithms • Variable Elimination • Conditioning (Search) • CSPs: approximations • Belief nets: complete algorithms • Belief nets: approximations • MDPs

  32. The Idea of Conditioning

  33. Backtracking Search + Heuristics “Vanilla” backtracking + variable/value ordering heuristics + constraint propagation + learning + … • Look-ahead schemes • Forward checking (Haralick and Elliott, 1980) • MAC (full arc-consistency at each node) (Gaschnig 1977) • Look-back schemes • Backjumping (Gaschnig 1977, Dechter 1990, Prosser 1993) • Backmarking (Gaschnig 1977) • BJ+DVO (Frost and Dechter, 1994) • Constraint learning (Dechter 1990, Frost and Dechter 1994, Bayardo and Miranker 1996)
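Forward checking, the simplest look-ahead scheme above, can be sketched as follows: after each assignment, prune the values of all unassigned variables that conflict with it, and backtrack as soon as some future domain empties. The function and parameter names are mine:

```python
def forward_check_solve(variables, domains, conflicts, assignment=None):
    """conflicts(x, vx, y, vy) -> True iff (x=vx, y=vy) violates a constraint."""
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        # look ahead: restrict the domains of the unassigned variables
        pruned = {v: [x for x in domains[v]
                      if not conflicts(var, value, v, x)]
                  for v in variables if v not in assignment}
        if all(pruned.values()):  # no future domain became empty
            result = forward_check_solve(variables, {**domains, **pruned},
                                         conflicts, assignment)
            if result is not None:
                return result
        del assignment[var]       # backtrack
    return None

# Example: coloring a triangle; 3 colors succeed, 2 colors fail.
edges = {frozenset(p) for p in ('AB', 'BC', 'AC')}
conflicts = lambda x, vx, y, vy: frozenset((x, y)) in edges and vx == vy
print(forward_check_solve('ABC', {v: [1, 2, 3] for v in 'ABC'}, conflicts))
```

Compared to vanilla backtracking, deadends are detected one level earlier, at the cost of the extra pruning pass per assignment.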

  34. Search complexity distributions Complexity histograms (deadends, time) => continuous distributions (Frost, Rish, and Vila 1997; Selman and Gomes 1997; Hoos 1998) Axes: frequency (probability) vs. nodes explored in the search space

  35. Constraint Programming • Constraint solving embedded in programming languages • Allows flexible modeling combined with constraint-solving algorithms • Logic programs + forward checking • ECLiPSe, ILOG, OPL • Typically use only look-ahead schemes.

  36. Complete CSP algorithms: summary • Bucket elimination: • adaptive consistency (CSP), directional resolution (SAT) • elimination operation: join-project (CSP), resolution (SAT) • Time and space exponential in the induced width (given a variable ordering) • Conditioning: • Backtracking search+heuristics • Time complexity: worst-case O(exp(n)), but average-case is often much better. Space complexity: linear.

  37. “Road map” • CSPs: complete algorithms • CSPs: approximations • Approximating elimination • Approximating conditioning • Belief nets: complete algorithms • Belief nets: approximations • MDPs

  38. Approximating Elimination: Local Constraint Propagation • Problem: bucket-elimination algorithms are intractable when the induced width is large • Approximation: bound the size of recorded dependencies, i.e. perform local constraint propagation (local inference) • Advantages: efficiency; may discover inconsistencies by deducing new constraints • Disadvantages: does not guarantee that a solution exists

  39. From Global to Local Consistency

  40. Constraint Propagation • Arc-consistency, unit resolution, i-consistency • Example: 1 ≤ X, Y, Z, T ≤ 3, with constraints X &lt; Y, Y = Z, T &lt; Z, X &lt; T

  41. Constraint Propagation (continued) • Enforcing arc-consistency on the example prunes the domains down to X = 1, Y = 3, Z = 3, T = 2 • Incorporated into backtracking search • Constraint programming languages: a powerful approach for modeling and solving combinatorial optimization problems

  42. Arc-consistency Only domain (unary) constraints are recorded
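Arc-consistency enforcement follows an AC-3-style loop: revise each arc, and requeue the arcs into any variable whose domain shrank. A hedged sketch (the `allowed` interface is my assumption), shown on a small two-variable example of my own:

```python
from collections import deque

def ac3(domains, neighbors, allowed):
    """allowed(x, vx, y, vy): True iff x=vx is compatible with y=vy."""
    domains = {v: set(d) for v, d in domains.items()}
    queue = deque((x, y) for x in neighbors for y in neighbors[x])
    while queue:
        x, y = queue.popleft()
        # values of x with no support in the domain of y
        revised = {vx for vx in domains[x]
                   if not any(allowed(x, vx, y, vy) for vy in domains[y])}
        if revised:
            domains[x] -= revised
            if not domains[x]:
                return None  # empty domain: inconsistency detected
            queue.extend((z, x) for z in neighbors[x] if z != y)
    return domains

# Example: X, Y in {1, 2, 3} with X < Y.
doms = {'X': {1, 2, 3}, 'Y': {1, 2, 3}}
nbrs = {'X': ['Y'], 'Y': ['X']}
lt = lambda x, vx, y, vy: vx < vy if x == 'X' else vx > vy
print(ac3(doms, nbrs, lt))
```

Here arc-consistency removes the unsupported values 3 from X and 1 from Y, recording only the pruned (unary) domains, exactly as the slide states.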

  43. Local consistency: i-consistency • i-consistency: any consistent assignment to any i-1 variables is consistent with at least one value of any i-th variable • strong i-consistency: k-consistency for every k ≤ i • directional i-consistency: given an ordering, each variable is i-consistent with any i-1 preceding variables • strong directional i-consistency: given an ordering, each variable is strongly i-consistent with any i-1 preceding variables

  44. Directional i-consistency (Figure: induced graphs over variables A-E comparing adaptive consistency, d-path consistency, and d-arc consistency)

  45. Enforcing Directional i-consistency • Directional i-consistency bounds the size of recorded constraints by i • i=1 - arc-consistency • i=2 - path-consistency • For i = n, directional i-consistency is equivalent to adaptive consistency

  46. Example: SAT • Elimination operation – resolution • Directional Resolution – adaptive consistency (Davis and Putnam, 1960; Dechter and Rish, 1994) • Bounded resolution – bounds the resolvent size • BDR(i) – directional i-consistency (Dechter and Rish, 1994) • k-closure – full k-consistency (Van Gelder and Tsuji, 1996) • In general: bounded induced-width resolution • DCDR(b) – generalizes the cycle-cutset idea: limits induced width by conditioning on cutset variables (Rish and Dechter 1996, Rish and Dechter 2000)
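Directional resolution itself is bucket elimination with resolution as the elimination operator. A hedged sketch, with clauses encoded as frozensets of integer literals (v or -v; the encoding is mine):

```python
def directional_resolution(clauses, order):
    """order: variable ordering; returns False iff unsatisfiable,
    else the buckets (the directional extension)."""
    pos = {v: i for i, v in enumerate(order)}
    buckets = {v: set() for v in order}
    top = lambda c: max((abs(l) for l in c), key=lambda u: pos[u])
    for c in clauses:
        buckets[top(c)].add(frozenset(c))   # bucket of the latest variable
    for v in reversed(order):               # process buckets last-to-first
        pos_cl = [c for c in buckets[v] if v in c]
        neg_cl = [c for c in buckets[v] if -v in c]
        for c1 in pos_cl:
            for c2 in neg_cl:
                res = (c1 - {v}) | (c2 - {-v})   # resolve on v
                if not res:
                    return False            # empty clause derived
                if any(-l in res for l in res):
                    continue                # tautology, skip
                buckets[top(res)].add(res)  # lands in a lower bucket
    return buckets

# (A v B), (~A v B), (~B v C) is satisfiable; {A}, {~A} is not.
print(bool(directional_resolution([{1, 2}, {-1, 2}, {-2, 3}], [1, 2, 3])))  # True
print(directional_resolution([{1}, {-1}], [1]))  # False
```

Every resolvent on v mentions only variables earlier in the ordering, which is why each resolvent drops into a lower bucket and the procedure terminates with a backtrack-free clause set.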

  47. Directional Resolution Adaptive Consistency SP2

  48. DR complexity

  49. History • 1960 – resolution-based Davis-Putnam algorithm • 1962 – resolution step replaced by conditioning (Davis, Logemann and Loveland, 1962) to avoid memory explosion, resulting in a backtracking search algorithm known as Davis-Putnam (DP), or the DPLL procedure. • The dependency on induced width was not known in 1960. • 1994 – Directional Resolution (DR), a rediscovery of the original Davis-Putnam, identification of tractable classes (Dechter and Rish, 1994).

  50. DR versus DPLL: complementary properties DR is efficient on (k,m)-tree 3-CNFs (bounded induced width), while DPLL is superior on uniform random 3-CNFs (large induced width)
