
SATzilla-07: The Design and Analysis of an Algorithm Portfolio for SAT

Presentation Transcript


  1. SATzilla-07: The Design and Analysis of an Algorithm Portfolio for SAT Lin Xu, Frank Hutter, Holger H. Hoos and Kevin Leyton-Brown University of British Columbia {xulin730, hutter, hoos, kevinlb}@cs.ubc.ca

  2. Outline • Motivation • History of SATzilla and related work • SATzilla methodology • Example Problem • SATzilla for the SAT Competition • Conclusions and ongoing research

  3. Motivation • Lots of high-performance solvers, but… • No single SAT solver dominates all others on all types of instances • Question: How do we select the best solver for a given SAT instance?

  4. Algorithm Selection Problem [Rice, 1976] • By reference: Select solvers based on previous experience or research papers • “Winner-Take-All”: Test solvers on samples from the target distribution; select the solver with the best overall performance • SATzilla: Automatic per-instance selection based on instance characteristics

  5. Related work • Portfolio of stochastic algorithms [Gomes & Selman, 1997] • Running multiple algorithms at the same time • Reinforcement learning [Lagoudakis & Littman, 2001] • Select a branching rule at each decision point • Branch & bound algorithm selection [Lobjois & Lemaître, 1998] • Based on an estimate of the search tree size

  6. History of SATzilla • Old SATzilla [Nudelman, Devkar, et al., 2003] • 2nd Random • 2nd Handmade (SAT) • 3rd Handmade • SATzilla-07 • 1st Handmade • 1st Handmade (UNSAT) • 1st Random • 2nd Handmade (SAT) • 3rd Random (UNSAT)

  7. SATzilla-07 Methodology (offline) [Diagram: from the target distribution and candidate solvers, compute features and collect runtime data, build runtime models, and select pre-solvers, the winner-take-all solver, and the final solver set]

  8. SATzilla-07 Methodology (online) [Diagram: run pre-solvers → compute features → predict runtimes → run predicted-best solver; if an error occurs, run the winner-take-all solver; if an error occurs and time is left, run the second-best solver]
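
The online control flow can be sketched in a few lines of Python. This is only an illustration, not SATzilla's actual implementation: the helpers run_solver and compute_features, the result object with a solved flag, the predictors dictionary (solver name to a runtime-prediction callable), and the 5-second pre-solver cutoff are all hypothetical placeholders.

```python
def satzilla_online(instance, predictors, presolvers, winner_take_all,
                    run_solver, compute_features, presolver_cutoff=5.0):
    """Sketch of the online phase: pre-solve briefly, compute features,
    run the solver with the best predicted runtime, fall back on errors."""
    # 1. Give each pre-solver a short time slice; easy instances end here.
    for presolver in presolvers:
        result = run_solver(presolver, instance, cutoff=presolver_cutoff)
        if result.solved:
            return result

    # 2. Compute features; if feature computation fails, fall back to the
    #    winner-take-all solver chosen offline.
    try:
        x = compute_features(instance)
    except Exception:
        return run_solver(winner_take_all, instance)

    # 3. Rank component solvers by predicted runtime and run them in order:
    #    best first, then the next-best ones if an earlier run fails.
    ranked = sorted(predictors, key=lambda solver: predictors[solver](x))
    result = None
    for solver in ranked:
        result = run_solver(solver, instance)
        if result.solved:
            break
    return result
```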

  9. Solvers Used • Eureka [Nadel, Gordon, Palti & Hanna, 2006] • Kcnfs2006 [Dubois & Dequen, 2006] • March_dl2004 [Heule & Maaren, 2006] • Minisat2.0 [Eén & Sörensson, 2006] • OKsolver [Kullmann, 2002] • Rsat [Pipatsrisawat & Darwiche, 2006] • Vallst [Vallstrom, 2005] • Zchaff_Rand [Mahajan, Fu & Malik, 2005]

  10. SATzilla-07 Example: Using quasigroup completion problems (QCP) to validate our general approach

  11. SATzilla-07 Example Problem • Problem distribution • QCP instances generated near the phase transition [Gomes & Selman, 1997] • Solvers • Eureka, OKsolver, Zchaff_Rand • Features • Same as in previous work [Nudelman et al., 2004] • Collect data • Compute each instance's features and determine each solver's runtime (see the sketch below) • Pre-solvers & “Winner-Take-All” • Build models • Final solver selection
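
A minimal data-collection sketch for the step above, under assumed helpers compute_features and run_solver and an assumed cutoff; it records one feature vector per instance and one runtime per (instance, solver) pair, flagging timed-out runs as censored for the treatment described on the following slides.

```python
import numpy as np

def collect_data(instances, solvers, compute_features, run_solver, cutoff=1200.0):
    """Build training data for the runtime models: one feature vector per
    instance and one (possibly censored) runtime per (instance, solver) pair."""
    features, runtimes, censored = [], [], []
    for inst in instances:
        features.append(compute_features(inst))
        runtime_row, censored_row = [], []
        for solver in solvers:
            result = run_solver(solver, inst, cutoff=cutoff)
            # Timed-out runs are recorded at the cutoff and marked censored.
            runtime_row.append(result.runtime if result.solved else cutoff)
            censored_row.append(not result.solved)
        runtimes.append(runtime_row)
        censored.append(censored_row)
    return np.array(features), np.array(runtimes), np.array(censored, dtype=bool)
```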

  12. fw() = wT • 23.34 • 7.21 • … • … Features () Runtime (y) Empirical Hardness Model (EHM) • The Core of SATzilla --- EHM • Accurately predict algorithm’s runtime based on cheaply computable features • Linear basis function regression

  13. Improving the EHM (dealing with censoring) • Runtimes show heavy-tailed behavior and are censored at the cutoff • Three ways to handle censored data: • Drop them • Keep them as if they finished at the cutoff • Censored sampling • Schmee & Hahn's approach [1979]: REPEAT • Estimate each censored runtime conditional on the EHM and on the true runtime being larger than the cutoff • Build a new EHM with the estimated runtimes UNTIL there are no more changes in the EHM
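
A sketch of this iteration, assuming Gaussian noise on log runtimes and a ridge-regression EHM (both assumptions for illustration): censored targets start at the cutoff, are repeatedly replaced by their conditional expectation given the current model and the constraint that the true runtime exceeds the cutoff, and the model is refit until it stops changing.

```python
import numpy as np
from scipy.stats import norm

def fit_ridge(Phi, y, lam=1.0):
    """Closed-form ridge regression: w = (Phi^T Phi + lam I)^-1 Phi^T y."""
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)

def censored_sampling(Phi, y_observed, is_censored, n_iter=50, tol=1e-4, lam=1.0):
    """Schmee & Hahn-style loop: impute each censored log runtime by its
    conditional expectation given the current model and the fact that the
    true value exceeds the cutoff, then refit, until the model stops changing."""
    y = y_observed.copy()                   # censored entries start at the cutoff
    w = fit_ridge(Phi, y, lam)
    for _ in range(n_iter):
        mu = Phi @ w
        residuals = y[~is_censored] - mu[~is_censored]
        sigma = max(residuals.std(), 1e-6)  # noise scale from uncensored points
        # E[Y | Y >= cutoff] = mu + sigma * pdf(a) / (1 - cdf(a)), a = (cutoff - mu) / sigma
        a = (y_observed[is_censored] - mu[is_censored]) / sigma
        y[is_censored] = mu[is_censored] + sigma * norm.pdf(a) / (1.0 - norm.cdf(a) + 1e-12)
        w_new = fit_ridge(Phi, y, lam)
        if np.max(np.abs(w_new - w)) < tol:  # converged: no more changes in the EHM
            return w_new
        w = w_new
    return w
```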

  14. How to deal with censored data [Figure: runtimes with one censor point; A: drop them, B: finished at cutoff, C: censored sampling]

  15. How to deal with censored data [Figure: runtimes with two censor points; A: drop them, B: finished at cutoff, C: censored sampling]

  16. How to deal with censored data [Figure: runtimes with two censor points; A: drop them, B: finished at cutoff, C: censored sampling]

  17. Improving the EHM (using Hierarchical Hardness Models) • EHMs are often more accurate and much simpler when trained on SAT-only or UNSAT-only samples [Nudelman et al., 2004] • Build hierarchical hardness models by approximate model selection • Oracle • Mixture-of-experts problem with fixed experts
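
A simplified sketch of the hierarchical idea, assuming a logistic-regression gate and a linear blend of two fixed experts (the actual SATzilla-07 approach, approximate model selection, is more involved): one conditional EHM is trained on satisfiable instances, one on unsatisfiable instances, and their predictions are weighted by the classifier's estimate of P(SAT | features). Here is_sat is assumed to be a boolean array over the training instances.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_hierarchical_model(Phi, log_runtime, is_sat, lam=1.0):
    """Fixed experts: one ridge model trained on SAT instances, one on UNSAT
    instances, gated by a classifier that predicts P(SAT | features)."""
    def ridge(X, y):
        A = X.T @ X + lam * np.eye(X.shape[1])
        return np.linalg.solve(A, X.T @ y)

    w_sat = ridge(Phi[is_sat], log_runtime[is_sat])      # expert for SAT instances
    w_unsat = ridge(Phi[~is_sat], log_runtime[~is_sat])  # expert for UNSAT instances
    gate = LogisticRegression(max_iter=1000).fit(Phi, is_sat.astype(int))
    return w_sat, w_unsat, gate

def predict_hierarchical(model, phi_x):
    """Blend the two experts' predictions using the gate's SAT probability."""
    w_sat, w_unsat, gate = model
    p_sat = gate.predict_proba(phi_x.reshape(1, -1))[0, 1]
    return p_sat * float(phi_x @ w_sat) + (1.0 - p_sat) * float(phi_x @ w_unsat)
```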

  18. SATzilla-07 for QCP [Figure: average runtime]

  19. SATzilla-07 for QCP [Figure: empirical CDF]

  20. 2007 SAT Competition • Three submissions for the 2007 SAT Competition: • BIG_MIX, for all three categories (demo) • RANDOM • HANDMADE

  21. SATzilla-07 for the SAT Competition • Target distribution • Previous SAT Competitions and the SAT Race • Solvers (with/without preprocessing, Hyper) • Eureka, Kcnfs2006, March_dl2004, Minisat2.0, Vallst, Rsat, Zchaff_Rand • Features • Reduce probing time to 1 second • Only cheap features, about 3 seconds in total • Pre-solvers • March_dl for 5 seconds, SAPS for 2 seconds

  22. SATzilla-07 for the SAT Competition • “Winner-Take-All” solver • March_dl2004 • Final candidates • BIG_MIX: Eureka, Kcnfs2006, March_dl2004, Rsat • RANDOM: March_dl2004, Kcnfs2006, Minisat2.0+ • HANDMADE: March_dl2004, Vallst, March_dl2004+, Minisat2.0+, Zchaff_Rand+

  23. SATzilla-07 for BIG_MIX

  24. SATzilla-07 for BIG_MIX [Figure: results with feature time and pre-solvers indicated]

  25. SATzilla-07 for RANDOM

  26. SATzilla-07 for RANDOM [Figure: results with feature time and pre-solvers indicated]

  27. SATzilla-07 for HANDMADE

  28. SATzilla-07 for HANDMADE [Figure: results with feature time and pre-solvers indicated]

  29. Conclusions

  30. Conclusions • Algorithms can be combined into portfolios, improving both performance and robustness • The SATzilla approach has proven successful in a real-world competition • With more training data and more solvers, SATzilla can become even better

  31. Ongoing research • SATzilla for the industrial category • Using the same approach, SATzilla is 25% faster and solves 5% more instances • Score function • Optimize an objective function other than runtime • Local search • Improve SATzilla's performance by using local search solvers as components

  32. Special Thanks • Creators of the solvers: Alexander Nadel, Moran Gordon, Amit Palti and Ziyad Hanna (Eureka); Marijn Heule, Hans van Maaren (March_dl2004); Niklas Eén, Niklas Sörensson (Minisat2.0); Oliver Kullmann (OKsolver); Knot Pipatsrisawat and Adnan Darwiche (Rsat 1.04); Daniel Vallstrom (Vallst); Yogesh S. Mahajan, Zhaohui Fu and Sharad Malik (Zchaff_Rand)

  33. SATzilla Pick for BIG_MIX

  34. SATzilla Pick for RANDOM

  35. SATzilla Pick for HANDMADE
