
The Smoothed Analysis of Algorithms



Presentation Transcript


  1. The Smoothed Analysis of Algorithms Daniel A. Spielman MIT With Shang-Hua Teng (Boston University) John Dunagan (Microsoft Research) and Arvind Sankar (Goldman Sachs)

  2. Outline Why? What? The Simplex Method Gaussian Elimination Other Problems Conclusion

  3. Problem: Heuristics that work in practice, with no sound theoretical explanation: exponential worst-case complexity, but works in practice; polynomial worst-case complexity, but much faster in practice; a heuristic speeds up code, but has poor results in the worst case.

  4. Attempted resolution: average-case analysis. Measure expected performance on random inputs.

  5. Random is not typical

  6. Critique of Average-case Analysis: Random objects have very special properties with exponentially high probability. Actual inputs might not look random.

  7. Smoothed Analysis: a hybrid of worst and average case. Worst case: $C_{\text{worst}}(n) = \max_{x} T(x)$. Average case: $C_{\text{avg}}(n) = \mathbb{E}_{r}[T(r)]$ over random inputs $r$.

  8. Smoothed Analysis: a hybrid of worst and average case. Smoothed complexity: $C_{\text{smooth}}(n, \sigma) = \max_{x}\, \mathbb{E}_{g}[T(x + \sigma g)]$, where $g$ is a Gaussian of standard deviation $\sigma$.

  9. Smoothed Complexity: interpolates between worst and average case; considers a neighborhood of every input; if it is low, all bad inputs are unstable.
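
  The definition can be probed numerically. A minimal MATLAB sketch, assuming a hypothetical cost function run_time(x) for the algorithm under study and a cell array base_inputs standing in for the maximum over inputs (both are illustrative names, not from the slides):

      % Monte Carlo estimate of smoothed complexity at noise level sigma:
      % for each base input x, average the cost over Gaussian perturbations
      % of x, then take the maximum of these averages over the base inputs.
      sigma = 0.1;     % standard deviation of the perturbation
      trials = 100;    % perturbations averaged per base input
      worst = 0;
      for i = 1:numel(base_inputs)
          x = base_inputs{i};
          total = 0;
          for t = 1:trials
              g = randn(size(x));               % standard Gaussian
              total = total + run_time(x + sigma*g);
          end
          worst = max(worst, total/trials);     % max_x E_g[ T(x + sigma*g) ]
      end
      worst                                     % empirical smoothed cost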

  10. Complexity Landscape [figure: run time plotted over the input space; worst case is the peak, average case the mean level]

  11. Smoothed Complexity Landscape (convolved with Gaussian) [figure: run time over the input space; smoothed complexity is the maximum of the smoothed landscape]

  12. Classical Example: Simplex Method for Linear Programming. max $c^T x$ s.t. $Ax \le b$. Worst-Case: exponential. Average-Case: polynomial. Widely used in practice.

  13. The Diet Problem: choose quantities $x \ge 0$ of foods to minimize cost, min $c^T x$ s.t. $Ax \ge b$ (meet every nutritional requirement).
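
  A tiny worked instance (illustrative made-up numbers, not from the slides), assuming MATLAB's linprog from the Optimization Toolbox; since linprog minimizes c'*x subject to A*x <= b, the "at least" nutrition constraints are negated:

      % Diet problem: two foods, two nutrients (illustrative data).
      c = [0.50; 0.30];    % cost per unit of each food
      A = [2 1;            % units of nutrient 1 in each food
           1 3];           % units of nutrient 2 in each food
      b = [8; 9];          % required units of each nutrient
      % min c'*x  s.t.  A*x >= b, x >= 0, in linprog's <= form:
      x = linprog(c, -A, -b, [], [], zeros(2,1), [])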

  14. Classical Example: Simplex Method for Linear Programming. max $c^T x$ s.t. $Ax \le b$. Worst-Case: exponential. Average-Case: polynomial. Widely used in practice.

  15. The Simplex Method [figure: the simplex path walks along edges of the feasible polytope from a start vertex to opt]

  16. Smoothed Analysis of Simplex Method
      Original:  max $c^T x$ s.t. $Ax \le b$
      Perturbed: max $c^T x$ s.t. $(A + G)x \le b$, where the entries of $G$ are Gaussians of standard deviation $\sigma$
      Theorem: For all $A$, $b$, $c$, the simplex method takes expected time polynomial in $m$, $n$, and $1/\sigma$.
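
  The theorem's setup is easy to imitate (a rough sketch, not the original experiment), assuming the Optimization Toolbox's linprog with its dual-simplex algorithm and using output.iterations as a crude proxy for the number of pivots; note that linprog does not implement the shadow-vertex rule the proof analyzes, and the instance below is arbitrary:

      % Compare simplex iteration counts on a fixed LP and on a Gaussian
      % perturbation of its constraint matrix.
      m = 50; n = 20; sigma = 0.01;
      A = randn(m, n); b = ones(m, 1); c = randn(n, 1);   % arbitrary instance
      lb = -10*ones(n, 1); ub = 10*ones(n, 1);            % keep the LP bounded
      opts = optimoptions('linprog', 'Algorithm', 'dual-simplex');
      [~, ~, ~, out]  = linprog(c, A, b, [], [], lb, ub, opts);
      Ap = A + sigma * randn(m, n);                       % perturbed constraints
      [~, ~, ~, outp] = linprog(c, Ap, b, [], [], lb, ub, opts);
      [out.iterations, outp.iterations]                   % pivot-count proxy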

  17. Analysis of Simplex Method Using Shadow-Vertex Pivot Rule

  18. Shadow vertex pivot rule [figure: the polytope is projected onto the plane spanned by the objective and the start direction; the simplex path follows vertices of the shadow]

  19. The Polar of a Polytope

  20. Polar Form of Linear Programming: max $\lambda$ s.t. $\lambda c \in \text{ConvexHull}(a_1, a_2, \ldots, a_m)$

  21. Shadow vertex pivot rule, in polar

  22. Count facets by discretizing to $N$ directions, $N \to \infty$

  23. Count pairs in different facets: $\Pr[\,\text{consecutive directions lie in different facets}\,] < c/N$. So, expect $c$ facets.
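
  Written out, the counting step compressed on this slide is linearity of expectation over the $N$ consecutive pairs of directions:

  $$\mathbb{E}[\#\text{facets crossed}] \;=\; \sum_{i=1}^{N} \Pr[\,d_i \text{ and } d_{i+1} \text{ lie in different facets}\,] \;<\; N \cdot \frac{c}{N} \;=\; c,$$

  and the bound is independent of $N$, so it survives the limit $N \to \infty$.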

  24. Unlikely that a cone has small angle

  25. Angle ↔ Distance [figure relating the angle of the cone to a distance]

  26. Isolate on one Simplex

  27. Smoothed Analysis of Simplex Method
      Original:  max $c^T x$ s.t. $Ax \le b$
      Perturbed: max $c^T x$ s.t. $(A + G)x \le b$, where the entries of $G$ are Gaussians of standard deviation $\sigma$
      Theorem: For all $A$, $b$, $c$, the simplex method takes expected time polynomial in $m$, $n$, and $1/\sigma$.

  28. Interior Point Methods for Linear Programming
      Analysis Method     | #Iterations                | Observation
      Worst-Case, upper   |                            |
      Worst-Case, lower   |                            |
      Average-Case        |                            |
      Smoothed, upper ( ) | [Dunagan-S-Teng], [S-Teng] |
      Conjecture          |                            |

  29. Gaussian Elimination for Ax = b
      >> A = randn(2)
      A =
         -0.4326    0.1253
         -1.6656    0.2877
      >> b = randn(2,1)
      b =
         -1.1465
          1.1909
      >> x = A \ b
      x =
         -5.6821
        -28.7583
      >> norm(A*x - b)        % residual at machine precision: solved
      ans =
          8.0059e-016

  30. Gaussian Elimination for Ax = b
      >> A = 2*eye(70) - tril(ones(70));   % classic growth-factor example
      >> A(:,70) = 1;
      >> b = randn(70,1);
      >> x = A \ b;
      >> norm(A*x - b)
      ans =
          3.5340e+004                      % huge residual: Failed!
      Perturb A:
      >> Ap = A + randn(70) / 10^9;        % tiny Gaussian perturbation
      >> x = Ap \ b;
      >> norm(Ap*x - b)
      ans =
          5.8950e-015

  31. Gaussian Elimination for Ax = b
      >> A = 2*eye(70) - tril(ones(70));
      >> A(:,70) = 1;
      >> b = randn(70,1);
      >> x = A \ b;
      >> norm(A*x - b)
      ans =
          3.5340e+004                      % Failed!
      Perturb A:
      >> Ap = A + randn(70) / 10^9;
      >> x = Ap \ b;
      >> norm(Ap*x - b)
      ans =
          5.8950e-015
      >> norm(A*x - b)                     % residual against the original A
      ans =
          3.6802e-008                      % Solved original too!


  34. Gaussian Elimination with Partial Pivoting Fast heuristic for maintaining precision, by trying to keep entries small

  35. Gaussian Elimination with Partial Pivoting: a fast heuristic for maintaining precision by trying to keep entries small. Pivot not just to avoid zeros, but to bring the entry of largest magnitude into the pivot position.
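
  A minimal sketch of the rule (not code from the slides): elimination on the augmented matrix, where each step first swaps up the row whose entry in the current column has the largest magnitude:

      % Gaussian elimination with partial pivoting, solving A*x = b for
      % square nonsingular A. At column k, the row with the largest |entry|
      % in column k is swapped into the pivot position before eliminating.
      function x = ge_partial_pivot(A, b)
          n = size(A, 1);
          M = [A, b];                          % augmented matrix [A | b]
          for k = 1:n-1
              [~, p] = max(abs(M(k:n, k)));    % largest magnitude on or below row k
              p = p + k - 1;
              M([k p], :) = M([p k], :);       % the partial-pivot row swap
              for i = k+1:n
                  M(i, :) = M(i, :) - (M(i, k) / M(k, k)) * M(k, :);
              end
          end
          x = zeros(n, 1);                     % back substitution
          for k = n:-1:1
              x(k) = (M(k, n+1) - M(k, k+1:n) * x(k+1:n)) / M(k, k);
          end
      end

  On the matrix from slide 30, no entry below a pivot is larger in magnitude than the pivot, so no swap ever occurs and the entries in the last column still double at every step; that worst case is exactly what the next slides address.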

  36. Gaussian Elimination with Partial Pivoting: “Gaussian elimination with partial pivoting is utterly stable in practice. In fifty years of computing, no matrix problems that excite an explosive instability are known to have arisen under natural circumstances … Matrices with large growth factors are vanishingly rare in applications.” Nick Trefethen

  37. Gaussian Elimination with Partial Pivoting: “Gaussian elimination with partial pivoting is utterly stable in practice. In fifty years of computing, no matrix problems that excite an explosive instability are known to have arisen under natural circumstances … Matrices with large growth factors are vanishingly rare in applications.” Nick Trefethen. Theorem: [Sankar-S-Teng]
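
  The growth factor behind the quote can be measured directly (an illustrative sketch, not the statement of the [Sankar-S-Teng] theorem), using MATLAB's lu, which applies partial pivoting just as "\" does:

      % Growth factor max|U| / max|A| of elimination, before and after a
      % tiny Gaussian perturbation of the matrix from slide 30.
      n = 70;
      A = 2*eye(n) - tril(ones(n)); A(:,n) = 1;
      [~, U] = lu(A);
      max(abs(U(:))) / max(abs(A(:)))          % enormous: about 2^(n-1)
      Ap = A + randn(n) / 10^9;                % perturbation as on slide 30
      [~, Up] = lu(Ap);
      max(abs(Up(:))) / max(abs(Ap(:)))        % modest, matching the small residuals above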

  38. Mesh Generation: the parallel complexity of Ruppert's Delaunay refinement is $O((\log(n/s))^2)$ [Spielman-Teng-Üngör]

  39. Other Smoothed Analyses:
      Perceptron [Blum-Dunagan]
      Quicksort [Banderier-Beier-Mehlhorn]
      Parallel connectivity in digraphs [Frieze-Flaxman]
      Complex Gaussian Elimination [Yeung]
      Smoothed analysis of κ(A) [Wschebor]
      On smoothed analysis in dense graphs and formulas [Krivelevich-Sudakov-Tetali]
      Smoothed Number of Extreme Points under Uniform Noise [Damerow-Sohler]
      Typical Properties of Winners and Losers in Discrete Optimization [Beier-Vöcking]
      Multi-Level Feedback scheduling [Becchetti-Leonardi-Marchetti-Spaccamela-Schäfer-Vredeveld]
      Smoothed motion complexity [Damerow, Meyer auf der Heide, Räcke, Scheideler, Sohler]

  40. Future Smoothed Analyses:
      Multilevel graph partitioning: smoothed analysis of Chaco and Metis
      Differential Evolution and other optimization heuristics
      Computing Nash Equilibria

  41. Future Smoothed Analyses:
      Perturb less!
        Preserve zeros
        Preserve magnitudes of numbers
        Property-preserving perturbations
      More discrete smoothed analyses
      New algorithms
      For more, see the Smoothed Analysis Homepage
