Classical mathematics and new challenges Theorems and Algorithms László Lovász Microsoft Research One Microsoft Way, Redmond, WA 98052 lovasz@microsoft.com
Algorithmic vs. structural mathematics: ancient and classical algorithms. Geometric constructions, the Euclidean algorithm, Newton's method, Gaussian elimination.
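As a small illustration of the classical algorithms listed above, here is a minimal sketch of Newton's method (an illustrative helper, not from the talk), computing a square root as a root of f(x) = x^2 - a:

```python
def newton_sqrt(a, x0=1.0, tol=1e-12, max_iter=100):
    """Approximate sqrt(a) by Newton's iteration on f(x) = x^2 - a:
    x <- x - f(x)/f'(x), which simplifies to x <- (x + a/x)/2."""
    x = x0
    for _ in range(max_iter):
        x_next = 0.5 * (x + a / x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

print(newton_sqrt(2.0))  # converges quadratically to ~1.4142135623730951
```
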
An example: diophantine approximation and continued fractions. Given a real number α, find a rational approximation p/q with small denominator such that |α - p/q| < 1/q²; the continued fraction expansion of α produces such approximations.
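The continued fraction expansion fits in a few lines; this illustrative helper (not part of the talk) recovers the classical approximation of π by 355/113:

```python
import math
from fractions import Fraction

def continued_fraction(x, n_terms):
    """First n_terms partial quotients a0, a1, ... of x (assumes x > 0)."""
    terms = []
    for _ in range(n_terms):
        a = int(x)
        terms.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return terms

def convergent(terms):
    """Fold the partial quotients back into a single rational p/q."""
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value
    return value

terms = continued_fraction(math.pi, 4)  # [3, 7, 15, 1]
print(convergent(terms))                # 355/113, within 1/113**2 of pi
```
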
A mini-history of algorithms
30's: the mathematical notion of an algorithm: recursive functions, λ-calculus, Turing machines (Church, Turing, Post); algorithmic and logical undecidability (Church, Gödel)
50's, 60's: computers; the significance of running time. Simple problems: sorting, searching, arithmetic, … Complex problems: Travelling Salesman, matching, network flows, factoring, …
late 60's-80's: complexity theory. Time, space, information complexity; nondeterminism, good characterization, completeness; P = NP?; the polynomial hierarchy; classification of many real-life problems into P vs. NP-complete; randomization, parallelism.
90's: increasing sophistication; upper and lower bounds on complexity. Algorithms: factoring, volume computation, semidefinite optimization. Negative results: topology, algebraic geometry, coding theory.
Highlights of the 90's:
Approximation algorithms: positive and negative results.
Probabilistic algorithms: Markov chains, high concentration, nibble methods, phase transitions.
Pseudorandom number generators: from art to science (theory and constructions).
Approximation algorithms: the Max Cut Problem. Given a graph, partition the nodes into two classes so as to maximize the number of edges between them. NP-hard. …Approximations?
Erdős ~'65: easy with 50% error.
Goemans-Williamson '93: polynomial with 12% error (semidefinite optimization).
Arora-Lund-Motwani-Sudan-Szegedy '92, Håstad: NP-hard with 6% error (interactive proof systems, PCP).
Between 6% and 12%: ???
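Erdős's 50%-error bound already follows from a simple observation: a uniformly random cut hits each edge with probability 1/2, and a greedy local search cuts at least half the edges deterministically. A sketch (illustrative code, not from the talk; the 12% bound needs semidefinite programming and random-hyperplane rounding, omitted here):

```python
import random

def random_cut(n, edges, seed=0):
    """Erdos-style bound: place each node on a random side; every edge is cut
    with probability 1/2, so the expected cut is |E|/2, at least half the optimum."""
    rng = random.Random(seed)
    side = [rng.randint(0, 1) for _ in range(n)]
    return sum(1 for u, v in edges if side[u] != side[v])

def local_search_cut(n, edges):
    """Deterministic version: flip any node whose move enlarges the cut.
    At a local optimum at least half of all edges are cut."""
    side = [0] * n
    improved = True
    while improved:
        improved = False
        for v in range(n):
            # gain of flipping v = (uncut incident edges) - (cut incident edges)
            gain = sum(1 if side[u] == side[w] else -1
                       for u, w in edges if v in (u, w))
            if gain > 0:
                side[v] = 1 - side[v]
                improved = True
    return sum(1 for u, w in edges if side[u] != side[w])

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # a 4-cycle plus one chord
print(local_search_cut(4, edges))  # cuts all four cycle edges: 4
```
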
Algorithms and probability Randomized algorithms (making coin flips): important applications (primality testing, integration, optimization, volume computation, simulation) difficult to analyze Algorithms with stochastic input: even more important applications even more difficult to analyze Difficulty: after a few iterations, complicated functions of the original random variables arise.
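Primality testing, mentioned above, is the classic example of a randomized algorithm: the Miller-Rabin test flips coins and errs with probability at most 4^-rounds. A sketch (illustrative code, not from the talk):

```python
import random

def is_probable_prime(n, rounds=20, seed=12345):
    """Miller-Rabin randomized primality test. A composite n survives one
    round with probability at most 1/4, so the error probability after
    `rounds` independent rounds is at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 = 2^s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    rng = random.Random(seed)
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

print(is_probable_prime(2**61 - 1))  # True: 2^61 - 1 is a Mersenne prime
```
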
New methods in probability: strong concentration (Talagrand).
Laws of Large Numbers: sums of independent random variables are strongly concentrated.
General strong concentration: very general "smooth" functions of independent random variables are strongly concentrated.
Nibble, martingales, rapidly mixing Markov chains, …
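A quick simulation of the Law-of-Large-Numbers case (illustrative code, not from the talk): the sum of n independent ±1 flips stays within a few √n of its mean, as the Chernoff bound predicts:

```python
import math
import random

def concentration_demo(n=2500, trials=400, seed=0):
    """Sum of n independent +/-1 flips: mean 0, typical size ~sqrt(n).
    Chernoff: P(|S| > t*sqrt(n)) <= 2*exp(-t**2 / 2)."""
    rng = random.Random(seed)
    t = 3 * math.sqrt(n)
    big = 0
    for _ in range(trials):
        s = sum(rng.choice((-1, 1)) for _ in range(n))
        if abs(s) > t:
            big += 1
    return big / trials

# fraction of trials with |S| > 3*sqrt(n); Chernoff bounds it by 2*exp(-4.5) ~ 0.022
print(concentration_demo())
```
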
Example (was open for 30 years): want few vectors such that any 3 are linearly independent, yet every vector is a linear combination of 2 of them; that is, a small complete arc in a finite projective plane. Kim-Vu: every finite projective plane of order q has a complete arc of size √q·polylog(q).
First idea: use an algebraic construction (conics, …); gives only about q.
Second idea: choose at random.
Solution: Rödl nibble + strong concentration results.
Driving forces for the next decade New areas of applications The study of very large structures More tools from classical areas in mathematics
New areas of application: interaction between discrete and continuous
• Biology: genetic code, population dynamics, protein folding
• Physics: elementary particles, quarks, etc. (Feynman graphs); statistical mechanics (graph theory, discrete probability)
• Economics: indivisibilities (integer programming, game theory)
• Computing: algorithms, complexity, databases, networks, VLSI, …
Very large structures: how to model them?
• internet • VLSI • databases • genetic code • brain • animal • ecosystem • economy • society
Non-constant but stable; partly random.
Very large structures: how to model them?
Graph minors (Robertson, Seymour, Thomas): if a graph does not contain a given minor, then it is essentially a 1-dimensional structure (a tree-decomposition) of essentially 2-dimensional pieces (each embeddable in a fixed surface), up to a bounded number of additional nodes, except for "fringes" of bounded depth.
Very large structures: how to model them?
Regularity Lemma (Szemerédi '74): the nodes of a graph can be partitioned into a bounded number of essentially equal parts so that almost all bipartite graphs between 2 parts are essentially random (with different densities). More precisely: given ε>0 and k>1, the number of parts is between k and f(k,ε); the part sizes differ by at most 1; at most εk² pairs of parts are exceptional; and for subsets X, Y of two parts, the number of edges between X and Y is p|X||Y| ± εn².
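The "essentially random" condition can be seen experimentally (illustrative code, not from the talk): in a random bipartite graph with edge probability p, the density between largish subsets X and Y concentrates sharply around p:

```python
import random

def density(adj, X, Y):
    """Edge density between node sets X and Y in an adjacency matrix."""
    return sum(1 for x in X for y in Y if adj[x][y]) / (len(X) * len(Y))

def regularity_check(n=200, p=0.5, subset=100, samples=20, seed=0):
    """Generate a random bipartite graph with edge probability p and measure
    how far densities between random subsets of size `subset` stray from p."""
    rng = random.Random(seed)
    adj = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    nodes = list(range(n))
    return max(abs(density(adj, rng.sample(nodes, subset),
                           rng.sample(nodes, subset)) - p)
               for _ in range(samples))

# the standard deviation of each density is sqrt(p*(1-p))/subset = 0.005 here,
# so even the worst of 20 samples stays very close to p
print(regularity_check())
```
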
Very large structures
• internet • VLSI • databases • genetic code • brain • animal • ecosystem • economy • society
How to model them? How to handle them algorithmically?
• heuristics / approximation algorithms
• linear time algorithms
• sublinear time algorithms (sampling)
A complexity theory of linear time?
More and more tools from classical math. Example: volume computation.
Given: a convex body K in R^n, by a membership oracle. Want: the volume of K with relative error ε.
Not possible in deterministic polynomial time; the error must be exponential in n, even if ε = n^cn is allowed (Elekes, Bárány, Füredi).
Possible in randomized polynomial time, for arbitrarily small ε (Dyer, Frieze, Kannan).
Complexity: for self-reducible problems, counting ≈ sampling (Jerrum-Valiant-Vazirani); so it is enough to sample (approximately uniformly) from convex bodies, working through a sequence of nested bodies.
Algorithmic results: use rapidly mixing Markov chains (Broder; Jerrum-Sinclair); it is enough to estimate the mixing rate of the random walk on a lattice inside K (Dyer, Frieze, Kannan 1989).
Probability: use the eigenvalue gap.
Graph theory (expanders): use conductance to estimate the eigenvalue gap (Alon; Jerrum-Sinclair).
Differential geometry: it is enough to prove an isoperimetric inequality for subsets of K.
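The chain of reductions above can be miniaturized (illustrative code, not from the talk): write vol(K) as a telescoping product over nested bodies K_i = K ∩ B(r_i) and estimate each ratio by sampling. Real algorithms sample with a rapidly mixing random walk; this low-dimensional demo uses rejection sampling, which would be hopeless in high dimension:

```python
import math
import random

def estimate_volume(dim=3, phases=3, samples=20_000, seed=0):
    """Multiphase Monte Carlo volume estimation in miniature. K is the unit
    ball in R^dim, accessed only through a membership oracle, with
    B(r_0) inside K inside B(1). We use
        vol(K) = vol(K_0) * prod_i vol(K_i)/vol(K_{i-1}),  K_i = K ∩ B(r_i),
    and estimate each ratio by sampling uniformly from K_i."""
    rng = random.Random(seed)

    def in_K(x):  # the membership oracle
        return sum(t * t for t in x) <= 1.0

    # radii growing by a factor 2^(1/dim), so each ratio is moderate
    r = [0.5 * 2 ** (i / dim) for i in range(phases + 1)]
    # vol(K_0) = vol(B(r_0)) is known in closed form
    vol = (math.pi ** (dim / 2) / math.gamma(dim / 2 + 1)) * r[0] ** dim

    for i in range(1, phases + 1):
        hits_outer = hits_inner = 0
        while hits_outer < samples:
            # rejection sampling from K_i (only feasible in low dimension)
            x = [rng.uniform(-r[i], r[i]) for _ in range(dim)]
            if in_K(x) and sum(t * t for t in x) <= r[i] ** 2:
                hits_outer += 1
                if sum(t * t for t in x) <= r[i - 1] ** 2:
                    hits_inner += 1
        vol *= hits_outer / hits_inner  # estimates vol(K_i)/vol(K_{i-1})
    return vol

print(estimate_volume())  # close to the unit 3-ball volume, 4*pi/3 ~ 4.19
```
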
Convex geometry: ball walk (LL 1992).
Log-concave functions: reduction to integration (Applegate-Kannan 1992).
Statistics: better error handling (Dyer-Frieze 1993).
Differential equations: bounds on the Poincaré constant (Payne-Weinberger); bisection method, improved isoperimetric inequality (LL-Simonovits 1990).
Optimization: better preprocessing (LL-Simonovits 1995).
Functional analysis: isotropic position of convex bodies; achieving isotropic position (Kannan-LL-Simonovits 1998).
Geometry: projective (Hilbert) distance; affine-invariant isoperimetric inequality, analysis of the hit-and-run walk (LL 1999).
Differential equations: log-Sobolev inequality, eliminating the "start penalty" for the lattice walk (Frieze-Kannan 1999); log-Cheeger inequality, eliminating the "start penalty" for the ball walk (Kannan-LL 1999).
Scientific computing: non-reversible chains mix better; lifting (Diaconis-Holmes-Neal; Feng-LL-Pak), walk with inertia (Aspnes-Kannan-LL).
More and more tools from classical math
Linear algebra: eigenvalues, semidefinite optimization, higher incidence matrices, homology theory.
Geometry: geometric representations, convexity.
Analysis: generating functions, Fourier analysis, quantum computing.
Number theory: cryptography.
Topology, group theory, algebraic geometry, special functions, differential equations, …