Explore Nesterov’s method for convex optimization and density distribution in analytic placement algorithms. Learn about nonlinear programming formulations that combine wirelength, density, timing, and routing congestion, and about the use of Nesterov’s method and mass transportation in placement.
Analytic Placement Algorithms
Chung-Kuan Cheng, CSE Department, UC San Diego, CA 92130
Contact: ckcheng@ucsd.edu
Outline • Introduction • Nesterov’s Method for Convex Space • Density Distribution • Remarks
Introduction • Analytic Placement • Objective: wirelength + density distribution + timing + routing congestion • Nonlinear Programming Algorithms • Convex Space • Density Distribution • Mass Transportation
Convex Optimization: min f(X) • Newton’s Method: second-order method • Find F(X) = df(X)/dX = 0 • Xk = Xk-1 − [dF(X)/dX |X=Xk-1]^-1 F(Xk-1) • Krylov Space Methods: first-order methods • Gradient Descent • Conjugate Gradient • Nesterov’s Method
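To make the contrast concrete, here is a minimal sketch (not from the slides; the quadratic f, the matrix A, and the step size are illustrative assumptions) comparing a first-order gradient step with a second-order Newton step on a small convex quadratic:

```python
import numpy as np

# Illustrative convex quadratic (an assumption, not from the slides):
# f(X) = 0.5 X^T A X - b^T X, so F(X) = grad f(X) = A X - b and dF/dX = A.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad_f = lambda x: A @ x - b

step = 1.0 / np.linalg.norm(A, 2)      # 1/a, with a the Lipschitz constant of grad f

x_gd = np.zeros(2)
for _ in range(200):                   # first-order: repeated gradient steps
    x_gd = x_gd - step * grad_f(x_gd)

x_nt = np.zeros(2)
x_nt = x_nt - np.linalg.solve(A, grad_f(x_nt))   # second-order: one Newton step (exact on a quadratic)

print(x_gd, x_nt, np.linalg.solve(A, b))         # both approach the minimizer A^{-1} b
```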
Introduction Global Rate of Convergence: Let k be the number of iterations. • Newton’s method: O(L/k^2) – O(L/k^3) • Gradient method: O(L/k) • Quasi-Newton or conjugate gradient: not better, and sometimes worse (Y.L. Yu, Alberta) • Nesterov’s method: O(L/k^2), which is the optimal order for first-order methods.
Introduction • Nesterov: three gradient projection methods, published in 1983, 1988, and 2005. • Beck & Teboulle: FISTA, a proximal gradient version, in 2008. • Nesterov: foundational textbook in 2004. • Tseng: overview and unified analysis in 2008.
Nesterov’s Method Minimize f(X) under certain constraints, where f(X) and the constraints are convex functions whose gradients satisfy a Lipschitz condition. • Convex function: f(X) >= f(Y) + grad f(Y)·(X − Y) • Lipschitz condition: there exists a constant a such that |grad f(X) − grad f(Y)| <= a|X − Y| • Definitions • L(X,Y) = f(Y) + grad f(Y)·(X − Y) + 0.5a|X − Y|^2 • P(Y) = argmin_X { L(X,Y) : X is feasible }
Nesterov’s Method: definitions • Set QL(Y) = Y − (1/a) grad f(Y), the minimizer of L(X,Y) in the unconstrained case • L(QL(Y),Y) = f(Y) − 0.5a|QL(Y) − Y|^2 = f(Y) − (0.5/a)|grad f(Y)|^2 • Lemma: f(QL(Y)) − f(Z) <= 0.5a {|Z − Y|^2 − |Z − QL(Y)|^2}
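The second equality follows by substituting QL(Y) into L; a short derivation using only the definitions above:

```latex
\begin{aligned}
Q_L(Y)-Y &= -\tfrac{1}{a}\,\nabla f(Y),\\
L\bigl(Q_L(Y),Y\bigr) &= f(Y) + \nabla f(Y)^{\top}\bigl(Q_L(Y)-Y\bigr) + \tfrac{a}{2}\,\lVert Q_L(Y)-Y\rVert^{2}\\
&= f(Y) - \tfrac{1}{a}\,\lVert\nabla f(Y)\rVert^{2} + \tfrac{1}{2a}\,\lVert\nabla f(Y)\rVert^{2}
 = f(Y) - \tfrac{1}{2a}\,\lVert\nabla f(Y)\rVert^{2}.
\end{aligned}
```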
Nesterov’s Method: Algorithm Initial: Y1 = X0, t1 = 1 Step (k >= 1) • Xk = P(Yk) • tk+1 = 0.5(1 + sqrt(1 + 4 tk^2)) • Yk+1 = Xk + ((tk − 1)/tk+1)(Xk − Xk-1) Lemma: tk >= 0.5(k + 1) Theorem: f(Xk) − f(X*) <= 2a|X0 − X*|^2/(k + 1)^2
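A minimal, self-contained sketch of this iteration for the unconstrained case, where P(Yk) reduces to the gradient step QL(Yk) = Yk − (1/a) grad f(Yk); the quadratic test function and the constant a are illustrative assumptions, not from the slides:

```python
import numpy as np

# Illustrative convex test problem (an assumption): f(X) = 0.5 X^T A X - b^T X.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad_f = lambda x: A @ x - b
a = np.linalg.norm(A, 2)               # Lipschitz constant of grad f for this quadratic

def nesterov(x0, iters=100):
    """Accelerated gradient iteration from the slide, unconstrained case."""
    x_prev = x0
    y = x0                              # Y1 = X0
    t = 1.0                             # t1 = 1
    for _ in range(iters):
        x = y - grad_f(y) / a           # Xk = P(Yk), which is QL(Yk) without constraints
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x + ((t - 1.0) / t_next) * (x - x_prev)   # Yk+1 = Xk + ((tk-1)/tk+1)(Xk - Xk-1)
        x_prev, t = x, t_next
    return x

x_star = np.linalg.solve(A, b)          # true minimizer of the quadratic
x_hat = nesterov(np.zeros(2))
print(f(x_hat) - f(x_star))             # gap decays roughly like 2a|X0 - X*|^2/(k+1)^2
```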
Density Distribution Mass transport formulation: given a map and its mass density, transport the mass so that it covers the whole map evenly • Min Σi |xi − yi|^b • Constraint: the new mass density is constant • xi: original location of mass i • yi: new location of mass i
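For intuition, a tiny sketch of the discrete version of this problem solved as a linear assignment (the setup of 16 unit masses, a 4×4 grid of evenly spaced target sites, and b = 2 is an illustrative assumption, not from the slides); SciPy’s linear_sum_assignment plays the role of the high-complexity baseline mentioned on the next slide:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

b = 2                                        # exponent in the transport cost |xi - yi|^b
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(16, 2))      # original mass locations xi (arbitrary)

# Evenly spaced target sites on a 4x4 grid: "the new mass density is constant".
g = (np.arange(4) + 0.5) / 4.0
targets = np.array([(u, v) for u in g for v in g])

cost = np.linalg.norm(x[:, None, :] - targets[None, :, :], axis=2) ** b
rows, cols = linear_sum_assignment(cost)     # optimal one-to-one assignment
y = targets[cols]                            # yi: new location of mass i
print(cost[rows, cols].sum())                # total transport cost
```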
Density Distribution: Algorithm • Linear assignment: high complexity • Min-cost flow: linear cost • Algorithm: • Input: mass density D(X) with mass locations xi • Compute the 2D Fourier transform D(w) of the density • Inverse-transform −jw·D(w), which gives the force moving each mass to its new location: f(X) = −grad D(X) • Property: curl f(X) = 0 (f is a gradient field)
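A small numerical sketch of the Fourier-based step (the synthetic density and the uniform-grid discretization are assumptions for illustration; the FFT conventions are NumPy’s): take the 2D FFT of the density, multiply by −j·w along each axis, and inverse-transform to obtain the two components of f(X) = −grad D(X):

```python
import numpy as np

# Synthetic density on a uniform N x N grid (an assumption for illustration).
N = 64
u = np.linspace(0.0, 1.0, N, endpoint=False)
X, Y = np.meshgrid(u, u, indexing="ij")
D = np.exp(-((X - 0.5) ** 2 + (Y - 0.5) ** 2) / 0.02)   # a blob of mass near the center

# 2D Fourier transform of the density, D(w).
Dw = np.fft.fft2(D)

# Angular frequencies along each axis (grid spacing 1/N).
w = 2.0 * np.pi * np.fft.fftfreq(N, d=1.0 / N)
WX, WY = np.meshgrid(w, w, indexing="ij")

# Inverse transform of -j w D(w) gives the force field f(X) = -grad D(X), one component per axis.
fx = np.real(np.fft.ifft2(-1j * WX * Dw))
fy = np.real(np.fft.ifft2(-1j * WY * Dw))

# Sanity check: f is a gradient field, so its curl (d fy/dx - d fx/dy) vanishes.
curl = np.real(np.fft.ifft2(1j * WX * np.fft.fft2(fy) - 1j * WY * np.fft.fft2(fx)))
print(np.abs(curl).max())   # numerically ~ 0, up to floating-point error
```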
Summary • Nesterov’s method has been applied successfully in other fields, e.g. compressed sensing; there is no reported application to placement yet. • Mass transport is heavily studied in image processing; the gradient (force field) can be derived via the Fourier transform.