
Oxford Reading Group


Presentation Transcript


  1. Oxford Reading Group Phil Torr (Oxford Brookes)

  2. Outline of Lecture • Motivation: what problems may be solved by Semidefinite Programming (SDP): segmentation, matching, classification. • What is SDP? • How can it be implemented?

  3. Linear Programming • Includes many problems: • Graph cut • Segmentation • Stereo • Shortest path • Tracking • Object recognition • Some L1 regressions

  4. SDP • Has been described as the most exciting development in mathematical programming in the 1990s (Robert M. Freund, MIT). • The simplex method and other techniques from LP can be generalized to SDP.

  5. Classification by Ellipsoids • Rosen [95]. • Constrain A matrices to be PSD.

  6. Another Application:

  7. Recall: Min Cut Problem • An s-t graph cut: a graph with two terminals, the source S and the sink T. • Goal: divide the graph into two parts separating the red and blue nodes. • The cut cost is the sum of severed edge weights. • The minimum cost s-t cut can be found in polynomial time.
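
The slides give no code, but the max-flow route to a minimum s-t cut they rely on can be sketched with a small Edmonds-Karp implementation. This is an illustrative sketch, not the lecture's method: the function name and the toy capacity matrix below are mine.

```python
from collections import deque

def max_flow_min_cut(capacity, s, t):
    """Edmonds-Karp max flow on a dense capacity matrix.

    Returns (max flow value, set of nodes on the source side of the min cut).
    """
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]

    def bfs():
        # shortest augmenting path search in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        return parent

    total = 0
    while True:
        parent = bfs()
        if parent[t] == -1:
            break
        # bottleneck capacity along the augmenting path
        path_flow = float('inf')
        v = t
        while v != s:
            u = parent[v]
            path_flow = min(path_flow, capacity[u][v] - flow[u][v])
            v = u
        # push the bottleneck flow along the path
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += path_flow
            flow[v][u] -= path_flow
            v = u
        total += path_flow

    # nodes still reachable from s in the residual graph form the source side
    reachable = {i for i, p in enumerate(bfs()) if p != -1}
    return total, reachable
```

For the toy graph with capacities s→1 of 3, s→2 of 2, 1→2 of 1, 1→t of 2, 2→t of 3, both outgoing edges of s end up saturated, so the min cut is {s} versus the rest with value 5, matching the max flow.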

  8. Segmentation Model (GrabCut) • Hard segmentation – probabilistic framework. • Input: an image consisting of pixels. • Output: a segmentation of the pixels. • Components: a color model and a coherence (edge) model.

  9. Color Model (GrabCut) • Hard segmentation – probabilistic framework. • Assignment variable: which mixture component does a pixel belong to?

  10. GraphCut for Inference (GrabCut) • Graph: source (foreground), image pixels, sink (background). • Cut: a collection of edges which separates the source from the sink. • MinCut: the cut with minimum weight (sum of edge weights). • Solution: the global optimum (MinCut) in polynomial time.

  11. GraphCut for Inference (GrabCut) • MinCut minimizes the energy of the MRF (up to a constant):

  12. MinCut • Edge weights must all be positive; • then the problem is soluble in polynomial time by a max-flow algorithm. • Weights are capacities, so a negative capacity does not make sense.

  13. MaxCut • Edge weights may be negative. • Note MaxCut and MinCut are the same problem; however, the term MaxCut is used when weights can be both negative and positive. • MaxCut is NP-hard.

  14. Negative Weights, MaxCut • Edge weights may be negative. • The problem is NP-hard. • With SDP, an approximation ratio of 0.878 can be obtained (Goemans-Williamson '95), i.e. within about 13% of the global energy minimum.

  15. Why Negative Weights? • In the example above the MinCut produces an odd segmentation; negative weights encode the idea of a repulsive force that might yield a better segmentation.

  16. Pop out • Maybe need attractive and repulsive forces to get Gestalt effects:

  17. MaxCut Integer Program • Graph • Cut

  18. Laplacian • ∑_{ij} w_ij (1 − x_i x_j) = x^T (diag(W1) − W) x • L = diag(W1) − W is the Laplacian of the graph (here x_i ∈ {−1, +1}, so x_i² = 1).
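
The Laplacian identity on this slide is easy to check numerically. A small sketch with a random symmetric weight matrix and a random ±1 labelling (all names are mine, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# symmetric weight matrix with zero diagonal (weights may be negative, as in MaxCut)
W = rng.standard_normal((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)

L = np.diag(W @ np.ones(n)) - W        # graph Laplacian: diag(W 1) - W
x = rng.choice([-1.0, 1.0], size=n)    # a +/-1 labelling of the vertices

# left-hand side: sum over all ordered pairs (i, j)
lhs = sum(W[i, j] * (1 - x[i] * x[j]) for i in range(n) for j in range(n))
rhs = x @ L @ x                        # right-hand side: quadratic form in the Laplacian
assert np.isclose(lhs, rhs)
```

The identity holds because x_i² = 1 turns the degree term ∑_i d_i x_i² into ∑_{ij} w_ij.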

  19. MaxCut Integer Program • The Laplacian is positive semidefinite. • Min/Max Cut minimizes this integer program (cf. Hopfield network):

  20. Solving via Relaxation • The problem above is NP-complete. • Some NP-complete problems can be approximated using a "relaxation", e.g. from binary to continuous variables. • Next, semidefinite programming is explained, and then it is shown what relaxation can be used to help solve MaxCut.

  21. Solving via Relaxation • Keuchel et al. 2002 suggest adding the balance constraint x^T e = 0, where e = (1, …, 1)^T, to favour partitions with equal numbers of nodes.

  22. Solving via Relaxation • The vector e corresponds to no cut, so it is an eigenvector of L with eigenvalue 0. A natural relaxation of the problem is to drop the integer constraint and solve for the eigenvector of L with the second smallest eigenvalue (the Fiedler vector, cf. Shi & Malik). • The eigenvector may or may not be binary.
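
A minimal sketch of this spectral relaxation, computing the Fiedler vector for a toy graph of two triangles joined by a single bridge edge (the graph and all names are my own illustration, not from the slides):

```python
import numpy as np

# adjacency of two triangles {0,1,2} and {3,4,5} joined by the edge (2,3)
W = np.zeros((6, 6))
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
for i, j in edges:
    W[i, j] = W[j, i] = 1.0

L = np.diag(W.sum(axis=1)) - W   # graph Laplacian
vals, vecs = np.linalg.eigh(L)   # eigenvalues returned in ascending order
fiedler = vecs[:, 1]             # eigenvector of the second-smallest eigenvalue
labels = np.sign(fiedler)        # threshold at zero to read off a partition
```

For this graph the signs of the Fiedler vector separate the two triangles, i.e. one triangle gets label +1 and the other −1, which is exactly the intuitive cut through the bridge.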

  23. Linear Programming (LP) • Canonical form; inequalities are removed by the introduction of surplus variables.

  24. LP Example
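
The slide's LP example is not reproduced in this transcript. As a stand-in, here is a toy LP solved with SciPy's `linprog`; the specific problem (maximise 3x + 2y subject to x + y ≤ 4, x ≤ 2, x, y ≥ 0) is my own illustration and assumes SciPy is available:

```python
from scipy.optimize import linprog

# linprog minimises, so negate the objective to maximise 3x + 2y
c = [-3.0, -2.0]
A_ub = [[1.0, 1.0],   # x + y <= 4
        [1.0, 0.0]]   # x     <= 2
b_ub = [4.0, 2.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
# optimum at (x, y) = (2, 2) with objective value 3*2 + 2*2 = 10
```

The optimum sits at a vertex of the feasible polytope, which is exactly the structure the simplex method exploits; the SDP slides that follow generalise this polytope to a spectrahedron.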

  25. Semidefinite Programming • If X and the A_i are diagonal, this is a linear program. • ⟨X, Y⟩ is also used for the inner product of two matrices.

  26. Positive Semidefinite Matrices • Here W denotes a Gram matrix, not the weight matrix.

  27. Semi Definite Programming

  28. Semi Definite Programming • If X is diagonal then this reduces to LP. • The feasible solution space of SDP is convex. • Solvable to any fixed accuracy in polynomial time. • Note most non-trivial applications of SDP are equivalent to minimizing the sum of the first few eigenvalues of X subject to some linear constraints on X.

  29. Recall: • Min/Max Cut minimizes this integer program:

  30. SDP Relaxation of MaxCut • The relaxation drops the rank-one constraint on X, requiring only that X be positive semidefinite. • Bad news: many more variables.

  31. Binary Variables • Max ∑_{ij} w_ij (1 − x_i x_j) / 2 • s.t. x_i ∈ {−1, +1}

  32. Graph Vertices: Unit Vectors • Max ∑_{ij} w_ij (1 − v_i · v_j) / 2 • s.t. v_i ∈ R^n, ||v_i|| = 1

  33. Algorithm • X is a continuous (PSD) matrix, recovered by SDP. • Recover V with X = V^T V by Cholesky decomposition; each vertex is represented by a vector v_i (a column of V) on the unit sphere. • Choose a random hyperplane to bisect the unit sphere and define the cut.
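
The rounding steps above can be sketched as follows. This assumes the SDP solution X is already available from some solver; for robustness the factorisation below uses an eigendecomposition rather than the Cholesky decomposition mentioned in the slide (Cholesky requires X to be strictly positive definite), and all names are illustrative:

```python
import numpy as np

def round_sdp_solution(X, W, trials=100, seed=0):
    """Goemans-Williamson rounding: factor X = V^T V, then cut with random hyperplanes.

    X: PSD solution of the MaxCut SDP relaxation (unit diagonal).
    W: symmetric weight matrix. Returns the best (+/-1 labelling, cut value) found.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]

    # factor X = V^T V via eigendecomposition; column i of V is the unit vector v_i
    vals, vecs = np.linalg.eigh(X)
    V = np.sqrt(np.clip(vals, 0, None))[:, None] * vecs.T

    best_x, best_val = None, -np.inf
    for _ in range(trials):
        r = rng.standard_normal(n)      # normal of a random hyperplane
        x = np.sign(r @ V)              # side of the hyperplane for each v_i
        x[x == 0] = 1.0
        # cut weight: (1/4) sum over ordered pairs of w_ij (1 - x_i x_j)
        val = 0.25 * np.sum(W * (1 - np.outer(x, x)))
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val
```

As a sanity check, if X is the rank-one matrix xx^T of an integral labelling, every random hyperplane recovers ±x, so the rounding returns exactly that labelling's cut value.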

  34. Recall • v_i · v_j = |v_i| |v_j| cos(α) • Vertices on the same side of the cut are nearby on the unit sphere and yield a large dot product; vertices far apart yield a small (negative) dot product.

  35. An SDP Relaxation of MaxCut – Geometric Intuition • Embed the vertices of the graph on the unit sphere such that vertices that are joined by edges are far apart.

  36. Random separation

  37. Algorithm Analysis • The probability that two vectors v_i, v_j are separated by a random hyperplane:
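
That probability is θ/π, where θ is the angle between the two unit vectors (the hyperplane separates them exactly when its normal falls in one of two wedges of angle θ). A quick Monte Carlo check of this fact, with toy values of my own choosing; working in the plane spanned by the two vectors loses no generality:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 2 * np.pi / 3                        # 120 degrees between the two vectors
vi = np.array([1.0, 0.0])
vj = np.array([np.cos(theta), np.sin(theta)])

# draw random hyperplane normals; the pair is "separated" when the signs differ
r = rng.standard_normal((200000, 2))
separated = np.sign(r @ vi) != np.sign(r @ vj)
estimate = separated.mean()                  # should be close to theta / pi = 2/3
```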

  38. Algorithm Analysis • Calculate the expected value of the cut. • Note the value of the relaxation exceeds the cost of MaxCut (as it lives in a less constrained space). • Note the following identity: ratio ≥ min over −1 ≤ x < 1 of (2/π) · arccos(x) / (1 − x) = 0.87856…
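
The Goemans-Williamson constant in the identity above can be verified numerically by a direct grid search; this is a check of a known value, not code from the lecture:

```python
import numpy as np

# minimise (2/pi) * arccos(x) / (1 - x) over -1 <= x < 1; the minimum is
# attained near x = -0.689 and equals about 0.87856
xs = np.linspace(-1.0, 1.0, 200001, endpoint=False)   # excludes x = 1 (zero denominator)
ratio = (2.0 / np.pi) * np.arccos(xs) / (1.0 - xs)
alpha = ratio.min()
```

Per edge, the expected cut contribution arccos(v_i·v_j)/π is at least alpha times the SDP contribution (1 − v_i·v_j)/2, which is where the 0.878 approximation guarantee of slide 14 comes from.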

  39. Expected Value of Cut

  40. Classification by Ellipsoids • Rosen [95]. • Constrain A matrices to be PSD.
