
Tutorial 12: Linear programming, quadratic programming




Presentation Transcript


  1. Tutorial 12: Linear programming, quadratic programming

  2. Linear case. We already discussed that the role of the constraints in optimization is to define the search region Ω within the space Rⁿ on which f(x) is defined. Generally, each equality constraint reduces the dimensionality by 1, while each inequality constraint carves out a region of the space without reducing its dimensionality. Now consider the minimization of a linear function f(x) = cᵀx + b over the search region Ω defined by the constraints Ax = a (equalities) and Bx ≥ d (inequalities). M4CS 2005
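As a concrete companion to this setup, here is a minimal pure-Python sketch (with made-up c, b, and constraint data; the helper names f, in_omega are invented for illustration) that evaluates a linear objective and tests membership in a region defined by one equality and one inequality constraint:

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def f(x, c, b):
    """Linear objective f(x) = c^T x + b."""
    return dot(c, x) + b

def in_omega(x, a, alpha, g, beta, tol=1e-9):
    """Feasibility for one equality (a^T x = alpha) and one inequality (g^T x >= beta)."""
    return abs(dot(a, x) - alpha) <= tol and dot(g, x) >= beta - tol

c, b = [1.0, -2.0], 3.0
a, alpha = [1.0, 1.0], 2.0   # equality: x1 + x2 = 2 (dimension reduced by 1)
g, beta = [1.0, 0.0], 0.0    # inequality: x1 >= 0 (half-space, same dimension)

x = [0.5, 1.5]
print(f(x, c, b), in_omega(x, a, alpha, g, beta))   # 0.5 True
```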

  3. Linear case: Illustration. [Figure: level sets of cᵀx + b over the region Ω, with the minimum x* on the boundary.] The figure illustrates that in this linear case the minimum is reached on the boundary of the region Ω. We leave the proof to you and proceed to the more general, and slightly harder, case of convex functions.
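The boundary claim can be checked numerically: for a linear objective over a polygon, no dense grid of interior samples beats the best vertex. A small sketch with an arbitrary c and the unit square as Ω:

```python
# For a linear f over a polygonal Omega, the minimum over a dense sample
# grid is never below the minimum over the vertices.
c = [2.0, -1.0]
verts = [(0, 0), (1, 0), (1, 1), (0, 1)]          # unit square
f = lambda x: c[0] * x[0] + c[1] * x[1]

vmin = min(f(v) for v in verts)                    # best vertex value
N = 50
grid_min = min(f((i / N, j / N)) for i in range(N + 1) for j in range(N + 1))
print(vmin, grid_min, vmin <= grid_min)            # -1.0 -1.0 True
```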

  4. Convexity. Definition 1: The region Ω is called convex if for any x1, x2 ∈ Ω and any λ ∈ [0,1], the point λx1 + (1−λ)x2 also belongs to Ω. Definition 2: The function f(x) is called convex if for any x1, x2 and any λ ∈ [0,1], λf(x1) + (1−λ)f(x2) ≥ f(λx1 + (1−λ)x2). If '≥' is replaced with '>' (for λ ∈ (0,1) and x1 ≠ x2), the function is called strictly convex. Note that a linear function is convex, but not strictly convex.
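Definition 2 can be probed numerically. The sketch below (pure Python, random sampling; is_convex_1d is a hypothetical helper name) tests the convexity inequality for a few one-dimensional functions:

```python
import random

def is_convex_1d(f, trials=1000, lo=-5.0, hi=5.0, tol=1e-12):
    """Randomly test lam*f(x1) + (1-lam)*f(x2) >= f(lam*x1 + (1-lam)*x2)."""
    for _ in range(trials):
        x1, x2 = random.uniform(lo, hi), random.uniform(lo, hi)
        lam = random.uniform(0.0, 1.0)
        if lam * f(x1) + (1 - lam) * f(x2) < f(lam * x1 + (1 - lam) * x2) - tol:
            return False
    return True

print(is_convex_1d(lambda x: x * x))      # strictly convex -> True
print(is_convex_1d(lambda x: 2 * x + 1))  # linear: convex, equality holds -> True
print(is_convex_1d(lambda x: -x * x))     # concave -> False
```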

  5. Convexity: Illustration. [Figure: example graphs of linear, strictly convex, convex, and general functions.] The figure illustrates the relations between linear, convex and strictly convex functions: every linear function is convex and every strictly convex function is convex, but not conversely.

  6. Linear programming: Definition. Minimization of a linear function within a convex region Ω is called linear programming. Note that with an appropriate and sufficiently 'tall' matrix B (sufficiently many inequality constraints), an arbitrary convex region can be approximated to arbitrarily high accuracy. (Can you prove it?) Many linear programming problems carry the additional constraint that the components of x be non-negative: xᵢ ≥ 0. We will prove another, related, claim: a set of linear equality and inequality constraints defines a convex region.
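The approximation remark can be illustrated concretely: the unit disk is approximately the intersection of many half-planes, one tangent constraint per direction, giving a "tall" B. A small sketch:

```python
import math

# Approximate the unit disk by m half-plane constraints:
# (cos t_k) x1 + (sin t_k) x2 <= 1, k = 0..m-1 — the rows of a tall B.
m = 360
rows = [(math.cos(2 * math.pi * k / m), math.sin(2 * math.pi * k / m))
        for k in range(m)]

def feasible(x):
    """True iff x satisfies every half-plane constraint."""
    return all(g0 * x[0] + g1 * x[1] <= 1.0 for g0, g1 in rows)

print(feasible((0.5, 0.5)))   # inside the disk -> True
print(feasible((1.1, 0.0)))   # outside -> False
```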

  7. Quadratic programming: Definition. The only difference between quadratic programming and linear programming is that the objective function can be a quadratic form: f(x) = ½xᵀHx + cᵀx + b.
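A minimal sketch of evaluating such a quadratic objective, ½xᵀHx + fᵀx, with made-up H and f (chosen so the unconstrained minimizer is x = (1, 0)):

```python
def quad(H, f, x):
    """Evaluate 0.5 * x^T H x + f^T x in pure Python."""
    n = len(x)
    q = sum(x[i] * H[i][j] * x[j] for i in range(n) for j in range(n))
    return 0.5 * q + sum(f[i] * x[i] for i in range(n))

H = [[2.0, 0.0], [0.0, 2.0]]   # objective = x1^2 + x2^2 - 2*x1
f = [-2.0, 0.0]
print(quad(H, f, [1.0, 0.0]))  # -1.0, the unconstrained minimum value
```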

  8. Linear constraints define convex regions 1/3. Claim: a region Ω defined by a set of linear equality and inequality constraints, Ax = a and Bx ≥ d, is convex. Proof: If Ω is empty or contains a single point, the definition of convexity is satisfied trivially. Otherwise, consider any x1, x2 ∈ Ω and any λ ∈ [0,1]. We need to prove that x3 = λx1 + (1−λ)x2 ∈ Ω.

  9. Proof (equality constraints) 2/3. Assume the contrary: x3 ∉ Ω. This means that x3 violates at least one of the constraints. First, assume that it is one of the equality constraints, say row i: (Ax3)ᵢ ≠ aᵢ. Note that since x1, x2 ∈ Ω, they satisfy this constraint: (Ax1)ᵢ = aᵢ and (Ax2)ᵢ = aᵢ. Applying the definition of x3 and the linearity of the scalar product, we obtain (Ax3)ᵢ = λ(Ax1)ᵢ + (1−λ)(Ax2)ᵢ = λaᵢ + (1−λ)aᵢ = aᵢ. Thus x3 satisfies all the equality constraints, contrary to the assumption.

  10. Proof (inequality constraints) 3/3. Now assume that x3 violates an inequality constraint, say row i: (Bx3)ᵢ < dᵢ. On the other hand, x1, x2 ∈ Ω, therefore (Bx1)ᵢ ≥ dᵢ and (Bx2)ᵢ ≥ dᵢ. Let us write (Bx3)ᵢ = λ(Bx1)ᵢ + (1−λ)(Bx2)ᵢ. Since λ ≥ 0 and 1−λ ≥ 0, we obtain (Bx3)ᵢ ≥ λdᵢ + (1−λ)dᵢ = dᵢ, a contradiction. Thus x3 satisfies all the inequality constraints as well. We have proven that for any x1, x2 satisfying the linear equality and inequality constraints, and any λ ∈ [0,1], the point x3 = λx1 + (1−λ)x2 also satisfies these constraints. Therefore, linear constraints define a convex region.
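The proof can be sanity-checked numerically: take two feasible points for some hypothetical linear constraints and verify that random convex combinations stay feasible:

```python
import random

# Hypothetical constraints: a^T x = alpha (equality), c^T x >= beta (inequality).
a, alpha = [1.0, 1.0, 0.0], 1.0
c, beta = [0.0, 1.0, 1.0], 0.0

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def feasible(x, tol=1e-9):
    return abs(dot(a, x) - alpha) <= tol and dot(c, x) >= beta - tol

x1 = [0.3, 0.7, 0.1]   # feasible by construction
x2 = [1.0, 0.0, 0.5]   # feasible by construction

ok = True
for _ in range(100):
    lam = random.uniform(0.0, 1.0)
    x3 = [lam * u + (1 - lam) * v for u, v in zip(x1, x2)]
    ok = ok and feasible(x3)   # convex combination stays feasible
print(feasible(x1), feasible(x2), ok)   # True True True
```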

  11. Example: Support Vector Machines 1/6. Given a set of labeled points (xᵢ, dᵢ), dᵢ ∈ {+1, −1}, which is linearly separable, find the vector w that defines the pair of parallel separating hyperplanes with maximum margin. [Figure: the two classes, the separating hyperplane xᵀw = γ, and the normal direction w.]

  12. Example: Support Vector Machines 2/6. For the sake of an elegant mathematical description, we define the data matrix A, whose i-th row is xᵢᵀ, and the label matrix D = diag(d₁, …, dN). We are looking for the vector w and an appropriate constant γ such that: xᵢᵀw ≥ γ + 1 for dᵢ = +1, and xᵢᵀw ≤ γ − 1 for dᵢ = −1. Note that these two cases can be combined (e denotes the vector of ones): D(Aw − γe) ≥ e. (1)

  13. Example: Support Vector Machines 3/6. Note that by multiplying w and γ by some factor, we can seemingly increase the separation between the planes in (1). Therefore, the best separation has to maintain inequality (1) while minimizing the length of w: min ½‖w‖² subject to D(Aw − γe) ≥ e. (2) This is a constrained minimization problem with a quadratic objective function and linear inequality constraints. It is called quadratic programming.
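The scaling argument can be made concrete: multiplying w by t > 1 inflates the functional margin in (1) by t, but the geometric distance xᵀw/‖w‖ is unchanged, which is why (2) fixes the functional margin at 1 and minimizes ‖w‖. A sketch (γ = 0, made-up support point):

```python
import math

w = [0.5, 0.5]
x = [1.0, 1.0]                 # a support point: x^T w = 1
norm = math.hypot(*w)

fm = x[0] * w[0] + x[1] * w[1]
print(fm)                      # functional margin: 1.0
print(fm / norm)               # geometric distance: 1/||w||

t = 3.0
wt = [t * wi for wi in w]
fmt = x[0] * wt[0] + x[1] * wt[1]
print(fmt)                     # functional margin scales to 3.0 ...
print(fmt / math.hypot(*wt))   # ... but the distance is unchanged
```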

  14. Example: SVM, solution in Matlab 4/6. We will use Matlab's quadprog:
>> help quadprog
w = QUADPROG(H,f,A,b) attempts to solve the quadratic programming problem:
    min over w of 0.5*w'*H*w + f'*w   subject to:  A*w <= b
In our case H = I and f = 0 (and, in the example below, the threshold γ is dropped). For clarity, we bring the constraint (2) to the form compatible with Matlab's notation: −DAw ≤ −e.

  15. Example: SVM, solution in Matlab 5/6. Recall that w = QUADPROG(H,f,A,b) solves min over w of 0.5*w'*H*w + f'*w subject to A*w <= b. Thus, we have: H = eye(n), f = zeros(n,1), and for the constraint, the matrix −D*A in place of A and the vector −e (one −1 per data point) in place of b.

  16. Example: SVM, solution in Matlab 6/6.
n = 2; PN = 20;
A = [rand(PN,2)+.1; -rand(PN,2)-.1];       % the data
D = diag([ones(1,PN), -ones(1,PN)]);       % the labels
plot(A(1:PN,1), A(1:PN,2), 'g*');          % plot the data
hold on;
plot(A(PN+1:2*PN,1), A(PN+1:2*PN,2), 'bo');
% adjust the input to quadprog()
H = eye(n); f = zeros(n,1);
AA = -D*A; b = -ones(PN*2,1);
w = quadprog(H,f,AA,b)                     % quadratic programming - takes milliseconds
% plot the separating plane
W_orth = [-.3:.01:.3]'*[w(2), -w(1)];
plot(W_orth(:,1), W_orth(:,2), 'k.')
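For readers without Matlab, the same QP can be sketched in pure Python. Instead of quadprog, the sketch below uses a quadratic-penalty gradient descent on two toy points, a crude substitute for an exact QP solver that is adequate only for illustration:

```python
# Minimize 0.5*||w||^2 subject to d_i * x_i^T w >= 1, via a squared-hinge
# penalty and plain gradient descent (toy substitute for quadprog).
X = [(1.0, 1.0), (-1.0, -1.0)]   # two separable points
d = [1.0, -1.0]                  # their labels

w = [0.0, 0.0]
C, lr = 100.0, 0.001             # penalty weight, step size
for _ in range(20000):
    g = list(w)                                    # gradient of 0.5*||w||^2
    for (x, di) in zip(X, d):
        s = 1.0 - di * (x[0] * w[0] + x[1] * w[1]) # constraint violation
        if s > 0:                                  # penalized only if violated
            g[0] -= 2.0 * C * s * di * x[0]
            g[1] -= 2.0 * C * s * di * x[1]
    w = [w[0] - lr * g[0], w[1] - lr * g[1]]

print(w)   # close to [0.5, 0.5], the max-margin separator for these points
```

The exact solution here is w = (0.5, 0.5): the support points (±1, ±1) then sit exactly on the planes xᵀw = ±1; the penalty method lands slightly inside that.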
