
Combining Linear Programming Based Decomposition Techniques with Constraint Programming

Menkes van den Briel, Member of Research Staff, NICTA and ANU (menkes@nicta.com.au)





Presentation Transcript


  1. Combining Linear Programming Based Decomposition Techniques with Constraint Programming • Menkes van den Briel, Member of Research Staff, NICTA and ANU • menkes@nicta.com.au

  2. CP-based column generation

  3. CP-based column generation

  4. CP-based Benders decomposition

  5. CP versus IP • [diagram: global optimal versus local feasible solutions]

  6. CP versus IP • “MILP is very efficient when the relaxation is tight and models have a structure that can be effectively exploited” • “CP works better for highly constrained discrete optimization problems where expressiveness of MILP is a major limitation” • “From the work that has been performed, it is not clear whether a general integration strategy will always perform better than either CP or an MILP approach by itself. This is especially true for the cases where one of these methods is a very good tool to solve the problem at hand. However, it is usually possible to enhance the performance of one approach by borrowing some ideas from the other” • Source: Jain and Grossmann, 2001

  7. Outline • Background • Introduction • Dantzig Wolfe decomposition • Benders decomposition • Conclusions

  8. What is your background? • Have implemented Benders and/or Dantzig Wolfe decomposition • Have heard about Benders and/or Dantzig Wolfe decomposition • Have seen Bender and/or Dances with Wolves

  9. Things to take away • A better understanding of how to combine linear programming based decomposition techniques with constraint programming • A better understanding of column generation, Dantzig Wolfe decomposition and Benders decomposition • A whole lot of Python code with example implementations

  10. Helpful installations • Python 2.6.x or 2.7.x • “Python is a programming language that lets you work more quickly and integrate your systems more effectively” • http://www.python.org/getit/ • Gurobi (Python interface) • “The state-of-the-art solver for linear programming (LP), quadratic and quadratically constrained programming (QP and QCP), and mixed-integer programming (MILP, MIQP, and MIQCP)” • http://www.gurobi.com/products/gurobi-optimizer/try-for-yourself • NetworkX • “NetworkX is a Python language software package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks” • http://networkx.lanl.gov/download.html

  11. Abbreviations • Artificial Intelligence (AI) • Constraint Programming (CP) • Constraint Satisfaction Problem (CSP) • Integer Programming (IP) • Linear Programming (LP) • Mixed Integer Programming (MIP) • Mixed Integer Linear Programming (MILP) • Mathematical Programming (MP) • Operations Research (OR)

  12. Outline • Background • Introduction • Dantzig Wolfe decomposition • Benders decomposition • Conclusions

  13. What is decomposition? • “Decomposition in computer science, also known as factoring, refers to the process by which a complex problem or system is broken down into parts that are easier to conceive, understand, program, and maintain” • Source: http://en.wikipedia.org/wiki/Decomposition_(computer_science) • Decomposition in linear programming is a technique for solving linear programming problems where the constraints (or variables) of the problem can be divided into two groups, one group of “easy” constraints and another of “hard” constraints

  14. “easy” versus “hard” constraints • Referring to the constraints as “easy” and “hard” may be a bit misleading • The “hard” constraints need not be very difficult in themselves, but they can complicate the linear program, making the overall problem more difficult to solve • When the “hard” constraints are removed from the problem, more efficient techniques can be applied to solve the resulting linear program

  15. Example • G = (N, A), source s, sink t • Shortest path problem (P): Min Σ(i,j)∈A cij xij s.t. Σj:(i,j)∈A xij − Σj:(j,i)∈A xji = 1 for i = s (source), 0 for i ∈ N − {s, t} (flow), −1 for i = t (sink); xij ∈ {0, 1} • Resource constrained shortest path problem (NP-complete): Min Σ(i,j)∈A cij xij s.t. Σj:(i,j)∈A xij − Σj:(j,i)∈A xji = 1 for i = s (source), 0 for i ∈ N − {s, t} (flow), −1 for i = t (sink); Σ(i,j)∈A dij xij ≤ C (capacity); xij ∈ {0, 1}

  16. Example • m jobs, n machines • Assignment problem (P): Max Σi=1..m, j=1..n cij xij s.t. Σj=1..n xij = 1 for 1 ≤ i ≤ m (job), Σi=1..m xij = 1 for 1 ≤ j ≤ n (machine), xij ∈ {0, 1} • Generalized assignment problem (NP-complete): Max Σi=1..m, j=1..n cij xij s.t. Σj=1..n xij = 1 for 1 ≤ i ≤ m (job), Σi=1..m dij xij ≤ Cj for 1 ≤ j ≤ n (capacity), xij ∈ {0, 1}
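The assignment problem above is easy to solve exactly on a tiny instance, which makes the contrast with its NP-complete generalization concrete. A minimal brute-force sketch; the 3×3 profit matrix below is made up for illustration and is not from the slides:

```python
from itertools import permutations

def solve_assignment(c):
    """Brute-force the max-profit assignment problem for a square
    profit matrix c: try every one-to-one job -> machine assignment."""
    n = len(c)
    best_value, best_perm = float("-inf"), None
    for perm in permutations(range(n)):   # perm[i] = machine of job i
        value = sum(c[i][perm[i]] for i in range(n))
        if value > best_value:
            best_value, best_perm = value, perm
    return best_value, best_perm

# Hypothetical profit matrix: 3 jobs, 3 machines.
profits = [[4, 1, 3],
           [2, 0, 5],
           [3, 2, 2]]
value, perm = solve_assignment(profits)   # value 11, assignment (0, 2, 1)
```

Brute force is only viable for a handful of jobs; the point of the later decomposition material is precisely to avoid this kind of enumeration.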

  17. Example • Consider developing a strategic corporate plan for several production facilities. Each facility has its own capacity and production constraints, but decisions are linked together at the corporate level by budgetary considerations • [diagram: common (linking) constraints at the corporate level; independent constraints for Facility 1, Facility 2, …, Facility n]

  18. “easy” versus “hard” variables • Referring to the variables as “easy” and “hard” may be a bit misleading • The “hard” variables need not be very difficult in themselves, but they can complicate the linear program, making the overall problem more difficult to solve • When the “hard” variables are removed from the problem, more efficient techniques can be applied to solve the resulting linear program

  19. Example • m facilities, n customers • Capacitated facility location problem (NP-complete): Min Σi=1..m, j=1..n cij xij + Σi=1..m fi yi s.t. Σi=1..m xij ≥ 1 for j = 1, …, n (demand), Σj=1..n dj xij ≤ Ci yi for i = 1, …, m (capacity), xij ≤ yi for i = 1, …, m, j = 1, …, n (flow implication), xij ≥ 0, yi ∈ {0, 1}

  20. Example • Consider solving a multi period scheduling problem. Each period has its own set of variables, but the periods are linked together through resource consumption variables • [diagram: common (linking) variables across Period 1, Period 2, …, Period n, each period with its own independent variables]

  21. Outline • Background • Introduction • Dantzig Wolfe decomposition • Benders decomposition • Conclusions

  22. Background • Primal: Min cx s.t. Ax ≥ b [y], x ≥ 0 • Dual: Max yᵀb s.t. yᵀA ≤ c [x], y ≥ 0

  23. Background • Primal: Min cx s.t. Ax ≥ b [y], x ≥ 0 • Dual: Max bᵀy s.t. Aᵀy ≤ cᵀ [x], y ≥ 0 • [diagram: the primal data (c, A, b) and the dual data (bᵀ, Aᵀ, cᵀ) laid out side by side]

  24. Travelling salesman • G = (N, A), cost cij • [figure: a 10-node instance, nodes 0–9]

  25. Travelling salesman • G = (N, A), cost cij • [figure: a tour of cost 60.78]

  26. Travelling salesman • Variables: xij is 1 if arc (i, j) is on the shortest tour, 0 otherwise • Formulation: Min Σ(i,j)∈A cij xij s.t. Σi:(i,j)∈A xij = 1 for j ∈ N (inflow), Σj:(i,j)∈A xij = 1 for i ∈ N (outflow), Σi,j∈S:(i,j)∈A xij ≤ |S| − 1 for S ⊂ N (subtour), xij ∈ {0, 1}

  27. Travelling salesman • Variables: xij is 1 if arc (i, j) is on the shortest tour, 0 otherwise • Formulation: Min Σ(i,j)∈A cij xij s.t. Σi:(i,j)∈A xij = 1 for j ∈ N (inflow), Σj:(i,j)∈A xij = 1 for i ∈ N (outflow), xij ∈ {0, 1}

  28. Example code
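The deck's Gurobi model is not reproduced in the transcript. The heart of the row-generation loop it illustrates, though, is solver-independent: given an integer solution of the relaxation without subtour constraints, find the subtours, and add a violated cut Σi,j∈S xij ≤ |S| − 1 for each one. A sketch of that detection step (the function name and the successor dictionary are ours; the subtours echo those on the following slides):

```python
def find_subtours(succ):
    """Given succ[i] = j for the arcs with x_ij = 1 in a relaxed TSP
    solution (a permutation of the nodes), return the node lists of
    all directed cycles.  Every cycle shorter than |N| yields a
    violated subtour elimination cut."""
    unvisited = set(succ)
    cycles = []
    while unvisited:
        start = next(iter(unvisited))
        cycle, node = [], start
        while node in unvisited:          # walk until the cycle closes
            unvisited.remove(node)
            cycle.append(node)
            node = succ[node]
        cycles.append(cycle)
    return cycles

# Hypothetical integer solution on nodes {0, 1, 2, 7, 8, 9}:
succ = {0: 2, 2: 7, 7: 0, 8: 1, 1: 9, 9: 8}
subtours = find_subtours(succ)            # two cycles: {0, 2, 7}, {1, 8, 9}
```

In the full loop, the cuts for these cycles would be added to the model and the relaxation re-solved until a single tour remains.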

  29. Travelling salesman • G = (N, A), cost cij • [figure: relaxed solution containing subtour 0, 2, 7]

  30. Travelling salesman • G = (N, A), cost cij • [figure: relaxed solution containing subtour 0, 8, 1, 9]

  31. Travelling salesman • G = (N, A), cost cij • [figure: relaxed solution containing subtour 0, 8, 2, 7]

  32. Travelling salesman • G = (N, A), cost cij • [figure: tour of cost 79.98]

  33. Travelling salesman • G = (N, A), cost cij • [figure: optimal tour of cost 60.78]

  34. LPs with many constraints • The number of constraints that are tight (or active) is at most equal to the number of variables, so even with many constraints (possibly exponentially many) only a small subset will be tight in the optimal solution • [figure: feasible region with active and non-active constraints at the optimum]

  35. Row generation in the primal… • [figure: primal data (c, A, b) with constraint rows being added]

  36. …is column generation in the dual • [figure: dual data (bᵀ, Aᵀ, cᵀ) with variable columns being added]

  37. …and vice versa • Column generation in the primal = row generation in the dual • [figure: primal (c, A, b) and dual (bᵀ, Aᵀ, cᵀ) side by side]

  38. Resource constrained shortest path • G = (N, A), source s, sink t; for each (i, j) ∈ A a cost cij and resource demand dij; resource capacity C • [figure: 6-node instance with arc labels (cij, dij), capacity = 14] • Source: Desrosiers and Lübbecke, 2005

  39. Resource constrained shortest path • G = (N, A), source s, sink t; for each (i, j) ∈ A a cost cij and resource demand dij; resource capacity C • [figure: optimal path with cost 13 and demand 13, capacity = 14]

  40. Resource constrained shortest path • Variables: xij is 1 if arc (i, j) is on the shortest path, 0 otherwise • Formulation: Min Σ(i,j)∈A cij xij s.t. Σj:(i,j)∈A xij − Σj:(j,i)∈A xji = 1 for i = s (source), 0 for i ∈ N − {s, t} (flow), −1 for i = t (sink); Σ(i,j)∈A dij xij ≤ C (capacity); xij ∈ {0, 1}

  41. Example code
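The transcript omits the code itself. One solver-free way to sketch the arc formulation above is dynamic programming over (node, resource used) states, exploiting integer resource demands; everything below (nodes, arcs, demands) is a made-up toy instance, not the Desrosiers and Lübbecke example from the figure:

```python
def rcsp(nodes, arcs, s, t, capacity):
    """Resource-constrained shortest path by dynamic programming over
    states (node, resource used).  arcs maps (i, j) -> (cost, demand),
    with integer demands.  Returns the minimum cost of an s-t path
    using at most `capacity` resource, or None if none is feasible."""
    INF = float("inf")
    # best[v][r] = cheapest cost to reach v using exactly r resource
    best = {v: [INF] * (capacity + 1) for v in nodes}
    best[s][0] = 0
    for _ in range(len(nodes) - 1):           # Bellman-Ford style passes
        for (i, j), (cost, demand) in arcs.items():
            for r in range(capacity + 1 - demand):
                if best[i][r] + cost < best[j][r + demand]:
                    best[j][r + demand] = best[i][r] + cost
    answer = min(best[t])
    return None if answer == INF else answer

# Hypothetical instance: a cheap high-demand path 1-2-4 and an
# expensive low-demand path 1-3-4.
nodes = [1, 2, 3, 4]
arcs = {(1, 2): (1, 3), (2, 4): (1, 3),
        (1, 3): (10, 1), (3, 4): (1, 1)}
```

Tightening the capacity forces the solution onto the expensive low-demand path, which is exactly the tension the capacity constraint adds to the plain shortest path problem.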

  42. Resource constrained shortest path • Variables: λk is 1 if path k is the shortest path, 0 otherwise • Formulation: Min Σk∈K ck λk s.t. Σk∈K λk = 1 (convexity), Σk∈K dk λk ≤ C (capacity), λk ≥ 0

  43. Arc versus path • Arc variables: one xij per arc (i, j) ∈ A • Path variables: one λk per s-t path k ∈ K • [figure: the example network drawn with individual arcs versus whole s-t paths]

  44. Example code
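Again the code itself is not in the transcript. For the path formulation, the standard column generation step is pricing: given duals π (convexity row) and μ ≤ 0 (capacity row) from the restricted master LP, a path k has reduced cost Σ(i,j)∈k (cij − μ dij) − π, so the cheapest column is found by a shortest path with modified arc costs. A sketch under those assumptions (function name and toy graph are ours; weights are nonnegative since μ ≤ 0, so plain Bellman-Ford suffices and t is assumed reachable):

```python
def price_path(nodes, arcs, s, t, mu, pi):
    """Pricing step for the path formulation: find the s-t path
    minimizing sum_{(i,j)} (c_ij - mu * d_ij) - pi, where arcs maps
    (i, j) -> (cost, demand).  Returns (reduced_cost, path); a
    negative reduced cost means the path should enter the master."""
    INF = float("inf")
    dist = {v: INF for v in nodes}
    pred = {v: None for v in nodes}
    dist[s] = 0.0
    for _ in range(len(nodes) - 1):           # Bellman-Ford
        for (i, j), (c, d) in arcs.items():
            w = c - mu * d                    # modified arc cost
            if dist[i] + w < dist[j]:
                dist[j], pred[j] = dist[i] + w, i
    path, v = [], t
    while v is not None:                      # walk predecessors back
        path.append(v)
        v = pred[v]
    return dist[t] - pi, path[::-1]

# Hypothetical duals and toy graph (same shape as the DP example):
nodes = [1, 2, 3, 4]
arcs = {(1, 2): (1, 3), (2, 4): (1, 3),
        (1, 3): (10, 1), (3, 4): (1, 1)}
rc, path = price_path(nodes, arcs, 1, 4, mu=-1.0, pi=5.0)
```

With these duals the cheapest path is 1-2-4 at reduced cost 3.0, which is nonnegative, so column generation would stop; a more negative μ or smaller π would instead price a new column into the master.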

  45. Revised Simplex method • Min cx s.t. Ax ≥ b, x ≥ 0 • Add slack variables: Min z = cx s.t. Ax = b, x ≥ 0 • Let x be a basic feasible solution, such that x = (xB, xN), where xB is the vector of basic variables and xN is the vector of non-basic variables

  46. Revised Simplex method • Min z = cx s.t. Ax = b, x ≥ 0 • Partition x = (xB, xN), c = (cB, cN), A = (B, AN): Min z = cBxB + cNxN s.t. BxB + ANxN = b, xB, xN ≥ 0 • Rearrange: Min z = cBxB + cNxN s.t. xB = B⁻¹b − B⁻¹ANxN, xB, xN ≥ 0

  47. Revised Simplex method • Min z = cBxB + cNxN s.t. xB = B⁻¹b − B⁻¹ANxN, xB, xN ≥ 0 • Substitute xB into the objective: Min z = cBB⁻¹b + (cN − cBB⁻¹AN)xN s.t. xB = B⁻¹b − B⁻¹ANxN, xB, xN ≥ 0

  48. Revised Simplex method • Min z = cBB⁻¹b + (cN − cBB⁻¹AN)xN s.t. xB = B⁻¹b − B⁻¹ANxN, xB, xN ≥ 0 • At the end of each iteration we have • Current value of non-basic variables: xN = 0 • Current objective function value: z = cBB⁻¹b • Current value of basic variables: xB = B⁻¹b • Objective coefficients of basic variables: 0 • Objective coefficients of non-basic variables: cN − cBB⁻¹AN, the so-called reduced costs • With a minimization objective we want non-basic variables with negative reduced costs
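The quantities on this slide are easy to check numerically. A tiny sketch for a 2×2 basis, computing the simplex multipliers y = cBB⁻¹ and the reduced costs cN − yAN via the closed-form 2×2 inverse; all the matrices below are arbitrary illustrative numbers:

```python
def reduced_costs(B, AN, cB, cN):
    """Compute y = cB B^{-1} and the reduced costs cN - y AN for a
    2x2 basis matrix B, using the closed-form inverse (illustration
    only; real revised simplex codes maintain a factorization of B)."""
    (a, b), (c, d) = B
    det = a * d - b * c
    Binv = [[d / det, -b / det],
            [-c / det, a / det]]
    y = [cB[0] * Binv[0][j] + cB[1] * Binv[1][j] for j in range(2)]
    rc = [cN[k] - (y[0] * AN[0][k] + y[1] * AN[1][k])
          for k in range(len(cN))]
    return y, rc

# Hypothetical data: one non-basic column.
B = [[2.0, 1.0], [1.0, 1.0]]
cB = [3.0, 1.0]
AN = [[1.0], [2.0]]
cN = [4.0]
y, rc = reduced_costs(B, AN, cB, cN)   # y = [2.0, -1.0], rc = [4.0]
```

Here the reduced cost is positive, so for a minimization problem this non-basic variable would not be chosen to enter the basis.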

  49. Revised Simplex method • Simplex algorithm • Select new basic variable (xN to enter the basis) • Select new non-basic variable (xB to exit the basis) • Update data structures

  50. Revised Simplex method • Simplex algorithm: start with xS = b (slack variables equal the rhs) and x\S = 0 (non-slack variables equal 0); while minj {cj − cBB⁻¹Aj} < 0: • Select a new basic variable j with cj − cBB⁻¹Aj < 0 • Select the new non-basic variable j′ by increasing xj as much as possible • Update the data structures by swapping columns between matrix B and matrix AN
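The loop on this slide can be sketched end to end with a dense tableau (not the factorized revised form) for the easy case Min cx s.t. Ax ≤ b with b ≥ 0, where the all-slack starting basis of the slide applies directly; the instance at the bottom is made up:

```python
def simplex(c, A, b):
    """Tableau simplex for  min c.x  s.t.  A x <= b, x >= 0, b >= 0.
    Starts from the all-slack basis and pivots while some reduced
    cost is negative.  Dense and unoptimized, for illustration."""
    m, n = len(A), len(c)
    # Constraint rows: [A | I | b].  Row z: [reduced costs | -objective].
    T = [A[i][:] + [1.0 if k == i else 0.0 for k in range(m)] + [b[i]]
         for i in range(m)]
    z = c[:] + [0.0] * (m + 1)
    basis = list(range(n, n + m))              # slack variables are basic
    while True:
        j = min(range(n + m), key=lambda k: z[k])   # entering variable
        if z[j] >= -1e-9:
            break                              # no negative reduced cost
        ratios = [(T[i][-1] / T[i][j], i)      # ratio test: leaving row
                  for i in range(m) if T[i][j] > 1e-9]
        if not ratios:
            raise ValueError("LP is unbounded")
        _, r = min(ratios)
        piv = T[r][j]                          # pivot on (r, j)
        T[r] = [v / piv for v in T[r]]
        for i in range(m):
            if i != r and T[i][j]:
                f = T[i][j]
                T[i] = [v - f * w for v, w in zip(T[i], T[r])]
        f = z[j]
        z = [v - f * w for v, w in zip(z, T[r])]
        basis[r] = j                           # swap columns in the basis
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return -z[-1], x

# Hypothetical instance: min -x1 - 2*x2  s.t.  x1 + x2 <= 4, x2 <= 2.
objective, solution = simplex([-1.0, -2.0],
                              [[1.0, 1.0], [0.0, 1.0]],
                              [4.0, 2.0])      # -6.0 at x = (2, 2)
```

The three bullets on the slide map directly onto the loop body: the argmin picks the entering variable, the ratio test picks the leaving one, and the pivot updates the tableau in place of swapping columns of B and AN.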
