
Dynamic Programming

Presentation Transcript


  1. Dynamic Programming Mani Chandy mani@cs.caltech.edu

  2. The Pattern • Given a problem P, obtain a sequence of problems Q0, Q1, …, Qm, where: • You have a solution to Q0. • The solution to P can be obtained from the solution to Qm. • The solution to a problem Qj, j > 0, can be obtained from solutions to problems Qk, k < j, that appear earlier in the sequence.
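
A minimal sketch of the pattern (not from the slides; the function names and the Fibonacci toy problem are my own illustration): solve Q0, solve each Qj from the solutions of earlier problems, then read the answer to P off the solution to Qm.

```python
# Hypothetical skeleton of the pattern: solve Q0 first, solve each Qj from
# earlier solutions, then obtain the answer to P from the solution to Qm.
def dp(m, solve_q0, solve_qj, answer_from_qm):
    solutions = [solve_q0()]                      # solution to Q0 is known
    for j in range(1, m + 1):
        solutions.append(solve_qj(j, solutions))  # Qj uses only Qk with k < j
    return answer_from_qm(solutions[m])           # solution to P from Qm

# Toy usage: P = "10th Fibonacci number", Qj = "the pair (F(j), F(j+1))".
print(dp(10,
         solve_q0=lambda: (0, 1),
         solve_qj=lambda j, s: (s[j - 1][1], s[j - 1][0] + s[j - 1][1]),
         answer_from_qm=lambda s: s[0]))          # 55
```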

  3. Dynamic Programming Pattern • Given problem P, propose a partial ordering of problems Q0, …, Qm. • You know how to compute the solution to Q0. • You can compute the solution to P from the solution to Qm.

  4. Creative step: finding problems Qi from problem P. More mechanical step: determining the function that computes the solution Sk for problem Qk from the solutions Sj of problems Qj for j < k.

  5. Example: Matrix Multiplication. What is the cost of multiplying matrices of sizes 1 × N, N × N, N × N, and N × 1?

  6. Cost of multiplying two matrices of sizes p × q and q × r (p rows and q columns times q rows and r columns): the cost is 2pqr, because the resultant matrix has pr elements and the cost of computing each element is 2q operations.
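
As a quick check of the 2pqr count (an illustrative example, not on the slide):

```python
def multiply_cost(p, q, r):
    # Multiplying a p x q matrix by a q x r matrix produces p*r elements,
    # each costing 2q operations, for 2pqr operations in total.
    return 2 * p * q * r

print(multiply_cost(2, 3, 4))  # 48 operations for a 2x3 times 3x4 product
```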

  7. Parenthesization: for the matrices of sizes 1 × N, N × N, N × N, N × 1, if we multiply the two N × N matrices first, the cost is 2N³ and the resulting matrix is N × N.

  8. Parenthesization (continued): multiplying the 1 × N matrix by that N × N result costs on the order of N², and multiplying the resulting 1 × N matrix by the N × 1 matrix costs on the order of N. Thus, the total cost is proportional to N³ + N² + N if we parenthesize the expression in this way.

  9. Different Ordering: for the same matrices 1 × N, N × N, N × N, N × 1, a different parenthesization, one that keeps every intermediate result a 1 × N or N × 1 vector, gives a cost proportional to N².

  10. The Ordering Matters! For the same matrices 1 × N, N × N, N × N, N × 1, one ordering costs O(N³) while the other costs O(N²).
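
Plugging the slide's sizes into the 2pqr count makes the gap concrete (an illustrative calculation; N = 1000 is chosen arbitrarily):

```python
N = 1000
# Multiply the two N x N matrices first: 2N^3 + 2N^2 + 2N operations.
cost_slow = 2 * N**3 + 2 * N**2 + 2 * N
# Sweep so every intermediate result is a vector: 2N^2 + 2N^2 + 2N operations.
cost_fast = 2 * N**2 + 2 * N**2 + 2 * N
print(cost_slow, cost_fast)  # 2002002000 vs 4002000
```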

  11. Generalization: Parenthesization. Given an expression A1 op A2 op … op An, where op is an associative operation and the cost of each operation depends on the parameters of its operands, choose a parenthesization, e.g. ( A1 op ( A2 op A3 ) ) op … op An, that minimizes the total cost.

  12. Creative Step: propose a partial ordering of problems Q0, …, Qm; that is, come up with the problems Qi given problem P.

  13. Creative Step (solution): Qi,j is "optimally parenthesize the expression Ai op … op Aj". • Relatively “mechanical” steps: • Find the partial ordering of problems Qi,j. • Find the function f that computes solution Si,j from the solutions of problems earlier in the ordering.

  14. Partial Ordering Structure (shown for n = 4): the solution to the given problem is obtained from the solution to Q1,n. Q1,4 sits at the top; below it are Q1,3 and Q2,4; then Q1,2, Q2,3, and Q3,4; and at the bottom Q1,1, Q2,2, Q3,3, and Q4,4, whose solutions are known. Each problem depends on problems below it in the ordering.

  15. The Recurrence Relation. Let C[j,k] be the minimum cost of executing Aj op … op Ak. Base case: C[j,j] = 0. Induction step: for k > j, C[j,k] = min over all v (j ≤ v < k) of C[j,v] + C[v+1,k] + the cost of the operation combining [j … v] and [v+1 … k]. Proof: see the proof structure on the next slides.

  16. For matrix multiplication, let the j-th matrix have size p[j-1] × p[j]. Then the matrix obtained by combining [j … v] has size p[j-1] × p[v], and the matrix obtained by combining [v+1 … k] has size p[v] × p[k]. The cost of multiplying [j … v] by [v+1 … k] is therefore 2 · p[j-1] · p[v] · p[k], using the 2pqr count from slide 6.
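
A minimal sketch of the recurrence from slides 15 and 16, assuming the dimensions are given as a list p with matrix Aj of size p[j-1] × p[j]; the code and names are mine, not from the slides.

```python
def matrix_chain_cost(p):
    # C[j][k] = minimum cost of computing Aj ... Ak (1-indexed), where
    # Aj has size p[j-1] x p[j]. Base case: C[j][j] = 0.
    n = len(p) - 1
    C = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):                 # solve shorter chains first
        for j in range(1, n - length + 2):
            k = j + length - 1
            C[j][k] = min(C[j][v] + C[v + 1][k] + 2 * p[j - 1] * p[v] * p[k]
                          for v in range(j, k))
    return C[1][n]

# The running example (slides 5-10): sizes 1 x N, N x N, N x N, N x 1, N = 1000.
print(matrix_chain_cost([1, 1000, 1000, 1000, 1]))  # 4002000
```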

  17. Proof Structure. What is the theorem that we are proving? We make an assertion about the meaning of a term; for instance, C[j,k] is the minimum cost of executing Aj op … op Ak. We are proving that this assertion is correct.

  18. Proof Structure. Almost always, we use induction. Base case: establish that the value of C[j,j] is correct. Induction step: assume that the value of C[j, j+u] is correct for all u less than v, and prove that the value of C[j, j+v] is correct. Remember what we are proving: C[j,k] is the minimum cost of executing Aj op … op Ak.

  19. The Central Idea: Bellman's optimality principle. To solve a larger problem Qa,z we combine solutions to smaller problems such as Qi,j and Qu,v, picking the optimal solution to each smaller problem and discarding the others. The discarded solutions for the smaller problem remain discarded because the optimal solution dominates them.

  20. All-Points Shortest Path • Given a weighted directed graph. • The edge weight W[j,k] represents the distance from vertex j to vertex k. • There are no cycles of negative weight. • For all j, k, compute D[j,k], where D[j,k] is the length of the shortest path from vertex j to vertex k.

  21. The Creative Step. Come up with a partial ordering of problems Qi given problem P. There are different possible problem sets Qi, some better than others.

  22. Creative Step. Let F[j,k,m] be the length of the shortest path from vertex j to vertex k that has at most m hops. What is the partial ordering of problems Q[j,k,m]?

  23. A recurrence relation: F[j,k,m] = min over all r of F[j,r,m-1] + W[r,k]. Base case: F[j,k,1] = W[j,k] (assume W[j,j] = 0 for all j). Obtaining the solution for the given problem P: D[j,k] = F[j,k,n-1].
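
A minimal sketch of this recurrence (my own code, not from the slides), assuming the graph is given as an n × n weight matrix with W[j][j] = 0 and infinity for missing edges:

```python
INF = float("inf")

def all_pairs_shortest_paths(W):
    # F[j][k] is the length of the shortest path from j to k using at most
    # m hops; start with m = 1 (F = W) and raise m by one per iteration.
    n = len(W)
    F = [row[:] for row in W]                 # base case: F[j][k] = W[j][k]
    for _ in range(n - 2):                    # grow the hop bound up to n - 1
        F = [[min(F[j][r] + W[r][k] for r in range(n)) for k in range(n)]
             for j in range(n)]
    return F                                  # D[j][k] = F[j][k] with m = n - 1

W = [[0, 3, INF],
     [INF, 0, 2],
     [1, INF, 0]]
print(all_pairs_shortest_paths(W))            # [[0, 3, 5], [3, 0, 2], [1, 4, 0]]
```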

  24. Proof of Correctness. What are we proving? That the meaning we gave to F[j,k,m] is correct. Base case: we show that F[j,k,1] is indeed the length of the shortest path from vertex j to vertex k that traverses at most one edge. Induction step: assume that F[j,k,m] is the length of the shortest path from j to k that traverses at most m edges, for all m less than p, and prove that F[j,k,p] is the minimum length of a path from j to k that traverses at most p edges.

  25. Complexity? The recurrence computes F[j,k,m] for n² pairs (j,k) and n-1 values of m, taking a minimum over n vertices each time, so the total cost is O(n⁴). Can you do better? Come up with a different partial ordering of problems Qi for problem P: with F[j,k,m] defined as the length of the shortest path from vertex j to vertex k that has at most m hops, consider computing F[j,k,2m] directly from the values F[·,·,m].
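
One way to read the "2m" hint (my interpretation; the slide only hints at it): a path of at most 2m hops splits into two halves of at most m hops each, so F[j,k,2m] = min over r of (F[j,r,m] + F[r,k,m]), and doubling the hop bound each round brings the total cost down to roughly O(n³ log n). A hedged sketch:

```python
INF = float("inf")

def shortest_paths_by_doubling(W):
    # Combine F with itself to double the hop bound each round:
    # F[j][k] at bound 2m = min over r of F[j][r] + F[r][k], both at bound m.
    n = len(W)
    F = [row[:] for row in W]                 # hop bound m = 1
    m = 1
    while m < n - 1:                          # about log2(n) rounds, O(n^3) each
        F = [[min(F[j][r] + F[r][k] for r in range(n)) for k in range(n)]
             for j in range(n)]
        m *= 2
    return F                                  # same distances as before

print(shortest_paths_by_doubling([[0, 3, INF], [INF, 0, 2], [1, INF, 0]]))
```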
