CS 691 Computational Photography

Presentation Transcript


  1. CS 691 Computational Photography Instructor: Gianfranco Doretto Cutting Images

  2. This Lecture: Finding Seams and Boundaries Segmentation

  3. This Lecture: Finding Seams and Boundaries Retargeting http://swieskowski.net/carve/

  4. This Lecture: Finding Seams and Boundaries Stitching

  5. This Lecture: Finding seams and boundaries Fundamental Concept: The Image as a Graph • Intelligent Scissors: Good boundary = short path • Graph cuts: Good region has low cutting cost

  6. Semi-automated segmentation User provides an imprecise and incomplete specification of the region – your algorithm has to read their mind. Key problems: What groups of pixels form cohesive regions? What pixels are likely to be on the boundary of regions? Which region is the user trying to select?

  7. What makes a good region? • Contains small range of color/texture • Looks different than background • Compact

  8. What makes a good boundary? • High gradient along boundary • Gradient in right direction • Smooth

  9. The Image as a Graph Node: pixel Edge: cost of path or cut between two pixels

  10. Intelligent Scissors Mortensen and Barrett (SIGGRAPH 1995)

  11. Intelligent Scissors • Formulation: find good boundary between seed points • Challenges • Minimize interaction time • Define what makes a good boundary • Efficiently find it

  12. Intelligent Scissors Mortensen and Barrett (SIGGRAPH 1995) A good image boundary has a short path through the graph. [Figure: graph with per-edge costs between a Start and an End node; the good boundary follows the lowest-cost path.]

  13. Intelligent Scissors: method • Define boundary cost between neighboring pixels • User specifies a starting point (seed) • Compute lowest cost from seed to each other pixel • Get new seed, get path between seeds, repeat

  14. Intelligent Scissors: method • Define boundary cost between neighboring pixels • Lower if edge is present (e.g., with edge(im, 'canny')) • Lower if gradient is strong • Lower if gradient is in direction of boundary
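A minimal sketch of such a cost, assuming a simple inverse-gradient weighting (the actual method combines several terms): stepping onto a pixel with a strong gradient is cheap, so low-cost paths hug image edges. The grid values and function name here are illustrative, not from Mortensen and Barrett.

```python
# Toy boundary cost: low where the gradient magnitude is high,
# so shortest paths are drawn toward strong edges.
def boundary_cost(grad_mag, q, r, eps=1e-6):
    """Cost of stepping from pixel q to neighboring pixel r (row, col)."""
    # strong gradient at r -> cheap to walk along the boundary there
    return 1.0 / (grad_mag[r[0]][r[1]] + eps)

# toy 3x3 gradient-magnitude image with a strong vertical edge in column 1
grad = [[0.1, 5.0, 0.1],
        [0.1, 5.0, 0.1],
        [0.1, 5.0, 0.1]]
on_edge = boundary_cost(grad, (0, 1), (1, 1))    # step along the edge
off_edge = boundary_cost(grad, (0, 0), (1, 0))   # step through flat region
print(on_edge < off_edge)
```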

  15. Gradients, Edges, and Path Cost [Figure: gradient magnitude, edge image, and resulting path cost maps.]

  16. Intelligent Scissors: method • Define boundary cost between neighboring pixels • User specifies a starting point (seed) • Snapping

  17. Intelligent Scissors: method • Define boundary cost between neighboring pixels • User specifies a starting point (seed) • Compute lowest cost from seed to each other pixel • Dijkstra's shortest path algorithm

  18. Dijkstra's shortest path algorithm
  Initialize, given seed s:
  • cost2(q, r) % cost for boundary from pixel q to neighboring pixel r
  • cost(s) = 0 % total cost from seed to this point
  • A = {s} % set to be expanded
  • E = { } % set of expanded pixels
  • P(q) % pointer to pixel that leads to q
  Loop while A is not empty:
  1. q = pixel in A with lowest cost; move q from A to E
  2. for each pixel r in the neighborhood of q that is not in E
  • cost_tmp = cost(q) + cost2(q, r)
  • if (r is not in A) or (cost_tmp < cost(r))
  • cost(r) = cost_tmp
  • P(r) = q
  • add r to A
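The loop above can be sketched compactly with Python's heapq as the "to be expanded" set A; the 3×3 grid, the 4-neighborhood, and the toy cost function are illustrative assumptions.

```python
# Dijkstra from a seed pixel over an h x w grid, mirroring the
# pseudocode: A = priority queue, E = expanded set, P = back-pointers.
import heapq

def dijkstra(cost2, h, w, seed):
    """Return total cost from seed to every pixel, and back-pointers P."""
    INF = float('inf')
    cost = {(y, x): INF for y in range(h) for x in range(w)}
    cost[seed] = 0
    P = {}            # P[r] = pixel that leads to r on the cheapest path
    A = [(0, seed)]   # priority queue of (cost, pixel)
    E = set()         # expanded pixels
    while A:
        c, q = heapq.heappop(A)   # q = pixel in A with lowest cost
        if q in E:                # stale queue entry, already expanded
            continue
        E.add(q)
        y, x = q
        for r in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if r in cost and r not in E:
                cost_tmp = c + cost2(q, r)
                if cost_tmp < cost[r]:
                    cost[r] = cost_tmp
                    P[r] = q
                    heapq.heappush(A, (cost_tmp, r))
    return cost, P

# toy cost: cheap to move along row 0 (an "edge"), expensive elsewhere
cost2 = lambda q, r: 1 if r[0] == 0 else 10
cost, P = dijkstra(cost2, 3, 3, (0, 0))
print(cost[(0, 2)])  # two cheap steps along the top row
```

Following P backwards from any pixel to the seed recovers the selected boundary path.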

  19. Intelligent Scissors: method • Define boundary cost between neighboring pixels • User specifies a starting point (seed) • Compute lowest cost from seed to each other pixel • Get new seed, get path between seeds, repeat

  20. Intelligent Scissors: improving interaction • Snap when placing first seed • Automatically adjust as user drags • Freeze stable boundary points to make new seeds

  21. Where will intelligent scissors work well, or have problems?

  22. Grab cuts and graph cuts [Figure: user input and result for Magic Wand (198?) – regions; Intelligent Scissors, Mortensen and Barrett (1995) – boundary; GrabCut – regions & boundary. Source: Rother]

  23. Segmentation with graph cuts [Figure: graph with source (label 0) and sink (label 1); edges carry the cost to assign a pixel to 0, the cost to assign it to 1, and the cost to split neighboring nodes.]

  25. Interactive Graph Cuts [Boykov, Jolly ICCV'01] Foreground (source), Background (sink), with image constraints. Cut: separating source and sink; Energy: sum of cut edge costs; Min Cut: global minimal energy in polynomial time.

  26. GrabCut Colour Model Gaussian Mixture Model (typically 5-8 components) [Figure: R-G scatter plots of the foreground & background colour models before and after iterated graph cut. Source: Rother]

  27. Graph cuts segmentation 1. Define graph – usually 4-connected or 8-connected 2. Set weights to foreground/background – color histogram or mixture of Gaussians for background and foreground 3. Set weights for edges between pixels 4. Apply min-cut/max-flow algorithm 5. Return to 2, using current labels to compute foreground, background models
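The min-cut step can be sketched on a tiny hand-built graph. This uses Edmonds-Karp max-flow rather than the specialized Boykov-Kolmogorov solver used in practice; the three-pixel "image", node names, and capacities are illustrative assumptions.

```python
# Min-cut on a 3-pixel chain: source edges = cost to assign background,
# sink edges = cost to assign foreground, neighbor edges = cost to split.
from collections import deque, defaultdict

def max_flow(cap, s, t):
    """Edmonds-Karp: push flow along shortest augmenting paths."""
    flow = 0
    while True:
        # BFS for an augmenting path from s to t in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck   # residual (reverse) capacity
        flow += bottleneck
    # nodes still reachable from s form the source side of the min cut
    return flow, set(parent)

cap = defaultdict(lambda: defaultdict(int))
for p, (to_src, to_snk) in {'p0': (9, 1), 'p1': (5, 4), 'p2': (1, 9)}.items():
    cap['s'][p] = to_src   # unary term: cost to assign p to label 1
    cap[p]['t'] = to_snk   # unary term: cost to assign p to label 0
cap['p0']['p1'] = cap['p1']['p0'] = 3   # pairwise term: cost to split
cap['p1']['p2'] = cap['p2']['p1'] = 3

flow, src_side = max_flow(cap, 's', 't')
labels = {p: (0 if p in src_side else 1) for p in ('p0', 'p1', 'p2')}
print(flow, labels)
```

By the max-flow/min-cut theorem, the flow value equals the energy of the cheapest labeling, and the cut itself gives the labels.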

  28. What is easy or hard about these cases for graphcut-based segmentation?

  29. GrabCut – Interactive Foreground Extraction Easier examples

  30. GrabCut – Interactive Foreground Extraction More difficult examples [Figure: camouflage & low contrast, fine structure; a harder case showing the initial rectangle and the initial result.]

  31. Lazy Snapping (Li et al. SIGGRAPH 2004)

  32. Limitations of Graph Cuts • Requires associative graphs • Connected nodes should prefer to have the same label • Is optimal only for binary problems

  33. Other applications: Seam Carving Seam Carving – Avidan and Shamir (2007) Demo: http://swieskowski.net/carve/

  34. Other applications: Seam Carving • Find shortest path from top to bottom (or left to right), where cost = gradient magnitude http://www.youtube.com/watch?v=6NcIJXTlugc Seam Carving – Avidan and Shamir (2007) Demo: http://swieskowski.net/carve/
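The seam-carving shortest path can be computed by dynamic programming, since a vertical seam in row y can only come from one of three pixels in row y-1. This is a toy sketch assuming a hand-made gradient-magnitude array in place of a real image.

```python
# Vertical seam of minimum total energy via dynamic programming.
def min_seam(energy):
    h, w = len(energy), len(energy[0])
    # cost[y][x] = minimal energy of a seam from the top row down to (x, y)
    cost = [row[:] for row in energy]
    for y in range(1, h):
        for x in range(w):
            # a seam may come from the three pixels above: x-1, x, x+1
            cost[y][x] += min(cost[y - 1][max(x - 1, 0):min(x + 2, w)])
    # backtrack from the cheapest bottom-row pixel
    seam = [min(range(w), key=lambda x: cost[h - 1][x])]
    for y in range(h - 1, 0, -1):
        x = seam[-1]
        lo = max(x - 1, 0)
        seam.append(min(range(lo, min(x + 2, w)),
                        key=lambda x2: cost[y - 1][x2]))
    return seam[::-1]  # seam's x-coordinate in each row, top to bottom

energy = [[3, 1, 4],
          [1, 5, 9],
          [2, 6, 5]]
print(min_seam(energy))  # picks the low-energy pixels 1, 1, 2
```

Removing that seam shrinks the image by one column while discarding the least noticeable pixels.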

  35. Dynamic Programming • Well-known algorithm design techniques: divide-and-conquer algorithms • Another strategy for designing algorithms is dynamic programming • Used when the problem breaks down into recurring small subproblems • Dynamic programming is typically applied to optimization problems. In such problems there can be many solutions. Each solution has a value, and we wish to find a solution with the optimal value.

  36. Dynamic Programming • Dynamic programming is a way of improving on inefficient divide-and-conquer algorithms • By "inefficient", we mean that the same recursive call is made over and over • If the same subproblem is solved several times, we can use a table to store the result of a subproblem the first time it is computed and thus never have to recompute it again • Dynamic programming is applicable when the subproblems are dependent, that is, when subproblems share subsubproblems • "Programming" refers to a tabular method
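A toy illustration of the table idea (not from the slides): naive recursive Fibonacci re-solves the same subproblems exponentially often, while storing each result in a table makes the recursion linear.

```python
# Memoized Fibonacci: the dict is the "table" that stores each
# subproblem's result the first time it is computed.
def fib(n, memo={0: 0, 1: 1}):
    if n not in memo:                       # solve each subproblem once...
        memo[n] = fib(n - 1) + fib(n - 2)   # ...then reuse the stored result
    return memo[n]

print(fib(40))  # instant; the naive version makes ~10^8 calls
```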

  37. Elements of Dynamic Programming (DP) DP is used to solve problems with the following characteristics: • Simple subproblems • We should be able to break the original problem into smaller subproblems that have the same structure • Optimal substructure of the problems • The optimal solution to the problem contains within it optimal solutions to its subproblems • Overlapping subproblems • There exist some places where we solve the same subproblem more than once

  38. Steps to Designing a Dynamic Programming Algorithm 1. Characterize optimal substructure 2. Recursively define the value of an optimal solution 3. Compute the value bottom up 4. (if needed) Construct an optimal solution

  39. Example: Matrix-chain Multiplication • Suppose we have a sequence or chain A1, A2, …, An of n matrices to be multiplied • That is, we want to compute the product A1A2…An • There are many possible ways (parenthesizations) to compute the product

  40. Matrix-chain Multiplication …contd • Example: consider the chain A1, A2, A3, A4 of 4 matrices • Let us compute the product A1A2A3A4 • There are 5 possible ways: • (A1(A2(A3A4))) • (A1((A2A3)A4)) • ((A1A2)(A3A4)) • ((A1(A2A3))A4) • (((A1A2)A3)A4)
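A quick check (illustrative, not from the slides) that the count of parenthesizations grows this fast: splitting a chain of n matrices at each position k and combining the counts for the two halves gives the Catalan numbers, so a 4-matrix chain has 5 ways, as listed above.

```python
# Number of ways to fully parenthesize a chain of n matrices.
from functools import lru_cache

@lru_cache(maxsize=None)
def num_parens(n):
    if n <= 1:
        return 1
    # split the chain after matrix k; each half is parenthesized freely
    return sum(num_parens(k) * num_parens(n - k) for k in range(1, n))

print([num_parens(n) for n in range(1, 6)])  # Catalan numbers: 1 1 2 5 14
```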

  41. Matrix-chain Multiplication …contd • To compute the number of scalar multiplications necessary, we must know: • Algorithm to multiply two matrices • Matrix dimensions • Can you write the algorithm to multiply two matrices?

  42. Algorithm to Multiply 2 Matrices
  Input: Matrices Ap×q and Bq×r (with dimensions p×q and q×r)
  Result: Matrix Cp×r resulting from the product A·B
  MATRIX-MULTIPLY(Ap×q, Bq×r)
  1. for i ← 1 to p
  2.   for j ← 1 to r
  3.     C[i, j] ← 0
  4.     for k ← 1 to q
  5.       C[i, j] ← C[i, j] + A[i, k] · B[k, j]
  6. return C
  Scalar multiplication in line 5 dominates the time to compute C. Number of scalar multiplications = pqr
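MATRIX-MULTIPLY translates directly to Python (0-indexed), and counting the multiplications confirms the p·q·r bound:

```python
# Straightforward matrix product with a counter for scalar multiplications.
def matrix_multiply(A, B):
    p, q, r = len(A), len(B), len(B[0])
    assert len(A[0]) == q, "inner dimensions must agree"
    C = [[0] * r for _ in range(p)]
    count = 0  # scalar multiplications performed
    for i in range(p):
        for j in range(r):
            for k in range(q):
                C[i][j] += A[i][k] * B[k][j]
                count += 1
    return C, count

A = [[1, 2], [3, 4], [5, 6]]            # 3x2
B = [[7, 8, 9, 10], [11, 12, 13, 14]]   # 2x4
C, count = matrix_multiply(A, B)
print(count)  # p*q*r = 3*2*4 = 24
```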

  43. Matrix-chain Multiplication …contd • Example: Consider three matrices A10×100, B100×5, and C5×50 • There are 2 ways to parenthesize • ((AB)C) = D10×5 · C5×50 • AB: 10·100·5 = 5,000 scalar multiplications • DC: 10·5·50 = 2,500 scalar multiplications • Total: 7,500 • (A(BC)) = A10×100 · E100×50 • BC: 100·5·50 = 25,000 scalar multiplications • AE: 10·100·50 = 50,000 scalar multiplications • Total: 75,000

  44. Matrix-chain Multiplication …contd • Matrix-chain multiplication problem • Given a chain A1, A2, …, An of n matrices, where for i = 1, 2, …, n, matrix Ai has dimension pi-1 × pi • Parenthesize the product A1A2…An such that the total number of scalar multiplications is minimized • Brute force method of exhaustive search takes time exponential in n

  45. Dynamic Programming Approach • The structure of an optimal solution • Let us use the notation Ai..j for the matrix that results from the product Ai Ai+1 … Aj • An optimal parenthesization of the product A1A2…An splits the product between Ak and Ak+1 for some integer k where 1 ≤ k < n • First compute matrices A1..k and Ak+1..n; then multiply them to get the final matrix A1..n

  46. Dynamic Programming Approach …contd • Key observation: parenthesizations of the subchains A1A2…Ak and Ak+1Ak+2…An must also be optimal if the parenthesization of the chain A1A2…An is optimal (why?) • That is, the optimal solution to the problem contains within it the optimal solution to subproblems

  47. Dynamic Programming Approach …contd • Recursive definition of the value of an optimal solution • Let m[i, j] be the minimum number of scalar multiplications necessary to compute Ai..j • Minimum cost to compute A1..n is m[1, n] • Suppose the optimal parenthesization of Ai..j splits the product between Ak and Ak+1 for some integer k where i ≤ k < j

  48. Dynamic Programming Approach …contd • Ai..j = (Ai Ai+1…Ak)·(Ak+1 Ak+2…Aj) = Ai..k · Ak+1..j • Cost of computing Ai..j = cost of computing Ai..k + cost of computing Ak+1..j + cost of multiplying Ai..k and Ak+1..j • Cost of multiplying Ai..k and Ak+1..j is pi-1·pk·pj • m[i, j] = m[i, k] + m[k+1, j] + pi-1·pk·pj for i ≤ k < j • m[i, i] = 0 for i = 1, 2, …, n

  49. Dynamic Programming Approach …contd • But… the optimal parenthesization occurs at one value of k among all possible i ≤ k < j • Check all of these and select the best one: m[i, j] = 0 if i = j, and m[i, j] = min over i ≤ k < j of { m[i, k] + m[k+1, j] + pi-1·pk·pj } if i < j
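The recurrence above can be filled in bottom up, by increasing subchain length, so every m[i, k] and m[k+1, j] is ready before m[i, j] needs it:

```python
# Bottom-up matrix-chain DP: matrix A_i has dimensions p[i-1] x p[i].
def matrix_chain_order(p):
    n = len(p) - 1  # number of matrices in the chain
    # m[i][j] = minimum scalar multiplications for A_i..A_j (1-indexed)
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # subchain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            # try every split point k and keep the cheapest
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m[1][n]

# Dimensions from the three-matrix example: A is 10x100, B is 100x5, C is 5x50
print(matrix_chain_order([10, 100, 5, 50]))
```

On the slide's example this returns 7,500, matching the hand computation for ((AB)C).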
