
3D from Pictures


Presentation Transcript


  1. 3D from Pictures. Jiajun Zhu, Sept. 29, 2006, University of Virginia

  2. What can we compute from a collection of pictures?

  3. - 3D structure - camera poses and parameters

  4. One of the most important / exciting results in computer vision from the 90s. It is difficult in practice, largely due to the numerical computation involved.

  5. But this is SO powerful!!! 2 SIGGRAPH papers plus several sketches this year! (show a few demo videos)

  6. Now let’s see how this works! Input: (1) a collection of pictures. Output: (1) camera parameters, (2) sparse 3D scene structure.

  7. Consider 1 camera first. What’s the relation between pixels and rays in space?

  8. (figure: pixels and the rays in space they correspond to)

  9. Simplified projective camera model P. P is a 3x4 matrix with 7 degrees of freedom: 1 from focal length, 3 from rotation, 3 from translation. x = P X = K [R | t] X
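A minimal NumPy sketch may make the model concrete. The focal length, rotation angle, and translation below are made-up illustrative values, not anything from the slides:

```python
import numpy as np

f = 800.0                                   # focal length in pixels (made up)
K = np.diag([f, f, 1.0])                    # simplified intrinsics: focal length only

theta = np.deg2rad(10.0)                    # rotation about the y-axis (made up)
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([[0.1], [0.0], [2.0]])         # translation (made up)

P = K @ np.hstack([R, t])                   # the 3x4 projection matrix

X = np.array([0.5, -0.2, 4.0, 1.0])         # homogeneous world point
x = P @ X                                   # homogeneous image point
print(x[:2] / x[2])                         # divide out the scale -> pixel coordinates
```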

  10. Consider 1 camera: x = P X. P (3x4) has 7 degrees of freedom, so # unknowns = 7. Given one image, we observe x; each point X gives 2 equations, so 2n >= 7, i.e. n >= 4. Can we recover X or P? If P is known, what do we know about X? If X is known, can we recover P?

  11. This is a Camera Calibration Problem. Input: n >= 4 world-to-image point correspondences {Xi ↔ xi}. Output: camera parameters P = K[R|t].

  12. Direct Linear Transform (DLT). Each correspondence Xi ↔ xi gives the constraint xi × (P Xi) = 0, i.e. [xi]× P Xi = 0, where for xi = (x, y, w)^T:

  [xi]× = |  0  -w   y |
          |  w   0  -x |
          | -y   x   0 |

  13. Direct Linear Transform (DLT). With n >= 4 points, stack the equations into A p = 0 and minimize ||A p|| subject to the constraint ||p|| = 1. Use the SVD A = U D V^T; p is the last column vector of V: p = vn.
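A sketch of the DLT step, assuming `Xs` holds homogeneous world points and `xs` the corresponding homogeneous image points (these array names are mine, not the slides'):

```python
import numpy as np

def dlt_camera(Xs, xs):
    """Linear camera resection: minimize ||A p|| s.t. ||p|| = 1 via SVD.

    Xs: (n, 4) homogeneous world points; xs: (n, 3) homogeneous image
    points. Note the generic 12-entry linear solve needs n >= 6 points;
    the slides' n >= 4 counts degrees of freedom of the simplified
    7-dof model.
    """
    rows = []
    for X, x in zip(Xs, xs):
        u, v, w = x
        zero = np.zeros(4)
        # two independent rows of the constraint [x]_x P X = 0
        rows.append(np.hstack([zero, -w * X, v * X]))
        rows.append(np.hstack([w * X, zero, -u * X]))
    A = np.asarray(rows)                 # shape (2n, 12)
    _, _, Vt = np.linalg.svd(A)
    p = Vt[-1]                           # singular vector of the smallest singular value
    return p.reshape(3, 4)
```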

  14. Implementation in Practice
  • Objective: given n >= 4 3D-to-2D point correspondences {Xi ↔ xi'}, determine P.
  • Algorithm:
  (i) Linear solution: (a) normalization: x̃i = T xi, X̃i = U Xi; (b) DLT on the normalized points, giving P̃.
  (ii) Minimization of geometric error: iterative optimization (Levenberg-Marquardt) of Σi d(x̃i, P̃ X̃i)².
  (iii) Denormalization: P = T⁻¹ P̃ U.
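The full pipeline might look like the following sketch, reusing `dlt_camera` from the previous block; the normalization follows the usual recipe (centroid at the origin, mean distance √d), and `scipy.optimize.least_squares` plays the Levenberg-Marquardt role:

```python
import numpy as np
from scipy.optimize import least_squares

def normalize(pts):
    """Similarity transform T: centroid to origin, mean distance sqrt(d)."""
    pts = pts / pts[:, -1:]                    # force last coordinate to 1
    d = pts.shape[1] - 1
    c = pts[:, :d].mean(axis=0)
    s = np.sqrt(d) / np.mean(np.linalg.norm(pts[:, :d] - c, axis=1))
    T = np.eye(d + 1)
    T[:d, :d] *= s
    T[:d, -1] = -s * c
    return (T @ pts.T).T, T

def calibrate(Xs, xs):
    Xn, U = normalize(Xs)                      # normalize 3D points
    xn, T = normalize(xs)                      # normalize 2D points
    P0 = dlt_camera(Xn, xn)                    # linear solution (DLT)

    def geometric_error(p):                    # reprojection residuals
        proj = (p.reshape(3, 4) @ Xn.T).T
        return (proj[:, :2] / proj[:, 2:] - xn[:, :2]).ravel()

    sol = least_squares(geometric_error, P0.ravel(), method='lm')
    return np.linalg.inv(T) @ sol.x.reshape(3, 4) @ U   # denormalize
```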

  15. How to recover K, R and t from P?
  The camera centre C̃ is the point for which P C = 0, i.e. the right null vector of P.
  P = K[R|t] = K[R | -R C̃] = K R [I | -C̃]. Write M = K R; then P = M [I | -C̃].
  • Objective: given the camera projection matrix P, decompose P = K[R|t].
  • Algorithm: perform the RQ decomposition of M, so that K is an upper-triangular matrix and R is an orthonormal matrix.
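A sketch of this decomposition using `scipy.linalg.rq`, fixing the sign ambiguity of the RQ factorization so that K has a positive diagonal:

```python
import numpy as np
from scipy.linalg import rq

def decompose(P):
    M = P[:, :3]
    K, R = rq(M)                       # M = K R, K upper-triangular, R orthonormal
    S = np.diag(np.sign(np.diag(K)))   # RQ is unique only up to signs:
    K, R = K @ S, S @ R                # make diag(K) positive (S @ S = I)
    t = np.linalg.solve(K, P[:, 3])    # P[:, 3] = K t  =>  t = K^{-1} P[:, 3]
    _, _, Vt = np.linalg.svd(P)        # camera centre: right null vector of P
    C = Vt[-1][:3] / Vt[-1][3]
    return K / K[2, 2], R, t, C
```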

  16. This is what we learn from 1 camera.

  17. Let’s consider 2 cameras. (i) Correspondence geometry: given an image point x in the first image, how does this constrain the position of the corresponding point x’ in the second image? (ii) Camera geometry (motion): given a set of corresponding image points {xi ↔ x’i}, i = 1, …, n, what are the cameras P and P’ for the two views?

  18. Correspondence geometry: Given an image point x in the first image, how does this constrain the position of the corresponding point x’ in the second image?

  19. The Fundamental Matrix F: x’^T F x = 0

  20. What does the fundamental matrix F tell us? F relates corresponding pixels: x’^T F x = 0. If the intrinsic parameters (i.e. the focal length, in our camera model) of both cameras are known, as K and K’, then we can derive (not here) that K’^T F K = [t]× R, where t and R are the translation and rotation of the 2nd camera, i.e. P = [I|0] and P’ = [R|t].

  21. The good thing is that the fundamental matrix F can be computed from a set of pixel correspondences {x’ ↔ x}, via x’^T F x = 0.

  22. Compute F from correspondences: separate the known from the unknown. Each correspondence x = (x, y, 1) ↔ x’ = (x’, y’, 1) gives one linear equation in the nine unknown entries of F: [x’x, x’y, x’, y’x, y’y, y’, x, y, 1] · f = 0, with data on the left and unknowns f = (F11, F12, …, F33)^T. How many correspondences do we need?
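Since F has nine entries defined only up to scale, eight correspondences suffice. A sketch of the normalized eight-point algorithm (homogeneous input points, n >= 8; the helper and its constants follow the usual Hartley normalization, not anything stated on the slides):

```python
import numpy as np

def normalize2d(pts):
    """Hartley normalization: centroid to origin, mean distance sqrt(2)."""
    pts = pts / pts[:, 2:]
    c = pts[:, :2].mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts[:, :2] - c, axis=1))
    T = np.array([[s, 0, -s * c[0]],
                  [0, s, -s * c[1]],
                  [0, 0, 1.0]])
    return (T @ pts.T).T, T

def eight_point(xs, xps):
    """F from n >= 8 correspondences with x'^T F x = 0."""
    xn, T = normalize2d(xs)                    # points in image 1
    xpn, Tp = normalize2d(xps)                 # points in image 2
    A = np.array([np.kron(xp, x) for x, xp in zip(xn, xpn)])  # one row per pair
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, V2t = np.linalg.svd(F)               # enforce rank 2 (det F = 0)
    F = U @ np.diag([S[0], S[1], 0.0]) @ V2t
    F = Tp.T @ F @ T                           # undo the normalization
    return F / np.linalg.norm(F)
```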

  23. What can we do now?
  (1) Given F, K and K’, we can estimate the relative translation and rotation of the two cameras: P = [I | 0] and P’ = [R | t].
  (2) Given 8 correspondences {x’ ↔ x}, we can compute F.
  (3) So, given K and K’ and 8 correspondences {x’ ↔ x}, we can compute P = [I | 0] and P’ = [R | t].
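A sketch of step (3): form the essential matrix E = K’^T F K and read off the (R, t) candidates from its SVD. The SVD yields four candidates; the correct one is picked by triangulating a point and keeping the pair that places it in front of both cameras (cheirality test, omitted here):

```python
import numpy as np

def pose_candidates(F, K, Kp):
    E = Kp.T @ F @ K                    # essential matrix E = [t]x R
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:  U = -U    # keep proper rotations (det = +1)
    if np.linalg.det(Vt) < 0: Vt = -Vt
    W = np.array([[0., -1., 0.],
                  [1.,  0., 0.],
                  [0.,  0., 1.]])
    t = U[:, 2]                         # translation, up to sign and scale
    return [(U @ W @ Vt,   t), (U @ W @ Vt,  -t),
            (U @ W.T @ Vt, t), (U @ W.T @ Vt, -t)]
```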

  24. This answers the 2nd question. (i) Correspondence geometry: given an image point x in the first image, how does this constrain the position of the corresponding point x’ in the second image? (ii) Camera geometry (motion): given a set of corresponding image points {xi ↔ x’i}, i = 1, …, n, what are the cameras P and P’ for the two views?

  25. But how to make this automatic? Given K and K’, and 8 correspondences {x’ ↔ x}, we can compute P = [I | 0] and P’ = [R | t].
  (1) Estimating the intrinsics K and K’ (auto-calibration) will not be discussed here (it involves much projective geometry).
  (2) Let’s see how to find correspondences automatically, i.e. feature detection and matching.

  26. Lowe’s SIFT features: invariant to position, orientation and scale.

  27. Scale • Look for strong responses of the DoG (Difference-of-Gaussian) filter over scale space • Only consider local maxima in both position and scale
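A toy sketch of this scale-space step with SciPy; the sigmas and threshold below are illustrative values, not SIFT's actual constants:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_keypoints(img, sigmas=(1.0, 1.6, 2.6, 4.1), thresh=0.02):
    blurred = [gaussian_filter(img.astype(float), s) for s in sigmas]
    dog = np.stack([b2 - b1 for b1, b2 in zip(blurred, blurred[1:])])
    # keep points that are the maximum of their 3x3x3 (scale, row, col)
    # neighbourhood and whose response is strong enough
    peaks = (dog == maximum_filter(dog, size=3)) & (dog > thresh)
    return np.argwhere(peaks[1:-1]) + [1, 0, 0]   # interior scales only
```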

  28. Orientation • Create histogram of local gradient directions computed at selected scale • Assign canonical orientation at peak of smoothed histogram • Each key specifies stable 2D coordinates (x, y, scale, orientation)

  29. Simple matching. For each feature in image 1, find the feature in image 2 that is most similar (compute the correlation of the two descriptor vectors), and vice versa. Keep mutual best matches. On top of this one can design a very robust RANSAC-type algorithm.
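A sketch of this with OpenCV, where crossCheck=True implements "keep mutual best matches" and cv2.findFundamentalMat supplies the RANSAC step; the filenames are placeholders:

```python
import numpy as np
import cv2

img1 = cv2.imread('img1.jpg', cv2.IMREAD_GRAYSCALE)   # placeholder files
img2 = cv2.imread('img2.jpg', cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# crossCheck=True keeps only mutual best matches
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = matcher.match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# RANSAC: fit F to the putative matches and keep only the inliers
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
inliers = [m for m, keep in zip(matches, mask.ravel()) if keep]
```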

  30. What have we learnt so far?

  31. What have we learnt so far?

  32. Consider more than 2 cameras. (figure: a scene point X viewed by cameras P, P’ and P’’, with intrinsics K and K’)

  33. Objective: given N images {Q1, …, QN} with reasonable overlaps, compute N camera projection matrices {P1, …, PN}, where each Pi = Ki[Ri|ti]; Ki is the intrinsic matrix, and Ri and ti are the rotation matrix and translation vector respectively.

  34. Algorithm
  (1) Find M tracks T = {T1, T2, …, TM}:
  (i) for every pair of images {Qi, Qj}: detect SIFT feature points in Qi and Qj, and match them robustly (RANSAC);
  (ii) match features across multiple images to construct tracks.
  (2) Estimate {P1 … PN} and the 3D position of each track {X1 … XM}:
  (i) select one well-conditioned pair of images {Q1’, Q2’}; let T1’2’ be their associated overlapping tracks;
  (ii) estimate K1’ and K2’; compute {P1’, P2’} and the 3D positions of T1’2’ from the fundamental matrix;
  (iii) incrementally add each new camera Pk into the system, estimating its camera matrix by DLT (calibration);
  (iv) repeat (iii) until all the cameras are estimated.

  35. Algorithm (the same algorithm as on the previous slide). However, this won’t work!

  36. Algorithm (revised)
  (1) Find M tracks T = {T1, T2, …, TM}:
  (i) for every pair of images {Qi, Qj}: detect SIFT feature points in Qi and Qj, and match them robustly (RANSAC);
  (ii) match features across multiple images to construct tracks.
  (2) Estimate {P1 … PN} and the 3D position of each track {X1 … XM}:
  (i) select one well-conditioned pair of images {Q1’, Q2’}; let T1’2’ be their associated overlapping tracks;
  (ii) estimate K1’ and K2’; compute {P1’, P2’} and the 3D positions of T1’2’ from the fundamental matrix; then non-linearly minimize the reprojection errors (LM);
  (iii) incrementally add each new camera Pk into the system: estimate an initial value by DLT, then non-linearly optimize the whole system;
  (iv) repeat (iii) until all the cameras are estimated.
  This replaces the fragile purely linear steps with more robust non-linear optimization.
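A sketch of the non-linear objective behind those steps: the total reprojection error over all cameras and tracks, minimized with Levenberg-Marquardt (bundle adjustment). Parameterizing each camera by its raw 12-entry matrix is a simplification for illustration; real systems parameterize rotation and intrinsics explicitly:

```python
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(params, n_cams, n_pts, obs):
    """obs: list of (camera index, point index, u, v) observations."""
    Ps = params[:12 * n_cams].reshape(n_cams, 3, 4)   # all camera matrices
    Xs = params[12 * n_cams:].reshape(n_pts, 3)       # all 3D track points
    res = []
    for ci, pi, u, v in obs:
        x = Ps[ci] @ np.append(Xs[pi], 1.0)           # project point pi into camera ci
        res.extend([x[0] / x[2] - u, x[1] / x[2] - v])
    return np.asarray(res)

# given initial Ps0 (n_cams, 3, 4) and Xs0 (n_pts, 3) from the linear steps:
# x0 = np.hstack([Ps0.ravel(), Xs0.ravel()])
# sol = least_squares(reprojection_residuals, x0, method='lm',
#                     args=(n_cams, n_pts, obs))
```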

  37. Tired?

  38. Recall the camera calibration algorithm
  • Objective: given n >= 4 3D-to-2D point correspondences {Xi ↔ xi'}, determine P.
  • Algorithm:
  (i) Linear solution: (a) normalization: x̃i = T xi, X̃i = U Xi; (b) DLT on the normalized points, giving P̃.
  (ii) Minimization of geometric error: iterative optimization (Levenberg-Marquardt) of Σi d(x̃i, P̃ X̃i)².
  (iii) Denormalization: P = T⁻¹ P̃ U.

  39. What’s the contribution of this paper? We are lucky! For the 1st time, a huge amount of visual data is easily accessible, and high-level descriptions of these data are also becoming available. How do we explore them? Analyze them? Use them wisely? How do we extract high-level information?
  - Computer vision / machine learning tools: structure from motion and other computer vision tools have reached a point of robustness suitable for graphics applications.
  - The Internet: image search.
  - Human labels: games with a purpose.

  40. What is the space of all the pictures? In the past, the present, the future?

  41. What’s the space of all the videos? In the past, the present, the future?

  42. What else?

  43. Using a search engine?

  44. Using human computation power?

  45. Using human computation power?

  46. Using human computation power?

  47. What else?

  48. What else?

  49. Book: “Multiple View Geometry in Computer Vision”, Hartley and Zisserman.
  Online tutorial: http://www.cs.unc.edu/~marc/tutorial.pdf and http://www.cs.unc.edu/~marc/tutorial/
  Matlab toolbox: http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/TORR1/index.html
