
Camera models and single view geometry


Presentation Transcript


  1. Camera models and single view geometry

  2. Camera model

  3. Camera: optical system. A thin lens is bounded by two spherical surfaces of curvature radii r1 and r2; a ray crossing it is deviated by the angle dα = α2 − α1, under the small-angle approximation. (Figure: lens cross-section in the Y-Z plane.)

  4. An incident light beam crossing a lens of refraction index n emerges as a deviated beam; the deviation angle is Δθ = θ'' − θ. (Figure: incident and deviated beams in the Y-Z plane.)

  5. Thin lens rules: a) Y = 0 ⇒ Δθ = 0, i.e., beams through the lens center are undeviated; b) f·Δθ = Y, independent of the incidence angle, so parallel rays converge onto a focal plane at distance f.

  6. Where do all rays starting from a scene point P converge? At a point p behind the lens, at distance r given by the thin-lens (Fresnel) law 1/Z + 1/r = 1/f, where Z is the depth of P. Obs.: for Z → ∞, r → f.

  7. If the image-plane distance d ≠ r, the image of a point is a blurring circle: Φ(blurring circle) = a·(d − r)/r, where a is the aperture. Focused image: Φ(blurring circle) < image resolution. Depth of field: the range [Z1, Z2] of depths over which the image is focused.

  8. Pinhole approximation: r ≈ f. Hypothesis: Z >> a. The image of a point P belongs to the line (P, O): p = image of P = image plane ∩ line(O, P). Interpretation line of p: line(O, p) = locus of the scene points projecting onto image point p.


  10. Perspective projection (figure: scene point P = (X, Y, Z), lens center O, principal point c, focal length f, image point p = (x, y)): x = f·X/Z, y = f·Y/Z. The projection is • nonlinear • not shape-preserving • not length-ratio preserving.
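
A minimal numeric sketch of these projection equations, assuming the ideal pinhole model above (the point coordinates and focal length are illustrative):

```python
import numpy as np

def project_pinhole(P_scene, f):
    """Ideal pinhole projection: x = f*X/Z, y = f*Y/Z."""
    X, Y, Z = P_scene
    return np.array([f * X / Z, f * Y / Z])

# A point at depth Z = 2 seen through a lens with f = 0.05 (same length units).
print(project_pinhole(np.array([0.1, 0.2, 2.0]), f=0.05))
# Doubling the depth halves the image coordinates: the mapping is nonlinear in (X, Y, Z).
print(project_pinhole(np.array([0.1, 0.2, 4.0]), f=0.05))
```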

  11. Homogeneous coordinates • In 2D: add a third coordinate, w • The point [x,y]T is expanded to [u,v,w]T • Two triples [u1,v1,w1]T and [u2,v2,w2]T represent the same point if one is a multiple of the other • [u,v,w]T corresponds to [x,y]T with x = u/w and y = v/w • [u,v,0]T is the point at infinity along direction (u,v)
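
A small sketch of the conversions described above (helper names are illustrative):

```python
import numpy as np

def to_homogeneous(p):
    """[x, y] -> [x, y, 1]."""
    return np.append(np.asarray(p, dtype=float), 1.0)

def from_homogeneous(u):
    """[u, v, w] -> [u/w, v/w]; w = 0 is a point at infinity (no Cartesian image)."""
    u = np.asarray(u, dtype=float)
    if u[-1] == 0:
        raise ValueError("point at infinity")
    return u[:-1] / u[-1]

p = np.array([3.0, 4.0])
u = to_homogeneous(p)
# Any non-zero multiple represents the same 2D point.
assert np.allclose(from_homogeneous(2.5 * u), p)
```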

  12. Transformations (2D, in homogeneous coordinates): translation by a vector [dx,dy]T, T = [[1,0,dx],[0,1,dy],[0,0,1]]; scaling (by different factors in x and y), S = [[sx,0,0],[0,sy,0],[0,0,1]]; rotation by an angle θ, R = [[cos θ, −sin θ, 0],[sin θ, cos θ, 0],[0,0,1]].
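
The same three transformations written as 3×3 homogeneous matrices, as a sketch (parameter names dx, dy, sx, sy, theta are illustrative):

```python
import numpy as np

def translation(dx, dy):
    """Translation by [dx, dy] in homogeneous coordinates."""
    return np.array([[1, 0, dx], [0, 1, dy], [0, 0, 1]], dtype=float)

def scaling(sx, sy):
    """Scaling by different factors along x and y."""
    return np.diag([sx, sy, 1.0])

def rotation(theta):
    """Rotation by the angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Transformations compose by matrix product: rotate by 30 degrees, then translate by (2, 1).
T = translation(2, 1) @ rotation(np.deg2rad(30))
print(T @ np.array([1.0, 0.0, 1.0]))   # transformed homogeneous point
```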

  13. Homogeneous coordinates • In 3D: add a fourth coordinate, t • The point [X,Y,Z]T is expanded to [x,y,z,t]T • Two quadruples [x1,y1,z1,t1]T and [x2,y2,z2,t2]T represent the same point if one is a multiple of the other • [x,y,z,t]T corresponds to [X,Y,Z]T with X = x/t, Y = y/t, and Z = z/t • [x,y,z,0]T is the point at infinity along direction (x,y,z)

  14. Transformations (3D, in homogeneous coordinates): translation, scaling, rotation. Obs: the rotation matrix is an orthogonal matrix, i.e., R⁻¹ = Rᵀ.
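
A quick numerical check of the observation R⁻¹ = Rᵀ, using a rotation about the Z axis chosen only for illustration:

```python
import numpy as np

theta = np.deg2rad(40.0)
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s, 0],
              [s,  c, 0],
              [0,  0, 1]])          # rotation about the Z axis

assert np.allclose(R.T @ R, np.eye(3))          # orthogonality: R^T R = I
assert np.allclose(np.linalg.inv(R), R.T)       # hence R^-1 = R^T
```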

  15. Pinhole camera model

  16. Scene → image mapping: a perspective transformation x ~ P X, with P = diag(f, f, 1)·[I | 0] when using "ad hoc" reference frames for both image and scene.

  17. Let us recall them (figure: scene axes X, Y, Z, lens center O, principal point c, focal length f, image axes x, y). Scene reference: centered on the lens center, Z-axis orthogonal to the image plane, X- and Y-axes opposite to the image x- and y-axes. Image reference: centered on the principal point, x- and y-axes parallel to the sensor rows and columns, Euclidean reference.

  18. Actual reference frames are generic (figure: image axes x, y, principal axis, lens center O, principal point c, scene axes X, Y, Z). Image reference: centered on the upper-left corner, non-square pixels (aspect ratio) ⇒ non-Euclidean reference. Scene reference: not attached to the camera.

  19. Principal point offset: since the image origin is in general not at the principal point (u0, v0), the projection becomes x = f·X/Z + u0, y = f·Y/Z + v0.

  20. CCD camera

  21. Scene-image relationship w.r.t. the actual reference frames: u ~ P x, where u is the (homogeneous) image point and x the (homogeneous) scene point; the skew s appears among the intrinsic parameters (normally, s = 0).

  22. Scene-camera transformation: P = K [R | t], where R is an orthogonal (3D rotation) matrix and t a translation vector (the extrinsic camera parameters), and K is upper triangular (the intrinsic camera parameters). P has 10-11 degrees of freedom (10 if s = 0).
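
A sketch of assembling P = K [R | t] and projecting a scene point; the intrinsic values (focal length in pixels, principal point, zero skew) and the pose are illustrative:

```python
import numpy as np

# Intrinsic parameters (illustrative values): focal length in pixels,
# principal point, zero skew.
f, u0, v0 = 800.0, 320.0, 240.0
K = np.array([[f, 0, u0],
              [0, f, v0],
              [0, 0,  1]])

# Extrinsic parameters: identity rotation; t = (0, 0, 5) puts the camera center at (0, 0, -5).
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])

P = K @ np.hstack([R, t[:, None]])        # 3x4 projection matrix

X = np.array([0.5, -0.2, 10.0, 1.0])      # homogeneous scene point
u = P @ X
print(u[:2] / u[2])                       # pixel coordinates
```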

  23. Writing P = [M | m], with M its left 3×3 block and m its fourth column, the projection reads u ~ M x + m, i.e., defining x = [x, y, z]T as the (inhomogeneous) scene coordinates.

  24. Interpretation of o: u is the image of x if u ~ M x + m, i.e., if x = M⁻¹(λu − m) for some λ. The locus of the points x whose image is u is a straight line through o = −M⁻¹m having direction d = M⁻¹u; o is independent of u: o is the camera viewpoint (perspective projection center). line(o, d) = interpretation line of image point u.

  25. Intrinsic and extrinsic parameters from P: M → K and R via the RQ-decomposition of a matrix, i.e., its factorization as the product of an upper triangular matrix and an orthogonal matrix (M = K R); then M and m → t, since m = K t.

  26. Camera anatomy: camera center, column points, principal plane, axis planes, principal point, principal ray.

  27. Camera center: the null-space of the camera projection matrix, P O = 0. For every scene point A, all points on the line AO project onto the image of A; therefore O is the camera center. The image of the camera center is (0,0,0)T, i.e., undefined. Finite cameras: O = (−M⁻¹m, 1)T. Infinite cameras: O = (dT, 0)T with M d = 0.
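
The null-space computation can be done with an SVD, as in the following sketch (the example P is an illustrative finite camera):

```python
import numpy as np

def camera_center(P):
    """Right null-vector of the 3x4 matrix P: the homogeneous camera center O with P O = 0."""
    _, _, Vt = np.linalg.svd(P)
    O = Vt[-1]                  # singular vector for the smallest singular value
    return O / O[3] if abs(O[3]) > 1e-12 else O   # normalize if the camera is finite

# Example with a finite camera centered at (0, 0, -5):
P = np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])
print(camera_center(P))         # -> approximately [0, 0, -5, 1]
assert np.allclose(P @ camera_center(P), 0)
```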

  28. Column vectors: the columns of P are the image points corresponding to the X, Y, Z axis directions and to the world origin.

  29. Row vectors: the image of a point on the principal plane (the plane of the thin lens, through the camera center and parallel to the image plane) is at infinity, i.e., has w = 0; hence p3, the third row of P, is the principal plane.

  30. Note: p1, p2 depend on the image reparametrization. p2 is the plane through the camera center and the image u-axis (the points imaged with v = 0); similarly, p1 is the plane through the camera center and the v-axis (imaged with u = 0).

  31. The principal point: x0 = M m3, where m3T is the third row of M.

  32. The principal axis vector: v = det(M) m3, the vector defining the front side of the camera; it is the direction of the normal to the principal plane.
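
A sketch computing the principal point x0 = M m3 and the principal axis v = det(M) m3 directly from P (the example K and pose are illustrative):

```python
import numpy as np

def principal_point_and_axis(P):
    """Principal point x0 = M m3 and principal axis v = det(M) m3, with m3 the third row of M."""
    M = P[:, :3]
    m3 = M[2, :]                    # third row of M
    x0 = M @ m3                     # homogeneous principal point
    v = np.linalg.det(M) * m3       # points towards the front of the camera
    return x0 / x0[2], v / np.linalg.norm(v)

# Example: K with principal point (320, 240), identity pose.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
x0, v = principal_point_and_axis(P)
print(x0[:2], v)                    # -> (320, 240) and the +Z direction
```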

  33. Action of a projective camera on a point. Forward projection: x = P X. Back-projection: X(λ) = P⁺x + λO, where P⁺ = Pᵀ(P Pᵀ)⁻¹ is the pseudo-inverse of P and O is the camera center.
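
A sketch of forward and back-projection with the pseudo-inverse; the image point and λ value are illustrative:

```python
import numpy as np

def back_project(P, x, lam=0.0):
    """Return a point on the ray of image point x: X(lam) = P^+ x + lam * O."""
    P_pinv = np.linalg.pinv(P)              # 4x3 pseudo-inverse, P P^+ = I
    _, _, Vt = np.linalg.svd(P)
    O = Vt[-1]                              # camera center (null-vector of P)
    return P_pinv @ x + lam * O

P = np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])
x = np.array([0.2, 0.1, 1.0])               # an image point (homogeneous)

X = back_project(P, x, lam=0.7)             # some point on the interpretation line
u = P @ X                                   # forward projection x = P X
assert np.allclose(u / u[2], x / x[2])      # it re-projects onto the same pixel
```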

  34. Depth of points: write P X = w (x, y, 1)T and X = (X, Y, Z, T)T. Using P O = 0, w = m3 · (X̃ − Õ), a dot product between the third row of M and the vector from the camera center to the point. If ||m3|| = 1 and det M > 0, then m3 is a unit vector in the positive (principal-axis) direction and w is the depth of X in front of the camera; in general, depth(X; P) = sign(det M) · w / (T · ||m3||).
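
A sketch of the depth formula above (the example camera and point are illustrative):

```python
import numpy as np

def depth(P, X):
    """Signed depth of homogeneous scene point X: positive in front of the camera, negative behind."""
    M = P[:, :3]
    w = (P @ X)[2]                      # third coordinate of the projection
    m3 = M[2, :]
    T = X[3]
    return np.sign(np.linalg.det(M)) * w / (T * np.linalg.norm(m3))

# Camera at the origin looking down +Z: a point at Z = 7 has depth 7.
P = np.hstack([np.eye(3), np.zeros((3, 1))])
print(depth(P, np.array([1.0, 2.0, 7.0, 1.0])))   # -> 7.0
```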

  35. Camera matrix decomposition. Finding the camera center: use the SVD of P to find its null-space. Finding the camera orientation and internal parameters: use the RQ decomposition of M (analogous to QR; if only QR is available, use (R Q)⁻¹ = Q⁻¹ R⁻¹, i.e., QR-decompose M⁻¹ and invert the factors).
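
A sketch of the whole decomposition using scipy's RQ routine; the sign fix that makes the diagonal of K positive is a common convention, assumed here rather than stated on the slide:

```python
import numpy as np
from scipy.linalg import rq

def decompose(P):
    """Split a finite 3x4 camera matrix P ~ K [R | t] into K, R, t and the center O."""
    M = P[:, :3]
    K, R = rq(M)                        # M = K R, K upper triangular, R orthogonal
    # Fix signs so that the diagonal of K is positive (convention).
    S = np.diag(np.sign(np.diag(K)))
    K, R = K @ S, S @ R
    K = K / K[2, 2]                     # normalize the overall scale
    # Camera center: null-space of P; then t = -R * center.
    _, _, Vt = np.linalg.svd(P)
    O = Vt[-1]
    O = O[:3] / O[3]
    t = -R @ O
    return K, R, t, O

# Round-trip check on a synthetic camera.
K0 = np.array([[700.0, 2.0, 300.0], [0, 650.0, 250.0], [0, 0, 1]])
c, s = np.cos(0.3), np.sin(0.3)
R0 = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])
t0 = np.array([0.1, -0.2, 4.0])
P0 = K0 @ np.hstack([R0, t0[:, None]])
K, R, t, O = decompose(3.7 * P0)        # the arbitrary scale should not matter
assert np.allclose(K, K0, atol=1e-6)
assert np.allclose(R, R0, atol=1e-6) and np.allclose(t, t0, atol=1e-6)
```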

  36. When is the skew non-zero? The skew s corresponds to an angle arctan(1/s) between the pixel axes. For CCD/CMOS sensors, always s = 0. For an image of an image, s ≠ 0 is possible (non-coinciding principal axes); the resulting camera is again described by a 3×4 projection matrix.

  37. Euclidean vs. projective: the general projective interpretation. A meaningful decomposition into K, R, t requires a Euclidean image and space. The camera center is still valid in projective space. The principal plane requires an affine image and space. The principal ray requires an affine image and a Euclidean space.

  38. Camera calibration

  39. From scene-point to image-point correspondences… …to the projection matrix.

  40. Basic equations: each correspondence Xi ↔ xi gives xi × (P Xi) = 0, which is linear in the entries of P.

  41. Basic equations (ctd.): stacking the equations for all correspondences gives A p = 0, with A a singular matrix and p the (12×1) vector of the entries of P.

  42. Minimal solution: P has 11 dof and each point gives 2 independent equations, so 5½ correspondences are needed (say 6). Over-determined solution (n ≥ 6 points): minimize ||A p|| subject to the constraint ||p|| = 1; p is the eigenvector of AᵀA associated with its smallest eigenvalue.
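
A sketch of the over-determined DLT solution; the smallest-eigenvalue eigenvector of AᵀA is obtained here from the SVD of A, and the synthetic test data are illustrative:

```python
import numpy as np

def dlt(X, x):
    """Estimate P from n >= 6 scene points X (n x 4, homogeneous) and image points x (n x 3, homogeneous)."""
    rows = []
    for Xi, xi in zip(X, x):
        u, v, w = xi
        # Two independent equations per correspondence, from x_i x (P X_i) = 0.
        rows.append(np.hstack([np.zeros(4), -w * Xi,  v * Xi]))
        rows.append(np.hstack([ w * Xi, np.zeros(4), -u * Xi]))
    A = np.array(rows)                          # (2n x 12)
    _, _, Vt = np.linalg.svd(A)
    p = Vt[-1]                                  # ||p|| = 1, minimizes ||A p||
    return p.reshape(3, 4)

# Synthetic test: recover a known camera from 6 random points (noise-free).
rng = np.random.default_rng(0)
P_true = np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])
X = np.hstack([rng.uniform(-1, 1, (6, 3)), np.ones((6, 1))])
x = (P_true @ X.T).T
P_est = dlt(X, x)
print(P_est / P_est[0, 0])                      # equals P_true up to scale
```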

  43. Degenerate configurations • More complicated than the 2D case (see Ch. 21) • Camera and points on a twisted cubic • Points lying on a plane or on a single line passing through the projection center

  44. Data normalization • translate the origin to the centroid (gravity center) of the points • apply (an)isotropic scaling
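
A sketch of the 2D normalization, assuming the usual convention of centroid at the origin and average distance √2 (the 3D analogue uses √3):

```python
import numpy as np

def normalize_2d(pts):
    """Similarity transform T such that the transformed points have centroid 0 and mean distance sqrt(2).
    pts: n x 2 array of inhomogeneous image points. Returns (normalized homogeneous points, T)."""
    centroid = pts.mean(axis=0)
    mean_dist = np.linalg.norm(pts - centroid, axis=1).mean()
    s = np.sqrt(2) / mean_dist
    T = np.array([[s, 0, -s * centroid[0]],
                  [0, s, -s * centroid[1]],
                  [0, 0, 1]])
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ pts_h.T).T, T

pts = np.array([[100.0, 200.0], [150.0, 220.0], [300.0, 50.0], [250.0, 400.0]])
npts, T = normalize_2d(pts)
print(npts[:, :2].mean(axis=0))                   # ~ (0, 0)
print(np.linalg.norm(npts[:, :2], axis=1).mean()) # ~ sqrt(2)
```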

  45. From line correspondences: extend the DLT to lines. Back-project each image line l to the plane Pᵀl; requiring that two points on the corresponding scene line lie on this plane gives 2 independent equations per line.

  46. Geometric error: Σi d(xi, P Xi)², the sum of squared image distances between the measured points xi and the projected points P Xi.

  47. Gold Standard algorithm. Objective: given n ≥ 6 3D-to-2D point correspondences {Xi ↔ xi'}, determine the Maximum Likelihood Estimate of P. Algorithm: (i) Linear solution: (a) Normalization: X̃i = T Xi, x̃i' = U xi'; (b) DLT on the normalized correspondences, giving P̃; (ii) Minimization of geometric error: using the linear estimate as a starting point, minimize the geometric error Σi d(x̃i', P̃ X̃i)²; (iii) Denormalization: P = U⁻¹ P̃ T.
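
A sketch of the nonlinear refinement step only, using scipy.optimize.least_squares as a stand-in for the Levenberg-Marquardt minimizer and omitting the normalization/denormalization steps for brevity; X, x_obs, and dlt refer to the earlier sketch:

```python
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(p, X, x_obs):
    """Residuals d(x_i, P X_i) stacked for all correspondences (x_obs inhomogeneous, n x 2)."""
    P = p.reshape(3, 4)
    proj = (P @ X.T).T                      # n x 3 homogeneous projections
    proj = proj[:, :2] / proj[:, 2:3]
    return (proj - x_obs).ravel()

def refine_P(P0, X, x_obs):
    """Minimize the geometric error starting from the linear (DLT) estimate P0."""
    res = least_squares(reprojection_residuals, P0.ravel(), args=(X, x_obs))
    P = res.x.reshape(3, 4)
    return P / np.linalg.norm(P)            # fix the overall scale

# Usage (X: n x 4 homogeneous scene points, x_obs: n x 2 measured pixels):
# P = refine_P(dlt(X, x), X, x_obs)
```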

  48. Calibration example • Canny edge detection • Straight-line fitting to the detected edges • Intersecting the lines to obtain the image corners • Typical precision: < 1/10 of a pixel • (HZ rule of thumb: 5n constraints for n unknowns)

  49. Exterior orientation: calibrated camera, position and orientation unknown ⇒ pose estimation. 6 dof ⇒ 3 points are minimal (4 solutions in general).
