
What does calibration give?






Presentation Transcript


1. What does calibration give? An image line l defines a plane through the camera center with normal n = K^T l, measured in the camera's Euclidean frame. In fact the back-projection of l is the plane P^T l → n = K^T l.
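A minimal numpy sketch of this relation; the calibration matrix, line, and point below are hypothetical values chosen for illustration:

```python
import numpy as np

K = np.array([[1000.0,    0.0, 320.0],
              [   0.0, 1000.0, 240.0],
              [   0.0,    0.0,   1.0]])   # hypothetical calibration matrix
l = np.array([1.0, -2.0, 100.0])          # a homogeneous image line

n = K.T @ l                               # normal of the back-projected plane
# Check: a point x on l (so l . x = 0) back-projects to the ray d = K^{-1} x,
# and that ray lies in the plane, since n . d = l^T K K^{-1} x = l . x = 0.
x = np.array([100.0, 100.0, 1.0])         # lies on l: 100 - 200 + 100 = 0
d = np.linalg.solve(K, x)                 # viewing ray direction K^{-1} x
assert abs(n @ d) < 1e-9
```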

2. The image of the absolute conic. The mapping between π∞ and an image is given by the planar homography x = H d, with H = KR. The absolute conic, represented by I_3 within π∞ (points with w = 0), maps to its image, the IAC ω = K^{-T} K^{-1}.
• the IAC depends only on the intrinsics
• it gives the angle between two rays
• DIAC: ω* = K K^T
• ω → K via Cholesky factorization
• the images of the circular points lie on ω (the image of the absolute conic)
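A small numpy sketch of the last two bullets; the calibration matrix here is a hypothetical example, and K is recovered from ω by a Cholesky factorization:

```python
import numpy as np

K_true = np.array([[800.0, 0.5, 320.0],
                   [0.0, 820.0, 240.0],
                   [0.0,   0.0,   1.0]])      # hypothetical intrinsics
w = np.linalg.inv(K_true @ K_true.T)          # IAC: w = K^{-T} K^{-1} = (DIAC)^{-1}

L = np.linalg.cholesky(w)                     # w = L L^T, L lower triangular = K^{-T}
K = np.linalg.inv(L).T                        # hence K = L^{-T} (upper triangular)
K /= K[2, 2]                                  # fix the projective scale
assert np.allclose(K, K_true, atol=1e-6)
```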

3. A simple calibration device (see the sketch below):
• compute H_i for each square (corners → (0,0), (1,0), (0,1), (1,1))
• compute the imaged circular points H_i [1, ±i, 0]^T
• fit a conic ω to the 6 imaged circular points
• compute K from ω = K^{-T} K^{-1} through Cholesky factorization
(= Zhang's calibration method)
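A minimal numpy sketch of this recipe, assuming the square-to-image homographies H_i have already been estimated (e.g. by the DLT from the four corner correspondences); the function names are illustrative. Each imaged circular point H_i [1, ±i, 0]^T = h_1 ± i h_2 lies on ω, which gives two linear constraints per square:

```python
import numpy as np

def _row(u, v):
    """Row of the linear system for u^T w v = 0, w a symmetric 3x3 (6 unknowns)."""
    return np.array([u[0]*v[0],
                     u[0]*v[1] + u[1]*v[0],
                     u[0]*v[2] + u[2]*v[0],
                     u[1]*v[1],
                     u[1]*v[2] + u[2]*v[1],
                     u[2]*v[2]])

def calibrate_from_square_homographies(Hs):
    """Fit the IAC w to the imaged circular points of each square, then K from
    w = K^{-T} K^{-1} by Cholesky (Zhang-style sketch)."""
    A = []
    for H in Hs:
        h1, h2 = H[:, 0], H[:, 1]              # H [1, +-i, 0]^T = h1 +- i h2
        A.append(_row(h1, h2))                 # imaginary part: h1^T w h2 = 0
        A.append(_row(h1, h1) - _row(h2, h2))  # real part: h1^T w h1 = h2^T w h2
    _, _, Vt = np.linalg.svd(np.asarray(A))
    a, b, c, d, e, f = Vt[-1]                  # null vector = conic coefficients
    w = np.array([[a, b, c], [b, d, e], [c, e, f]])
    if w[0, 0] < 0:                            # w is defined only up to sign
        w = -w
    L = np.linalg.cholesky(w)
    K = np.linalg.inv(L).T
    return K / K[2, 2]
```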

  4. Orthogonality relation

  5. Calibration from vanishing points and lines

  6. Calibration from vanishing points and lines

7. Two-view geometry: epipolar geometry and F-matrix computation; 3D reconstruction and structure computation.

8. Three questions:
• (i) Correspondence geometry: given an image point x in the first view, how does this constrain the position of the corresponding point x' in the second image?
• (ii) Camera geometry (motion): given a set of corresponding image points {x_i ↔ x'_i}, i = 1, …, n, what are the cameras P and P' for the two views?
• (iii) Scene geometry (structure): given corresponding image points x_i ↔ x'_i and cameras P, P', what is the position of (their pre-image) X in space?

  9. The epipolar geometry C,C’,x,x’ and X are coplanar

  10. The epipolar geometry What if only C,C’,x are known?

11. The epipolar geometry. All points on the plane π project onto l and l'.

12. The epipolar geometry. Family of planes π and lines l and l'; they all intersect in e and e'.

13. The epipolar geometry
• epipoles e, e' = intersection of the baseline with each image plane = projection of the other camera's projection center = vanishing point of the camera motion direction
• an epipolar plane = a plane containing the baseline (a 1-D family)
• an epipolar line = the intersection of an epipolar plane with the image (epipolar lines always come in corresponding pairs)

  14. Example: converging cameras

  15. Example: motion parallel with image plane

16. The fundamental matrix F: the algebraic representation of epipolar geometry. We will see that the mapping x ↦ l' is a (singular) correlation, i.e. a projective mapping from points to lines, represented by the fundamental matrix F.

17. The fundamental matrix F: algebraic derivation. e' is the image of C taken by the second camera.

18. The fundamental matrix F: correspondence condition. The fundamental matrix satisfies the condition that for any pair of corresponding points x ↔ x' in the two images, x'^T F x = 0.

19. The fundamental matrix F. F is the unique 3x3 rank-2 matrix that satisfies x'^T F x = 0 for all x ↔ x'.
• Transpose: if F is the fundamental matrix for (P, P'), then F^T is the fundamental matrix for (P', P)
• Epipolar lines: l' = F x and l = F^T x'
• Epipoles: e' lies on all epipolar lines l' = F x, thus e'^T F x = 0 for all x, so e'^T F = 0; similarly F e = 0
• e' is the left null space of F (and e its right null space)
• F has 7 d.o.f.: 3x3 = 9, minus 1 (homogeneous scale), minus 1 (rank-2 constraint)
• F is a correlation, a projective mapping from a point x to a line l' = F x (not a proper correlation, i.e. not invertible)
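As a small illustration of the null-space properties, the epipoles can be read off the SVD of F (a numpy sketch):

```python
import numpy as np

def epipoles(F):
    """e: right null vector (F e = 0); e2: left null vector (e2^T F = 0).
    Both are homogeneous 3-vectors, defined only up to scale."""
    _, _, Vt = np.linalg.svd(F)
    e = Vt[-1]
    _, _, Vt = np.linalg.svd(F.T)
    e2 = Vt[-1]
    return e, e2

# For a correspondence x <-> x2 (homogeneous image points):
#   F @ x    is the epipolar line of x in the second image (l' = F x),
#   F.T @ x2 is the epipolar line of x2 in the first image (l = F^T x'),
# and the correspondence condition reads x2 @ F @ x == 0.
```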

20. Projective transformation and invariance. The derivation is based purely on projective concepts, so F is invariant to transformations of projective 3-space: the cameras (P, P') determine F uniquely, but F determines (P, P') only up to such a transformation, hence a canonical form is chosen.

21. Projective ambiguity of cameras given F. Show that if F is the same for (P, P') and (P̃, P̃'), there exists a projective transformation H so that P̃ = HP and P̃' = HP'. Previous slide: at least a projective ambiguity; this slide: not more! Lemma: (22 − 15 = 7, i.e. two cameras minus a projective transformation leave the 7 d.o.f. of F, ok)

22. Canonical cameras given F. An F matrix corresponds to a pair P, P' iff P'^T F P is skew-symmetric. Possible choice (S a skew-symmetric matrix): P = [I | 0], P' = [S F | e']. Canonical representation: P = [I | 0], P' = [[e']_x F + e' v^T | λ e'].
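A numpy sketch of the simplest canonical choice (v = 0, λ = 1), assuming F is a valid rank-2 fundamental matrix; the helper names are illustrative:

```python
import numpy as np

def skew(v):
    """[v]_x, so that skew(v) @ y == np.cross(v, y)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def canonical_cameras(F):
    """Canonical pair for F:  P = [I | 0],  P' = [[e']_x F | e'],
    with e' the left epipole (e'^T F = 0)."""
    _, _, Vt = np.linalg.svd(F.T)
    e2 = Vt[-1]                                      # left epipole
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([skew(e2) @ F, e2.reshape(3, 1)])
    return P1, P2
```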

23. Projective reconstruction (see the sketch below):
• computation of F_1i with RANSAC (samples of 8 point correspondences)
• computation of canonical cameras from F_1i
• triangulation of points in 3D: compute the viewing rays from the cameras and intersect the rays associated with corresponding points
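A possible sketch of this pipeline, using OpenCV's RANSAC estimator for F, the canonical camera pair, and a linear (DLT) triangulation; `triangulate_dlt` and `projective_reconstruction` are illustrative names, not part of any library:

```python
import numpy as np
import cv2  # assumed available for the RANSAC 8-point step

def skew(v):
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

def triangulate_dlt(P1, P2, x1, x2):
    """Linear triangulation of one correspondence (x1, x2 homogeneous, third coord 1)."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X / X[3]

def projective_reconstruction(pts1, pts2):
    """pts1, pts2: Nx2 arrays of corresponding image points."""
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    _, _, Vt = np.linalg.svd(F.T)
    e2 = Vt[-1]                                        # left epipole, e2^T F = 0
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])      # canonical pair:
    P2 = np.hstack([skew(e2) @ F, e2.reshape(3, 1)])   # [I|0], [[e']_x F | e']
    inl = mask.ravel().astype(bool)
    X = np.array([triangulate_dlt(P1, P2, np.append(p1, 1.0), np.append(p2, 1.0))
                  for p1, p2 in zip(pts1[inl], pts2[inl])])
    return F, P1, P2, X
```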

24. The essential matrix: the fundamental matrix for calibrated cameras (the effect of K removed), E = K'^T F K. 5 d.o.f. (3 for R; 2 for t, up to scale). E is an essential matrix if and only if two of its singular values are equal (and the third = 0); this is checked via the SVD.
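A short numpy sketch of the relation and the singular-value test (function names are illustrative):

```python
import numpy as np

def essential_from_fundamental(F, K1, K2):
    """E = K2^T F K1, with K1, K2 the intrinsics of the first and second camera."""
    return K2.T @ F @ K1

def is_essential(E, tol=1e-6):
    """True iff the two largest singular values are equal and the third is zero."""
    s = np.linalg.svd(E, compute_uv=False)   # sorted in decreasing order
    return abs(s[0] - s[1]) <= tol * s[0] and s[2] <= tol * s[0]
```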

25. Motion from E. Given E, there are four solutions for the relative motion (R, t): two choices of rotation, each combined with ±t (and t is recovered only up to scale).
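The standard four-fold decomposition (as in Hartley & Zisserman), sketched in numpy:

```python
import numpy as np

W = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])

def motion_from_essential(E):
    """The four (R, t) candidates from E ~ U diag(1,1,0) V^T;
    t = u3 (third column of U), known only up to scale."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U                      # keep proper rotations (det = +1)
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```

Only one of the four candidates reconstructs the points in front of both cameras, which is the test of the next slide.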

26. Four possible reconstructions from E (only one solution has the points in front of both cameras).

  27. Self-calibration

28. Motivation: avoid the explicit calibration procedure
• complex procedure
• need for a calibration object
• need to maintain the calibration

29. Motivation: allow flexible acquisition
• no prior calibration necessary
• possibility to vary the intrinsics
• use of archive footage

30. Constraints?
• Scene constraints: parallelism, vanishing points, horizon, ...; distances, positions, angles, ... Unknown scene → no constraints
• Camera extrinsics constraints: pose, orientation, ... Unknown camera motion → no constraints
• Camera intrinsics constraints: focal length, principal point, aspect ratio & skew. Perspective camera model too general → some constraints

31. Constraints on intrinsic parameters
• Constant, e.g. a fixed camera
• Known, e.g. rectangular pixels, square pixels, principal point known
(the explicit forms are written out below)
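For reference, a sketch of these constraints on the calibration matrix, written with the usual notation (the symbols α_x, α_y, s, x_0, y_0 are assumed here, not defined on the slide itself):

```latex
K = \begin{bmatrix} \alpha_x & s & x_0 \\ 0 & \alpha_y & y_0 \\ 0 & 0 & 1 \end{bmatrix},
\qquad
\begin{aligned}
\text{constant (fixed camera):}\ & K_1 = K_2 = \dots = K_n \\
\text{rectangular pixels:}\ & s = 0 \\
\text{square pixels:}\ & s = 0,\ \alpha_x = \alpha_y \\
\text{principal point known:}\ & (x_0, y_0)\ \text{given}
\end{aligned}
```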

32. Self-calibration: upgrade from a projective structure to a metric structure using constraints on the intrinsic camera parameters
• Constant intrinsics (Faugeras et al. ECCV'92, Hartley '93, Triggs '97, Pollefeys et al. PAMI'98, ...)
• Some known intrinsics, others varying (Heyden & Astrom CVPR'97, Pollefeys et al. ICCV'98, ...)
• Constraints on intrinsics and restricted motion, e.g. pure translation, pure rotation, planar motion (Moons et al. '94, Hartley '94, Armstrong ECCV'96)

33. A counting argument
• To go from projective (15 d.o.f.) to metric (7 d.o.f.), at least 8 constraints are needed
• The minimal sequence length n should satisfy the corresponding count (see the sketch below)
• Independent of the algorithm
• Assumes general motion (i.e. not a critical motion sequence)
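A commonly used form of this counting (an assumption here): each known intrinsic gives one constraint per view and each fixed-but-unknown intrinsic gives one constraint per additional view, so n·n_known + (n−1)·n_fixed ≥ 8. A tiny helper with illustrative names:

```python
def min_views(n_known, n_fixed, needed=8):
    """Smallest sequence length n with n*n_known + (n-1)*n_fixed >= needed
    (assumed form of the counting argument; general motion assumed)."""
    n = 1
    while n * n_known + (n - 1) * n_fixed < needed:
        n += 1
    return n

# e.g. square pixels known (skew = 0, aspect ratio = 1):  min_views(2, 0) == 4
# all five intrinsics constant but unknown:               min_views(0, 5) == 3
```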

34. [Table: minimal number of views under each constraint — rectangular pixels, square pixels, known internal parameters]

35. Same camera for all images → same intrinsics → same image of the absolute conic (e.g. a moving camera). Given sufficient images, there is in general only one conic that projects to the same image in all views, i.e. the absolute conic. This approach is called self-calibration, see later. Transfer of the IAC between views via the infinite homography: ω_j = H_∞^{-T} ω_i H_∞^{-1}.

36. For example:
• compute F from x_i ↔ x'_i
• compute P, P' from F
• triangulate X_i from x_i ↔ x'_i
to obtain a projective reconstruction (P, P', {X_i}).

37. The metric reconstruction: the first camera is taken with t = 0. Let the unknown plane at infinity in the starting projective reconstruction be π_∞.


39. The self-calibration equations, with 8 unknowns: the 5 intrinsic parameters of K and the 3 parameters of π_∞.

40. The equations come from constraints on the intrinsics:
• known parameters (e.g. s = 0, or square pixels)
• fixed parameters (e.g. s, or the aspect ratio)
Needed views: n, as given by the counting argument above. Solution: nonlinear algorithms (Cipolla).

  41. Uncalibrated visual odometry for ground plane motion (joint work with Simone Gasparini)

42. Problem formulation
Given:
• an uncalibrated camera mounted on a robot
• the camera is fixed and aims at the floor
• the robot moves on a planar floor (the ground plane)
Determine:
• an estimate of the robot motion from features observed on the floor

43. Technique involved: estimate the ground-plane transformation (a homography) between images taken before and after a robot displacement, as sketched below.
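A possible sketch of this step using OpenCV's RANSAC homography estimator; the function name and the threshold are illustrative, and the floor features are assumed to be tracked already:

```python
import numpy as np
import cv2  # assumed available; any DLT + RANSAC homography estimator would do

def ground_plane_homography(pts_before, pts_after):
    """Estimate the inter-image homography induced by the ground plane from
    floor features tracked before/after a displacement (Nx2 pixel arrays)."""
    H, inliers = cv2.findHomography(np.asarray(pts_before, dtype=np.float64),
                                    np.asarray(pts_after, dtype=np.float64),
                                    cv2.RANSAC, 3.0)
    return H, inliers.ravel().astype(bool)
```

Since the ground-to-image homography T is fixed (next slides), this inter-image homography is conjugate, via T, to the robot's rigid motion on the ground plane (H ≈ T G T^{-1} up to scale and the sign convention chosen for the motion G).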

44. Motivations
• Dead-reckoning techniques are not reliable and diverge after a few steps [Borenstein96]
• Visual odometry techniques exploit cameras to recover motion; we use a single uncalibrated camera
• 3D reconstruction with an uncalibrated camera usually requires auto-calibration:
  - non-planar motion is required [Triggs98]
  - planar motion with different camera attitudes [Knight03]
  - special devices are required (e.g. PTZ cameras)
• Stereo camera approaches [Nister04, Takaoka04, Agrawal06, Cheng06]
• Catadioptric camera approaches [Bunschoten03, Corke04] assume a single viewpoint (difficult setup)
• Our method is similar in spirit to [Wang05] and [Benhimane06], but we do not assume camera calibration

45. Problem formulation
• A fixed, uncalibrated camera mounted on a robot; the pose of the camera w.r.t. the robot is unknown
• The projective transformation between the ground plane and the image plane is a homography T (3x3)
• T does not change with the robot motion
• T is unknown
