
Lecture 21: Structure and Motion from Video CAP 5415


Presentation Transcript


  1. Lecture 21: Structure and Motion from Video CAP 5415

  2. Today • Focus on a modern system for recovering the camera parameters and scene structure from a video

  3. Video

  4. Basic Steps • Make Initial Matches between Images • Estimate Camera Positions • Estimate Scene Shape • Re-Render
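
A hedged sketch of how these four steps might be strung together in code; every function name below is a hypothetical placeholder for one of the stages listed on this slide, not a function from the actual system.

```python
# Hypothetical skeleton of the pipeline on this slide.
# All helper names are placeholders, not functions from the lecture's system.

def structure_from_video(frames):
    tracks = make_initial_matches(frames)           # feature detection + tracking
    cameras = estimate_camera_positions(tracks)     # epipolar geometry + pose
    sparse = estimate_scene_shape(tracks, cameras)  # triangulation
    dense = densify(frames, cameras, sparse)        # pairwise stereo + fusion
    return re_render(dense, cameras)                # new views of the model
```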

  5. Making Matches • The first step is to find matching points in the images (All images from Pollefeys)

  6. Making Matches • Requirement: find points that can be tracked reliably between frames • Remember optical flow?

  7. Let’s analyze what’s going on • (Figure: an edge seen through a small window or aperture) • All rows are linearly dependent • One eigenvalue will be zero • Cannot recover the motion

  8. Can we recover the motion now? • (Figure: window or aperture)

  9. Finding good points for tracking • Compute the matrix M (built from the image gradients over a window) at all points • Look for points where both eigenvalues of M are large
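
A minimal sketch of this selection criterion using OpenCV and NumPy; M is the usual 2x2 matrix of windowed gradient products, and the window size and threshold below are illustrative values, not ones from the lecture.

```python
import cv2
import numpy as np

def good_points(gray, win=7, quality=0.01, max_pts=500):
    """Pick points where both eigenvalues of the 2x2 gradient matrix M are large.

    Sketch of the criterion on this slide; the window size and thresholds are
    illustrative, not values from the lecture.
    """
    Ix = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    Iy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)

    # Average the gradient products over a (win x win) window around each pixel.
    Ixx = cv2.boxFilter(Ix * Ix, -1, (win, win))
    Iyy = cv2.boxFilter(Iy * Iy, -1, (win, win))
    Ixy = cv2.boxFilter(Ix * Iy, -1, (win, win))

    # Smaller eigenvalue of M = [[Ixx, Ixy], [Ixy, Iyy]] at each pixel.
    trace = Ixx + Iyy
    det = Ixx * Iyy - Ixy * Ixy
    lam_min = 0.5 * (trace - np.sqrt(np.maximum(trace ** 2 - 4.0 * det, 0.0)))

    # Keep only the strongest responses.
    ys, xs = np.where(lam_min > quality * lam_min.max())
    order = np.argsort(-lam_min[ys, xs])[:max_pts]
    return np.stack([xs[order], ys[order]], axis=1)
```

OpenCV's cv2.goodFeaturesToTrack implements essentially this (Shi-Tomasi) criterion in a single call.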

  10. Tracking Points • First, identify good points • Track using SSD tracker • Fortunately, the displacements will be small in a video
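
A brute-force sketch of SSD tracking for a single point, relying on the small inter-frame displacements the slide mentions; the patch and search radii are arbitrary illustrative choices, and the point is assumed to lie away from the image border.

```python
import numpy as np

def track_ssd(prev, curr, pt, patch=7, search=5):
    """Track one point from frame `prev` to frame `curr` by brute-force SSD.

    Illustrative sketch only: patch and search radii are arbitrary choices,
    and the point is assumed to be far enough from the image border.
    """
    x, y = int(pt[0]), int(pt[1])
    r = patch // 2
    template = prev[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)

    best, best_dxy = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[y + dy - r:y + dy + r + 1,
                        x + dx - r:x + dx + r + 1].astype(np.float64)
            if cand.shape != template.shape:
                continue  # displacement falls outside the image
            ssd = np.sum((cand - template) ** 2)
            if ssd < best:
                best, best_dxy = ssd, (dx, dy)
    return (x + best_dxy[0], y + best_dxy[1])
```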

  11. Next Step: Refine Matches • Need to clean up the initial set of matches • Can use geometry • How?

  12. Next Step: Refine Matches • Need to clean up the initial set of matches • Can use geometry • Remember epipolar geometry? • What matrix expresses it?

  13. When the cameras are calibrated • The vectors Op, O'p', and OO' (the two viewing rays and the baseline between the camera centers O and O') are coplanar • This can be expressed as p'^T E p = 0, where E is the essential matrix (Image from Forsyth and Ponce)

  14. What if the cameras aren't calibrated? • A similar relationship still holds, but it is now written in pixel coordinates, so the calibration matrices enter the equation • The essential matrix combined with the two calibration matrices is known as the fundamental matrix • Encodes information from both the intrinsic and extrinsic parameters • Also rank 2
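
In symbols (the standard formulation, not copied verbatim from the slides): with calibrated cameras the constraint uses the essential matrix E, and in pixel coordinates the calibration matrices K and K' fold into the fundamental matrix F.

```latex
% Calibrated cameras (slide 13): p, p' in normalized camera coordinates
p'^{\top} E \, p = 0, \qquad E = [t]_{\times} R
% Uncalibrated cameras: p, p' now in pixel coordinates, K and K' the calibration matrices
p'^{\top} F \, p = 0, \qquad F = K'^{-\top} E \, K^{-1}, \qquad \operatorname{rank}(F) = 2
```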

  15. Finding the fundamental matrix • Basic algorithm: the 8-point algorithm • Find 8 corresponding points in the images • Once you have the corresponding points p and p', the constraint p'^T F p = 0 is linear in the entries of F
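
A NumPy sketch of the eight-point algorithm; the Hartley-style normalization step is standard practice but goes beyond what the slide states.

```python
import numpy as np

def eight_point(p, q):
    """Estimate F from N >= 8 correspondences p <-> q (Nx2 pixel coordinates).

    Sketch of the eight-point algorithm; the Hartley-style normalization is
    standard practice but is not spelled out on the slide.
    """
    def normalize(x):
        mean = x.mean(axis=0)
        scale = np.sqrt(2.0) / np.mean(np.linalg.norm(x - mean, axis=1))
        T = np.array([[scale, 0.0, -scale * mean[0]],
                      [0.0, scale, -scale * mean[1]],
                      [0.0, 0.0, 1.0]])
        return np.column_stack([x, np.ones(len(x))]) @ T.T, T

    ph, Tp = normalize(np.asarray(p, dtype=np.float64))
    qh, Tq = normalize(np.asarray(q, dtype=np.float64))

    # Each correspondence gives one equation q^T F p = 0, linear in the 9 entries of F.
    A = np.column_stack([qh[:, 0] * ph[:, 0], qh[:, 0] * ph[:, 1], qh[:, 0],
                         qh[:, 1] * ph[:, 0], qh[:, 1] * ph[:, 1], qh[:, 1],
                         ph[:, 0], ph[:, 1], np.ones(len(ph))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)

    # Enforce the rank-2 constraint by zeroing the smallest singular value.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt

    # Undo the normalization.
    return Tq.T @ F @ Tp
```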

  16. Using the fundamental matrix • Can use fundamental matrix to identify the epipolar lines • Can use epipolar lines to find bad matches • One problem: • Bad matches lead to poor estimates of fundamental matrix • Solution: Use RANSAC algorithm • Iteratively fits matrices with subsets of points • Finds subsets that satisfy the constraints well
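
A minimal sketch of this refinement step using OpenCV's built-in RANSAC estimator; the 1-pixel inlier threshold and 0.99 confidence are illustrative defaults, not values from the lecture.

```python
import cv2
import numpy as np

def refine_matches(pts1, pts2):
    """Keep only the matches consistent with a single epipolar geometry.

    pts1, pts2: Nx2 arrays of corresponding points from the tracker.
    The 1-pixel threshold and 0.99 confidence are illustrative defaults.
    """
    pts1 = np.asarray(pts1, dtype=np.float64)
    pts2 = np.asarray(pts2, dtype=np.float64)

    # RANSAC fits F to many random 8-point subsets and keeps the model
    # supported by the largest number of inliers.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC,
                                     ransacReprojThreshold=1.0,
                                     confidence=0.99)
    inliers = mask.ravel() == 1
    return F, pts1[inliers], pts2[inliers]
```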

  17. First Phase

  18. Next Step: Estimate Motion • Choose two images • Align world with first camera • The geometry of the second camera can be found from the fundamental matrix
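
One way to make this step concrete, assuming the intrinsic matrix K is known (the lecture's uncalibrated pipeline relies on self-calibration instead, which is not shown here): convert F to the essential matrix and decompose it.

```python
import cv2
import numpy as np

def relative_pose(F, K, pts1, pts2):
    """Recover the second camera relative to the first, given F and intrinsics K.

    Assumes K is known; the lecture's uncalibrated case would need a
    self-calibration step, which this sketch does not cover.
    """
    E = K.T @ F @ K  # essential matrix from the fundamental matrix

    # recoverPose tries the four (R, t) decompositions of E and returns the
    # one that places the triangulated points in front of both cameras.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # world aligned with camera 1
    P2 = K @ np.hstack([R, t])                         # second camera
    return P1, P2, R, t
```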

  19. Triangulating Points • Because of noise, points won't line up exactly • Find the 3D location of a matched pair by minimizing the difference between the projected location and the actual image location
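
A linear (DLT) triangulation sketch; it gives the algebraic solution that a real system would then refine by minimizing the reprojection error described on this slide.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence x1 <-> x2 (pixel coords).

    This algebraic solution is the usual starting point; the slide's criterion
    (minimizing reprojection error) would refine it with a small nonlinear fit.
    """
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # inhomogeneous 3D point
```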

  21. Can now add new views

  22. Result • Coarse estimate of geometry of the scene • Camera locations

  23. Next: Dense Geometry • Overview: • Calculate Stereo between pairs of images • Fuse Estimates • Need to rectify images first
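
A sketch of the rectification step using OpenCV's uncalibrated rectification, which computes warping homographies directly from the fundamental matrix and a set of matched points; a fully calibrated pipeline would use cv2.stereoRectify with K, R, t instead.

```python
import cv2
import numpy as np

def rectify_pair(img1, img2, pts1, pts2, F):
    """Warp two views so that corresponding points lie on the same scanline.

    Uses OpenCV's uncalibrated rectification driven by the fundamental matrix;
    a calibrated pipeline would use cv2.stereoRectify with K, R, t instead.
    """
    h, w = img1.shape[:2]
    pts1 = np.asarray(pts1, dtype=np.float64).reshape(-1, 1, 2)
    pts2 = np.asarray(pts2, dtype=np.float64).reshape(-1, 1, 2)

    ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts1, pts2, F, (w, h))
    rect1 = cv2.warpPerspective(img1, H1, (w, h))
    rect2 = cv2.warpPerspective(img2, H2, (w, h))
    return rect1, rect2
```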

  24. Stereo • Optimize stereo along scanlines • Does not consider smoothness across scanlines • Once the depth is fused across views, a mesh can be created
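
For the per-pair disparity computation, an off-the-shelf semi-global matcher can stand in for the scanline optimization described here (it actually aggregates costs along several directions, not just one scanline); all parameters below are generic defaults, not values from the lecture.

```python
import cv2

def scanline_disparity(rect1, rect2):
    """Compute a disparity map between two rectified grayscale images.

    All parameters below are generic defaults; numDisparities must be a
    multiple of 16 for the OpenCV matcher.
    """
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=128,
                                    blockSize=5,
                                    P1=8 * 5 * 5,    # smoothness penalties
                                    P2=32 * 5 * 5)
    # OpenCV returns fixed-point disparities scaled by 16.
    return matcher.compute(rect1, rect2).astype("float32") / 16.0
```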

  25. Conclusion • This is a nice system that shows how the concepts we have studied can be put together into a complete reconstruction pipeline
