
Inverse Depth Parameterization for Monocular SLAM Vision Seminar


Presentation Transcript


  1. Inverse Depth Parameterization for Monocular SLAM. Vision Seminar, 2009. 3. 25 (Wed). Young Ki Baik. Computer Vision Lab.

  2. References • Inverse Depth Parameterization for Monocular SLAM. J. Civera, A. J. Davison, J. M. M. Montiel (IEEE Trans. on Robotics, 2008) • Inverse Depth to Depth Conversion for Monocular SLAM. J. Civera, A. J. Davison, J. M. M. Montiel (ICRA 2007) • Unified Inverse Depth Parameterization for Monocular SLAM. J. M. M. Montiel, J. Civera, A. J. Davison (RSS 2006) Computer Vision Lab.

  3. Outline • What is SLAM? • What is Visual SLAM? • Overall process of SLAM • An issue with the map • Inverse depth parameterization • Conclusion Computer Vision Lab.

  4. What is SLAM? • SLAM (Simultaneous Localization and Mapping) is a technique used by robots and autonomous vehicles to build up a map of an unknown environment while at the same time keeping track of their current position within it. (Diagram: a robot asking "Where am I?" while alternating between observation and map building.) Computer Vision Lab.

  5. What is SLAM? • SLAM is usually built on recursive Bayesian estimation, using statistical techniques such as Kalman filters and particle filters (sequential Monte Carlo methods). Computer Vision Lab.
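To make the recursive Bayesian idea concrete, here is a minimal sketch of a single extended Kalman filter step, the kind of predict/update cycle an EKF-based SLAM system runs every frame. All names (f, h, F, H, Q, R) are generic assumptions for illustration, not an interface from the referenced papers:

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F, H, Q, R):
    """One recursive Bayesian (EKF) step.
    f/h: motion and measurement models; F/H: their Jacobians;
    Q/R: process and measurement noise covariances."""
    # Predict: propagate the state and its covariance through the motion model.
    x_pred = f(x, u)
    Fx = F(x, u)
    P_pred = Fx @ P @ Fx.T + Q
    # Update: fuse the new observation z.
    Hx = H(x_pred)
    S = Hx @ P_pred @ Hx.T + R              # innovation covariance
    K = P_pred @ Hx.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hx) @ P_pred
    return x_new, P_new
```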

  6. What is Visual SLAM? • SLAM can use many different types of sensors to acquire the observation data used in building the map, such as laser rangefinders, sonar sensors, and cameras. • Visual SLAM uses cameras as the sensor. Computer Vision Lab.

  7. Why Visual SLAM? • Vision data carries richer information (such as color, texture, and shape) than other sensors. Computer Vision Lab.

  8. Overall process of Visual SLAM: Initialization → Prediction → Measurement → Update → Map management, repeated every frame (a sketch of this loop follows). Computer Vision Lab.
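As one possible reading of the five stages above, a hedged sketch of how they might be wired together; every function name here is hypothetical:

```python
def run_visual_slam(camera, state):
    """Illustrative driver for the pipeline above (all names hypothetical)."""
    initialize_map(state, camera.grab())       # Initialization
    while camera.is_running():
        predict(state)                         # Prediction: apply the motion model
        frame = camera.grab()
        matches = measure(state, frame)        # Measurement: match map landmarks
        update(state, matches)                 # Update: filter correction
        manage_map(state, frame)               # Map management: add/remove LMs
    return state
```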

  9. Visual SLAM demo: MonoSLAM. Computer Vision Lab.

  10. Problems • Proposal • Data association • Filter • Map management • Real-time Computer Vision Lab.

  11. What is the map of visual SLAM? • Map (landmarks, LM): L_i = (y_i, Y_i) + patch, where y_i is the 3D position of the LM and Y_i is its 3x3 covariance matrix. • Robot (or camera): C = (r, q)^T, where r is the 3D position and q the 3D orientation. Computer Vision Lab.
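One way to mirror the slide's notation in code, as a minimal sketch; note that in a real EKF implementation the camera and all landmarks share one joint state vector and one full covariance matrix rather than per-landmark covariances:

```python
import numpy as np
from dataclasses import dataclass
from typing import Optional

@dataclass
class Landmark:
    y: np.ndarray                        # 3D position of the LM
    Y: np.ndarray                        # 3x3 covariance of that position
    patch: Optional[np.ndarray] = None   # image patch used for matching

@dataclass
class CameraState:
    r: np.ndarray                        # 3D position
    q: np.ndarray                        # 3D orientation (unit quaternion)
```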

  12. What is the map of visual SLAM? • Robot and map together. (Diagram: camera state C_6D = (r, q)^T among landmarks L_1 = (y_1, Y_1), L_2 = (y_2, Y_2), ...) Computer Vision Lab.

  13. How can we obtain initial LM info.? • Binocular camera case: 3D landmarks are directly reconstructed from a stereo image pair, since a binocular camera always provides parallax. Parallax: the angle between the rays capturing the same point from different viewpoints. (Diagram: camera C_6D = (r, q)^T triangulating a landmark L = (y, Y).) A triangulation sketch follows. Computer Vision Lab.
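For intuition, a minimal sketch of depth from a rectified stereo pair: the disparity between the two views is the image-space expression of parallax, and depth follows directly from it (parameter names assumed):

```python
def stereo_depth(f_x, baseline, u_left, u_right):
    """Depth Z = f_x * b / d for a rectified stereo pair,
    where disparity d = u_left - u_right (in pixels)."""
    d = u_left - u_right
    if d <= 0:
        return float("inf")  # no parallax: the point is effectively at infinity
    return f_x * baseline / d
```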

  14. How can we obtain initial LM info.? • Monocular camera case: is it possible to reconstruct 3D landmarks directly with a monocular camera? A single view gives only a ray, with no depth. (Diagram: camera C_6D = (r, q)^T and an undetermined landmark L = (y, Y).) Computer Vision Lab.

  15. How can we obtain initial LM info.? • Delayed initialization of LM location • Batch update [Dean 2000, Bailey 2003]: a large baseline guarantees high parallax. • But we cannot always expect a large baseline → the problem is the distance from the camera to the LM. Computer Vision Lab.

  16. How can we obtain initial LM info.? • Delayed initialization of LM location • Gaussian sum filter [Kwok 2005, Sola 2005]: initialize multiple predefined hypotheses at various depths, then prune those not re-observed in subsequent images. • → It covers only the predefined depths. • → It cannot cover distant depths. • → It cannot cover low-parallax cases. Computer Vision Lab.

  17. How can we obtain initial LM info.? • Undelayed initialization of LM • Inverse depth parameterization [Montiel 2006-2008]: initialize a ray and update its uncertainty via inverse depth coding. • → It can represent depth all the way to infinity. Computer Vision Lab.

  18. How can we obtain initial LM info.? • Undelayed initialization of LM • Inverse depth parameterization [Montiel 2006-2008] • Contributions: initializes the LM immediately, covers LMs at infinite depth, and covers low-parallax cases. Computer Vision Lab.

  19. Inverse Depth Parameterization • Overview: L_XYZ = (X, Y, Z)^T = (x, y, z)^T + (1/ρ) m(θ, φ), where (x, y, z)^T is the camera position at the first observation, m(θ, φ) is the unit ray direction, and d = 1/ρ is the depth along that ray. (Diagram: world frame W, camera C_6D = (r^WC, q^WC)^T at r^WC, parallax angle α.) Computer Vision Lab.

  20. Inverse Depth Parameterization • Definition (point parameterization) • X-Y-Z point parameterization: L_XYZ = (X, Y, Z)^T • Inverse depth point parameterization: L_IDP = (x, y, z, θ, φ, ρ)^T, with L_XYZ = (x, y, z)^T + (1/ρ) m(θ, φ) and m(θ, φ) = (cos φ sin θ, -sin φ, cos φ cos θ)^T. Computer Vision Lab.
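A direct transcription of these two formulas into code, as a sketch:

```python
import numpy as np

def m(theta, phi):
    """Unit ray direction from azimuth theta and elevation phi (slide 20)."""
    return np.array([np.cos(phi) * np.sin(theta),
                     -np.sin(phi),
                     np.cos(phi) * np.cos(theta)])

def idp_to_point(L_idp):
    """World point encoded by the 6-D inverse depth landmark (x,y,z,theta,phi,rho)."""
    x, y, z, theta, phi, rho = L_idp
    return np.array([x, y, z]) + (1.0 / rho) * m(theta, phi)
```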

  21. Inverse Depth Parameterization • Definition (measurement equation) • X-Y-Z system: h^C = h_XYZ = R^CW [(X, Y, Z)^T - r^WC] • Inverse depth system: h^C = h_ρ = R^CW [ρ((x, y, z)^T - r^WC) + m(θ, φ)] • Pixel projection: (u, v)^T = (u_0 - f_x h_x^C / h_z^C, v_0 - f_y h_y^C / h_z^C) • The inverse depth form can be safely used even for points at infinity (ρ = 0)! Computer Vision Lab.
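A sketch of the inverse depth measurement equation, reusing m() and the numpy import from the previous block; note that it never divides by ρ, which is exactly what makes ρ = 0 safe:

```python
def h_rho(R_cw, r_wc, L_idp):
    """Ray to the landmark in the camera frame (inverse depth form, slide 21).
    Well-defined even for rho = 0, i.e. a point at infinity."""
    x, y, z, theta, phi, rho = L_idp
    return R_cw @ (rho * (np.array([x, y, z]) - r_wc) + m(theta, phi))

def project(h_c, u0, v0, f_x, f_y):
    """Pinhole projection with the slide's sign convention."""
    return np.array([u0 - f_x * h_c[0] / h_c[2],
                     v0 - f_y * h_c[1] / h_c[2]])
```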

  22. Inverse Depth Parameterization • Initialization of LM using IDP: L_IDP = (x, y, z, θ, φ, ρ)^T, with the ray origin (x, y, z)^T set to the current camera position, so for a camera C = (r, q)^T the new landmark is L_IDP = (r, θ, φ, ρ)^T. Computer Vision Lab.

  23. Inverse Depth Parameterization • Initialization of LM using IDP: the observed image point (u', v', 1)^T is back-projected into the world frame, h^W = R^WC (u', v', 1)^T, and then θ = arctan2(h_x^W, h_z^W), φ = arctan2(-h_y^W, sqrt((h_x^W)^2 + (h_z^W)^2)), ρ = 0.1 (or another constant prior). The new landmark is L_IDP = (r, θ, φ, ρ)^T. Computer Vision Lab.
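The same recipe as a sketch, assuming (u', v') are already undistorted, normalized image coordinates and reusing the numpy import from the earlier blocks:

```python
def init_idp_landmark(r_wc, R_wc, u_n, v_n, rho0=0.1):
    """New inverse depth landmark from a single observation (slide 23).
    rho0 is the constant inverse depth prior; 0.1 follows the slide."""
    h_w = R_wc @ np.array([u_n, v_n, 1.0])   # back-projected ray in world frame
    theta = np.arctan2(h_w[0], h_w[2])
    phi = np.arctan2(-h_w[1], np.sqrt(h_w[0]**2 + h_w[2]**2))
    return np.array([*r_wc, theta, phi, rho0])
```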

  24. Inverse Depth Parameterization • Initialization of LM using IDP: after adding the landmark, the state covariance matrix is updated by stacking the old state covariance, the measurement covariance, and an initial inverse depth variance, and propagating them through the Jacobian of the initialization function. Computer Vision Lab.
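Sketched in code under the assumption that the new landmark is produced by some initialization function of (old state, pixel measurement, initial ρ), whose Jacobian J is supplied by the caller:

```python
import numpy as np

def augment_covariance(P, R_meas, sigma_rho, J):
    """Grow the state covariance when a landmark is added (slide 24):
    P_new = J · diag(P, R_meas, sigma_rho^2) · J^T."""
    n, k = P.shape[0], R_meas.shape[0]
    big = np.zeros((n + k + 1, n + k + 1))
    big[:n, :n] = P                     # old state covariance
    big[n:n + k, n:n + k] = R_meas      # measurement covariance
    big[-1, -1] = sigma_rho ** 2        # inverse depth prior variance
    return J @ big @ J.T
```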

  25. Inverse Depth Parameterization • Switching from inverse depth to XYZ: once a landmark's depth is well constrained, L_IDP is converted to L_XYZ = (X, Y, Z)^T = (x, y, z)^T + (1/ρ) m(θ, φ), and its covariance is propagated accordingly: P_XYZ = J P_IDP J^T with J = ∂L_XYZ/∂L_IDP. Computer Vision Lab.
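A sketch of this conversion, with the Jacobian derived by differentiating L_XYZ = (x, y, z)^T + (1/ρ) m(θ, φ); it reuses m() and the numpy import from the slide 20 block:

```python
def idp_to_xyz_with_cov(L_idp, P_idp):
    """Convert an inverse depth landmark to XYZ and propagate its covariance:
    P_xyz = J P_idp J^T with J = d(L_xyz)/d(L_idp)  (slide 25)."""
    x, y, z, theta, phi, rho = L_idp
    point = np.array([x, y, z]) + (1.0 / rho) * m(theta, phi)
    dm_dtheta = np.array([np.cos(phi) * np.cos(theta), 0.0,
                          -np.cos(phi) * np.sin(theta)])
    dm_dphi = np.array([-np.sin(phi) * np.sin(theta),
                        -np.cos(phi),
                        -np.sin(phi) * np.cos(theta)])
    J = np.zeros((3, 6))
    J[:, :3] = np.eye(3)                    # d point / d (x, y, z)
    J[:, 3] = dm_dtheta / rho
    J[:, 4] = dm_dphi / rho
    J[:, 5] = -m(theta, phi) / rho ** 2     # d point / d rho
    return point, J @ P_idp @ J.T
```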

  26. Inverse Depth Parameterization • Demo • Monocular SLAM based on EKF Computer Vision Lab.

  27. Inverse Depth Parameterization • Demo • Monocular SLAM based on PF with OIF Computer Vision Lab.

  28. Conclusion • Pros: • IDP is robust for monocular SLAM. • Non-delayed LM initialization. • It handles any point in the scene, close or distant, or even at "infinity". • It deals simultaneously with low- and high-parallax cases. • Cons: • IDP requires a 6-D vector per landmark → this doubles the landmark part of the state vector. Computer Vision Lab.

  29. Q & A Computer Vision Lab.
