
KinectFusion : Real-Time Dense Surface Mapping and Tracking



  1. KinectFusion : Real-Time Dense Surface Mapping and Tracking IEEE International Symposium on Mixed and Augmented Reality 2011 Science and Technology Proceedings (Best Paper Award)

  2. Target • Greyscale renderings • Normal maps • Noisy raw depth data

  3. Outline • Introduction • Motivation • Background • System diagram • Experiment results • Conclusion

  4. Introduction • Passive camera • Simultaneous localization and mapping (SLAM) • Structure from motion (SFM) • MonoSLAM [8] (ICCV 2003) • Parallel Tracking and Mapping [17] (ISMAR 2007) • Disparity • Depth model [26] (2010) • Pose of camera from Depth models [20] (ICCV 2011)

  5. Motivation • Active camera : Kinect sensor • Pose estimation from depth information • Real-time mapping • GPU

  6. Background – Camera sensor • Kinect sensor • Infra-red light • Input information • (1) RGB image • (2) Calibrated depth image from the raw depth data

  7. Background – Pose estimation • Depth maps from two views • Iterative closest point (ICP) • Point-plane metric [5]

  8. Background – Pose estimation • Projective data association algorithm [4]
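
As a minimal sketch of projective data association (not the authors' code; the array layout and names are assumptions), a 3-D point is projected into the current frame with the intrinsic matrix K and its correspondence is read off the vertex map at the resulting pixel, replacing an expensive nearest-neighbour search:

import numpy as np

def projective_association(p, vertex_map, K):
    """Correspondence for a camera-frame 3-D point p by projection.

    p          : (3,) point in the camera frame, with p[2] > 0
    vertex_map : (H, W, 3) back-projected vertices of the current frame
    K          : (3, 3) camera intrinsic matrix
    Returns the associated vertex, or None if p falls outside the image.
    """
    q = K @ p                                  # perspective projection
    u = int(round(q[0] / q[2]))
    v = int(round(q[1] / q[2]))
    H, W = vertex_map.shape[:2]
    if 0 <= v < H and 0 <= u < W:
        return vertex_map[v, u]                # correspondence by pixel lookup
    return None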

  9. Background – Scene Representation • Volume of space • Signed distance function [7]

  10. System Diagram

  11. System Diagram

  12. Pre-defined parameters • Pose estimation with the sensor camera • Raw depth map R_k • Calibrated depth R_k(u) at each pixel u = (u, v)^T back-projects to the metric point p = R_k(u) K^{-1} \dot{u}, where K is the camera intrinsic matrix and \dot{u} := (u^T | 1)^T is the pixel in homogeneous coordinates
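
A hedged sketch of this back-projection step (function and variable names are illustrative): every pixel with a valid depth is lifted to a metric camera-frame point p = R_k(u) K^{-1} \dot{u}:

import numpy as np

def back_project(depth, K):
    """Lift a calibrated depth image into a vertex map:
    p(u, v) = depth(u, v) * K^{-1} * (u, v, 1)^T.

    depth : (H, W) calibrated depth in metres (0 marks invalid pixels)
    K     : (3, 3) camera intrinsic matrix
    Returns an (H, W, 3) array of camera-frame points.
    """
    H, W = depth.shape
    K_inv = np.linalg.inv(K)
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)  # homogeneous pixels u_dot
    rays = pix @ K_inv.T                              # K^{-1} u_dot per pixel
    return rays * depth[..., None]                    # scale each ray by depth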

  13. Surface Measurement • Reduce noise with a bilateral filter [Figure: normal maps computed with vs. without bilateral filtering]
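
A minimal bilateral filter over the depth map (a sketch assuming a square window and Gaussian kernels; the window radius and sigma values are illustrative, not the paper's):

import numpy as np

def bilateral_filter(depth, radius=3, sigma_s=4.0, sigma_r=0.03):
    """Edge-preserving depth smoothing: each output pixel is a normalized
    sum of its neighbours, weighted both by spatial distance (sigma_s, in
    pixels) and by depth difference (sigma_r, in metres), so noise is
    reduced while depth discontinuities are preserved."""
    out = np.zeros_like(depth)
    weights = np.zeros_like(depth)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # np.roll wraps at image borders; a real implementation
            # would treat the border pixels explicitly
            shifted = np.roll(depth, (dy, dx), axis=(0, 1))
            w = (np.exp(-(dx * dx + dy * dy) / (2.0 * sigma_s ** 2))
                 * np.exp(-(depth - shifted) ** 2 / (2.0 * sigma_r ** 2)))
            out += w * shifted
            weights += w
    return out / weights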

  14. Surface Measurement • Vertex map V_k from the back-projected depth map • Normal map N_k from neighbouring vertices (see the sketch below)
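
Normals can be estimated directly from the vertex map by crossing neighbouring vertex differences (a sketch; the forward-difference scheme is an assumption):

import numpy as np

def compute_normals(vertex_map):
    """Per-pixel surface normals from an (H, W, 3) vertex map:
    n(u, v) ∝ (V(u+1, v) − V(u, v)) × (V(u, v+1) − V(u, v)),
    normalized to unit length; output is (H−1, W−1, 3)."""
    dx = np.diff(vertex_map, axis=1)[:-1, :, :]   # neighbour along +u
    dy = np.diff(vertex_map, axis=0)[:, :-1, :]   # neighbour along +v
    n = np.cross(dx, dy)
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.maximum(norm, 1e-12)            # guard against zero vectors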

  15. Define camera pose • Camera frame k is transformed into the global frame g by the rigid-body transform T_{g,k} = [R_{g,k} | t_{g,k}]
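
The frame change itself is a one-liner (sketch; R and t stand for the rotation and translation of T_{g,k}): vertices take the full rigid-body transform, normals only the rotation:

import numpy as np

def to_global_frame(vertices, normals, R, t):
    """Map camera-frame vertices and normals into the global frame:
    v_g = R v + t for points, n_g = R n for directions."""
    return vertices @ R.T + t, normals @ R.T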

  16. System Diagram

  17. Surface Reconstruction : Operating environment [Figure: an L × L × L cube; the scene is reconstructed in an L^3-voxel volume]

  18. Surface Reconstruction • Signed distance function

  19. Truncated Signed Distance Function [Figure: 1-D profile of F_k(p) along the sensor's viewing axis: truncated at +v in free space in front of the surface, passing through 0 at the surface interface, and truncated at −v behind it]

  20. Weighted running average • TSDF values from each frame are fused by a weighted running average • Provides robustness to dynamic object motion (see the sketch below)
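
An illustrative per-voxel TSDF integration step (a sketch of projective TSDF fusion; the flattened grid layout, the constant per-frame weight, and the symbol names are assumptions). Each voxel centre is brought into the camera frame, projected into the depth map, and the stored value is fused by the weighted running average F ← (W·F + w·f) / (W + w), with W capped at a maximum:

import numpy as np

def integrate_tsdf(F, W, centers, depth, K, R, t, trunc=0.03, w_max=64.0):
    """Fuse one depth frame into a TSDF volume.

    F, W    : (N,) stored TSDF values and weights for N flattened voxels
    centers : (N, 3) voxel centres in the global frame
    depth   : (H, W_img) calibrated depth map
    K       : (3, 3) camera intrinsics
    R, t    : transform taking global points into the camera frame
              (the inverse of the camera pose T_{g,k})
    """
    H, W_img = depth.shape
    p_cam = centers @ R.T + t                      # voxel centres, camera frame
    z = p_cam[:, 2]
    z_safe = np.where(z > 0, z, 1.0)               # guard against divide-by-zero
    q = p_cam @ K.T                                # perspective projection
    u = np.round(q[:, 0] / z_safe).astype(int)
    v = np.round(q[:, 1] / z_safe).astype(int)
    valid = (z > 0) & (u >= 0) & (u < W_img) & (v >= 0) & (v < H)
    d = depth[v.clip(0, H - 1), u.clip(0, W_img - 1)]
    valid &= d > 0                                 # skip missing depth readings
    sdf = d - z                                    # signed distance along the ray
    f = np.clip(sdf / trunc, -1.0, 1.0)            # truncate to [-1, 1]
    upd = valid & (sdf > -trunc)                   # ignore voxels far behind surface
    w_new = 1.0                                    # constant frame weight (assumed)
    F[upd] = (W[upd] * F[upd] + w_new * f[upd]) / (W[upd] + w_new)
    W[upd] = np.minimum(W[upd] + w_new, w_max)     # cap keeps the average adaptive
    return F, W

Capping the weight at w_max keeps the running average responsive to change, which is what lends the fusion its robustness to dynamic object motion.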

  21. System Diagram

  22. Surface Prediction from Ray Casting • Store a predicted vertex and normal for each pixel • Ray casting marches along each pixel's corresponding ray from the +v region to the zero-crossing of the TSDF

  23. Surface Prediction from Ray Casting • Speed-up • Ray skipping: while far from the surface, steps of up to the truncation distance are safe (see the marching sketch below)
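
A per-ray marching sketch (nearest-voxel sampling instead of trilinear interpolation; step sizes and names are assumptions): the ray advances in large steps while the TSDF says the surface is still far away (ray skipping), then the positive-to-negative zero-crossing is located by linear interpolation:

import numpy as np

def raycast(tsdf, origin, direction, voxel_size, trunc=0.03, t_max=4.0):
    """March one ray through a TSDF volume and return the first
    positive-to-negative zero-crossing: the predicted surface point.

    tsdf       : (X, Y, Z) volume of truncated signed distances in [-1, 1]
    origin     : (3,) ray origin in volume coordinates (metres)
    direction  : (3,) unit ray direction
    voxel_size : edge length of one voxel in metres
    """
    def sample(p):
        idx = np.floor(p / voxel_size).astype(int)
        if np.any(idx < 0) or np.any(idx >= np.array(tsdf.shape)):
            return None                            # outside the volume
        return tsdf[tuple(idx)]

    min_step = voxel_size / 2.0
    t, t_prev, f_prev = 0.0, 0.0, None
    while t < t_max:
        f = sample(origin + t * direction)
        if f is None:
            t_prev, f_prev = t, None               # not inside the volume yet
            t += trunc
            continue
        if f_prev is not None and f_prev > 0.0 and f < 0.0:
            # linear interpolation between (t_prev, f_prev) and (t, f)
            t_hit = t_prev + (t - t_prev) * f_prev / (f_prev - f)
            return origin + t_hit * direction
        t_prev, f_prev = t, f
        # ray skipping: while in free space (f near +1) it is safe to
        # advance by up to one truncation distance
        t += max(min_step, f * trunc) if f > 0 else min_step
    return None                                    # no surface hit on this ray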

  24. System Diagram

  25. Sensor Pose Estimation • Align the current frame against the previous frame • Assume small inter-frame motion • Fast projective data association algorithm • Initialized with the previous frame's pose T_{g,k−1}

  26. Vertex correspondences • Projective data association gives, for each pixel u, a corresponding predicted vertex \hat{V}^g_{k-1}(\hat{u}) and normal \hat{N}^g_{k-1}(\hat{u}) • Point-plane energy: E(T_{g,k}) = \sum_u \left\| \left( T_{g,k} \dot{V}_k(u) - \hat{V}^g_{k-1}(\hat{u}) \right)^\top \hat{N}^g_{k-1}(\hat{u}) \right\|_2

  27. For iteration z > 0 • The energy is linearized around the previous iteration's estimate T^{z-1}_{g,k} with a small-angle approximation of the incremental rotation, giving a 6 × 6 linear system in the update parameters (see the sketch below)
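
One linearized Gauss-Newton step for the point-plane energy (a sketch; the small-angle parameterization x = (ω, t) with R p ≈ p + ω × p is the standard one, and correspondences are assumed already found by projective data association):

import numpy as np

def point_plane_step(src_pts, dst_pts, dst_nrm):
    """Solve the 6 × 6 normal equations of the linearized point-plane
    energy  sum_i ((R p_i + t − q_i) · n_i)^2.

    src_pts : (N, 3) measured points under the previous pose estimate
    dst_pts : (N, 3) corresponding predicted surface points
    dst_nrm : (N, 3) predicted surface normals (unit length)
    Returns x = (omega, t), the 6-vector pose update.
    """
    # residual of pair i is  omega·(p_i × n_i) + t·n_i + (p_i − q_i)·n_i
    J = np.hstack([np.cross(src_pts, dst_nrm), dst_nrm])     # (N, 6) Jacobian
    r = np.einsum('ij,ij->i', dst_pts - src_pts, dst_nrm)    # (N,) residuals
    return np.linalg.solve(J.T @ J, J.T @ r)                 # 6 × 6 solve

Applying the update to the pose and repeating for a few iterations per frame yields the tracked camera pose; in the paper the system is assembled in parallel on the GPU, while this sketch uses plain NumPy for clarity.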

  28. Experiment Results • Reconstruction resolution : 256³ • Test of camera pose tracking • The Kinect camera rotates on a turntable, capturing 560 frames over 19 seconds

  29. Experiment Results • Using every 8th frame

  30. Experiment Results : Processing time • Pre-processing of raw data • Data association and pose optimisation • Raycasting of the surface prediction • Surface measurement integration • Demo

  31. Conclusion • Robust tracking of the camera pose by aligning all depth points • Parallel algorithms for both tracking and mapping

  32. Reference [8] A. J. Davison. Real-time simultaneous localization and mapping with a single camera. In Proceedings of the International Conference on Computer Vision (ICCV), 2003. [17] G. Klein and D. W. Murray. Parallel tracking and mapping for small AR workspaces. In Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR), 2007. [26] J. Stuehmer, S. Gumhold, and D. Cremers. Real-time dense geometry from a handheld camera. In Proceedings of the DAGM Symposium on Pattern Recognition, 2010.

  33. [20] R. A. Newcombe, S. J. Lovegrove, and A. J. Davison. DTAM: Dense tracking and mapping in real-time. In Proceedings of the International Conference on Computer Vision (ICCV), 2011. [7] B. Curless and M. Levoy. A volumetric method for building complex models from range images. In ACM Transactions on Graphics (SIGGRAPH), 1996. [5] Y. Chen and G. Medioni. Object modeling by registration of multiple range images. Image and Vision Computing (IVC), 10(3):145–155, 1992. [4] G. Blais and M. D. Levine. Registering multiview range data to create 3D computer objects. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 17(8):820–824, 1995.
