RGB-D Mapping: Using Depth Cameras for Dense 3D Modeling of Indoor Environments

Peter Henry¹, Michael Krainin¹, Evan Herbst¹, Xiaofeng Ren², and Dieter Fox¹,²
¹University of Washington Computer Science and Engineering, ²Intel Labs Seattle


Presentation Transcript


  1. RGB-D Mapping: Using Depth Cameras for Dense 3D Modeling of Indoor Environments
  Peter Henry¹, Michael Krainin¹, Evan Herbst¹, Xiaofeng Ren², and Dieter Fox¹,²
  ¹University of Washington Computer Science and Engineering
  ²Intel Labs Seattle

  2. RGB-D Data

  3. Related Work
  • SLAM
    • [Davison et al., PAMI 2007] (monocular)
    • [Konolige et al., IJR 2010] (stereo)
    • [Pollefeys et al., IJCV 2007] (multi-view stereo)
    • [Borrmann et al., RAS 2008] (3D laser)
    • [May et al., JFR 2009] (ToF sensor)
  • Loop Closure Detection
    • [Nister et al., CVPR 2006]
    • [Paul et al., ICRA 2010]
  • Photo collections
    • [Snavely et al., SIGGRAPH 2006]
    • [Furukawa et al., ICCV 2009]

  4. System Overview
  • Frame-to-frame alignment
  • Loop closure detection and global consistency
  • Map representation

  5. Alignment (3D SIFT RANSAC)
  • SIFT features (from the image) lifted to 3D (from depth)
  • RANSAC alignment requires no initial estimate
  • Requires sufficient matched features
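
The alignment step above can be sketched as a minimal RANSAC over matched 3D keypoints. This is an illustrative sketch, not the authors' code: it assumes SIFT matching has already produced paired 3D points `P` (source) and `Q` (target), and uses the standard Kabsch/SVD solution for each 3-point sample.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) with R @ P[i] + t ~= Q[i] (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def ransac_align(P, Q, iters=300, thresh=0.03, seed=1):
    """RANSAC over minimal 3-point samples of matched 3D keypoints.
    No initial estimate is needed, but enough correct matches must exist."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)
        R, t = rigid_transform(P[idx], Q[idx])
        resid = np.linalg.norm(P @ R.T + t - Q, axis=1)
        inliers = resid < thresh
        if inliers.sum() > best.sum():
            best = inliers
    R, t = rigid_transform(P[best], Q[best])  # refine on the full inlier set
    return R, t, best
```

Because each hypothesis comes from only three matches, the method degrades when too few correct feature matches survive, which is exactly the failure case shown on the next slide.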

  6. RANSAC Failure Case

  7. Alignment (ICP)
  • Does not need visual features or color
  • Requires sufficient geometry and an initial estimate
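
One Gauss-Newton step of point-to-plane ICP can be sketched as below. This is the textbook small-angle linearization, not the authors' implementation, and it assumes correspondences `src[i] -> tgt[i]` with target normals have already been found (e.g. by nearest-neighbor search from the initial estimate).

```python
import numpy as np

def point_to_plane_step(src, tgt, normals):
    """One linearized point-to-plane update: find (R, t) minimizing
    sum_i ( n_i . (R @ src_i + t - tgt_i) )^2 for small rotations.

    With R ~ I + [w]x, the residual linearizes to
      (src_i x n_i) . w  +  n_i . t  +  n_i . (src_i - tgt_i)
    """
    A = np.hstack([np.cross(src, normals), normals])   # (N, 6) Jacobian rows
    b = -np.einsum('ij,ij->i', src - tgt, normals)     # (N,) negated residuals
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    w, t = x[:3], x[3:]
    K = np.array([[0.0, -w[2],  w[1]],
                  [w[2],  0.0, -w[0]],
                  [-w[1], w[0],  0.0]])
    U, _, Vt = np.linalg.svd(np.eye(3) + K)            # project back onto SO(3)
    return U @ Vt, t
```

In practice this step is iterated with re-estimated correspondences; it needs enough geometric variation (non-parallel normals) to constrain all six degrees of freedom, which is why featureless flat scenes are a failure case.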

  8. ICP Failure Case

  9. Joint Optimization (RGBD-ICP)
  • Jointly optimizes over fixed feature matches and dense point-to-plane ICP correspondences
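
In the spirit of the paper, the joint error combines a sparse term over the fixed RANSAC feature matches with a dense point-to-plane term, traded off by a weight α (the notation here is illustrative):

```latex
T^{*} = \operatorname*{arg\,min}_{T}\;
  \alpha \cdot \frac{1}{|A_f|} \sum_{(i,j)\in A_f} \bigl\| T(f_s^{\,i}) - f_t^{\,j} \bigr\|^{2}
  \;+\; (1-\alpha) \cdot \frac{1}{|A_d|} \sum_{(i,j)\in A_d}
  \Bigl( \bigl( T(p_s^{\,i}) - p_t^{\,j} \bigr) \cdot n_t^{\,j} \Bigr)^{2}
```

Here $A_f$ are the feature associations (held fixed during optimization), $A_d$ the dense point associations (re-estimated each iteration), and $n_t^{\,j}$ the target surface normals. The fixed feature term anchors the solution where geometry alone is ambiguous, while the dense term refines it where texture is missing.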

  10. Loop Closure
  • Sequential alignments accumulate error
  • Revisiting a previous location results in an inconsistent map

  11. Loop Closure Detection
  • Detect by running RANSAC against previous frames
  • Pre-filter options (for efficiency):
    • Only a subset of frames (keyframes)
      • Every nth frame, or
      • New keyframe when RANSAC fails directly against the previous keyframe
    • Only keyframes with a similar estimated 3D pose
    • Place recognition using a vocabulary tree
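
The pose-similarity pre-filter can be sketched as follows. The function names and thresholds are illustrative assumptions, not from the authors' system: the idea is simply to hand only nearby keyframes to the expensive RANSAC check.

```python
import numpy as np

def pose_distance(Ta, Tb):
    """Translational and rotational distance between two 4x4 camera poses."""
    d = np.linalg.norm(Ta[:3, 3] - Tb[:3, 3])
    cos_ang = (np.trace(Ta[:3, :3].T @ Tb[:3, :3]) - 1.0) / 2.0
    return d, np.arccos(np.clip(cos_ang, -1.0, 1.0))

def loop_closure_candidates(keyframe_poses, pose, max_dist=3.0, max_angle=np.pi / 2):
    """Pre-filter: only keyframes whose estimated pose is near the current
    frame's pose are passed on to RANSAC-based loop closure detection."""
    out = []
    for i, kf in enumerate(keyframe_poses):
        d, ang = pose_distance(kf, pose)
        if d <= max_dist and ang <= max_angle:
            out.append(i)
    return out
```

Note the caveat: this filter relies on the estimated poses, which are exactly what accumulated drift corrupts, so a vocabulary-tree place recognizer complements it for large loops.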

  12. Loop Closure Correction
  • TORO [Grisetti 2007]:
    • Constraints between camera locations in a pose graph
    • Maximum-likelihood global camera poses

  13. Map Representation: Surfels
  • Surface elements [Pfister 2000, Weise 2009, Krainin 2010]
  • Circular surface patches
  • Accumulate color / orientation / size information
  • Incremental, independent updates
  • Incorporate occlusion reasoning
  • 750 million points reduced to 9 million surfels
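
The accumulation idea above can be sketched as a running weighted average per surfel. The field names and the exact update rules (averaged color and normal, smallest observed radius) are illustrative assumptions, not the authors' scheme; the point is that each surfel updates independently and incrementally as new frames arrive.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Surfel:
    """Circular surface patch: position, orientation, color, size, confidence."""
    position: np.ndarray   # 3D center
    normal: np.ndarray     # unit orientation
    color: np.ndarray      # RGB in [0, 1]
    radius: float          # patch size
    weight: float = 1.0    # number of supporting observations

def update_surfel(s, position, normal, color, radius):
    """Fold one new measurement into a surfel as a running weighted average."""
    w = s.weight
    s.position = (w * s.position + position) / (w + 1.0)
    s.color = (w * s.color + color) / (w + 1.0)
    n = w * s.normal + normal
    s.normal = n / np.linalg.norm(n)
    s.radius = min(s.radius, radius)  # keep the finest observed patch size
    s.weight = w + 1.0
    return s
```

Because updates touch one surfel at a time, redundant points from overlapping frames collapse into a single patch, which is how hundreds of millions of raw points reduce to a few million surfels.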

  14. Experiment 1: RGBD-ICP
  • 16 locations marked in the environment (~7 m apart)
  • Camera on a tripod carried between and placed at the markers
  • Error measured between the ground-truth distance and the accumulated distance from frame-to-frame alignment

  15. Experiment 2: Robot Arm
  • Camera held in a WAM arm for ground-truth poses

  16. Map Accuracy: Architectural Drawing

  17. Map Accuracy: 2D Laser Map

  18. Conclusion
  • Kinect-style depth cameras have recently become available as consumer products
  • RGB-D Mapping can generate rich 3D maps using these cameras
  • RGBD-ICP combines visual and shape information for robust frame-to-frame alignment
  • Global consistency is achieved via loop closure detection and optimization (RANSAC, TORO, SBA)
  • Surfels provide a compact map representation

  19. Open Questions
  • Which are the best features to use?
  • How to find more loop closure constraints between frames?
  • What is the right representation (point clouds, surfels, meshes, volumetric, geometric primitives, objects)?
  • How to generate increasingly photorealistic maps?
  • Autonomous exploration for map completeness?
  • Can we use these rich 3D maps for semantic mapping?

  20. Thank You!
