
ROBOT VISION Lesson 1a: Structured Light 3D Reconstruction Matthias Rüther, Christian Reinbacher


Presentation Transcript


  1. ROBOT VISION, Lesson 1a: Structured Light 3D Reconstruction (Matthias Rüther, Christian Reinbacher)

  2. Structured Light Methods
• Goal: robust 3D reconstruction through triangulation
• Project an artificial pattern onto the object
• The pattern alleviates the correspondence problem
• Variants:
  • Laser pattern (point, line)
  • Structured projector pattern (several lines, pattern sequence)
  • Random projector pattern

  3. Structured Light Range Finder
• 1. Sender: projects a plane of light
• 2. Receiver: CCD camera
[Figure: sensor image and triangulation geometry, with depth along the Z direction and the profile along the X direction]
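Because the sender projects a known plane of light, every illuminated pixel can be turned into a 3D point by intersecting its viewing ray with that plane. A minimal sketch in Python, assuming a calibrated camera with intrinsic matrix K and a laser plane given as n·X = d in camera coordinates (all names and values here are illustrative, not taken from the slides):

```python
import numpy as np

def triangulate_laser_plane(pixels, K, plane_n, plane_d):
    """Intersect camera rays through 'pixels' with the laser plane n.X = d.

    pixels  : (N, 2) array of (u, v) stripe coordinates in the image
    K       : (3, 3) camera intrinsic matrix
    plane_n : (3,) laser-plane normal in camera coordinates
    plane_d : plane offset, so points X on the plane satisfy n . X = d
    Returns an (N, 3) array of 3D points in camera coordinates.
    """
    # Back-project each pixel to a viewing ray: X(t) = t * K^-1 [u, v, 1]^T
    uv1 = np.hstack([pixels, np.ones((len(pixels), 1))])
    rays = (np.linalg.inv(K) @ uv1.T).T            # (N, 3) ray directions
    # Ray-plane intersection: n . (t * ray) = d  =>  t = d / (n . ray)
    t = plane_d / (rays @ plane_n)
    return rays * t[:, None]
```

With one such call per captured frame, each image of the stripe yields one object profile, which is exactly the setting of the next slide.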

  4. One plane -> one object profile
• To get a full 3D model, either:
  • Move the object
  • Scan the projected plane with a scanning unit
  • Move the sensor
• Object motion by conveyor belt: requires synchronization, i.e. measuring the distance traveled along the conveyor; the y-accuracy is determined by that distance measurement (a bookkeeping sketch follows below)
• Scanning units (e.g. a rotating mirror) are rare: accurately measuring the mirror motion is hard, and a small inaccuracy there causes a large inaccuracy in the geometry
• Moving the sensor: e.g. railway inspection, with the sensor mounted in a wagon and coupled to a speed measurement
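For the conveyor case, assembling the cloud is pure bookkeeping: each frame contributes an (x, z) profile from triangulation, and the encoder reading at capture time supplies the y coordinate. A minimal sketch with hypothetical array shapes:

```python
import numpy as np

def assemble_point_cloud(profiles, encoder_positions):
    """Stack per-frame (x, z) profiles into one 3D point cloud.

    profiles          : list of (N_i, 2) arrays with (x, z) per frame
    encoder_positions : conveyor distance at each frame's capture time
    The y coordinate of every point is the encoder reading, so the
    y-accuracy is exactly the accuracy of the distance measurement.
    """
    cloud = []
    for xz, y in zip(profiles, encoder_positions):
        xyz = np.column_stack([xz[:, 0], np.full(len(xz), y), xz[:, 1]])
        cloud.append(xyz)
    return np.vstack(cloud)
```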

  5. Commercially Available Scanners
[Figure: application examples: person scanners, cultural heritage, rapid prototyping]

  6. Problems of Laser Profiling
• Occlusions: object points must be visible from both the laser and the camera viewpoint
• Sharpness and contrast: both camera and laser need to be in focus
• Speckle noise: a laser always shows speckle noise, caused by interference of the coherent light; so where exactly is the center of the stripe? (A common sub-pixel remedy is sketched below.)
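A standard remedy for speckle when localizing the stripe is to avoid the per-row brightest pixel, which jumps around with the speckle, and use an intensity-weighted centroid instead (a Gaussian fit is a common alternative). A minimal sketch, assuming a grayscale image containing one roughly vertical stripe; the threshold value is illustrative:

```python
import numpy as np

def stripe_centers(img, threshold=30):
    """Sub-pixel stripe center in every image row.

    Pixels below 'threshold' are ignored; the remaining intensities in
    each row are averaged as weights over the column index, which is far
    more stable under speckle than a per-row argmax.
    Returns an array of x-centers, NaN where no stripe was found.
    """
    img = img.astype(np.float64)
    w = np.where(img >= threshold, img, 0.0)     # masked intensities
    x = np.arange(img.shape[1])
    mass = w.sum(axis=1)
    with np.errstate(invalid="ignore", divide="ignore"):
        centers = (w * x).sum(axis=1) / mass     # weighted centroid
    centers[mass == 0] = np.nan
    return centers
```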

  7. Multiple Sheets of Light
• Project multiple laser planes simultaneously to reduce measurement time
• Problem: separation of the stripes in the image
• Application: smoothness check of flat surfaces

  8. Pattern Projection
[Figure: range image and projected light stripes]
• Camera: IMAG CCD, resolution 750x590, f = 16 mm
• Projector: liquid crystal display (LCD-640), f = 200 mm, distance to object plane: 120 cm

  9. Projector
[Figure: projector structure: lamp, lens system, LCD shutter carrying the pattern, line projector (e.g. LCD-640), focusing lens (e.g. 150 mm); example image]

  10. Depth Decoding
• Project a temporal sequence of n binary masks
• At each pixel, the temporal sequence of intensities (I1, …, In) gives a binary number which denotes the corresponding projector column (a decoding sketch follows below)
• Project → Acquire → Decode → Triangulate
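Decoding is a per-pixel bit-gathering step. A minimal sketch, assuming plain binary masks ordered most-significant bit first and a fixed brightness threshold (real systems usually also project inverted patterns, or use Gray codes, to make the bit decision robust):

```python
import numpy as np

def decode_binary_code(images, on_threshold=128):
    """Decode a temporal sequence of n binary mask images per pixel.

    images : list of n grayscale images, most significant bit first
    Each pixel's on/off sequence (I1, ..., In) is read as an n-bit
    binary number identifying the projector column that lit it.
    """
    column = np.zeros(images[0].shape, dtype=np.int64)
    for img in images:
        bit = (img >= on_threshold).astype(np.int64)
        column = (column << 1) | bit     # append the next bit
    return column
```

With the projector column known at every pixel, the correspondence problem is solved and triangulation proceeds as for a single light plane.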

  11. Coded Light + Phase Shift
• Binary code is limited to pixel accuracy (or less)
• Increase accuracy to sub-pixel by projecting a sine wave after the code and measuring the phase shift between the projected and the captured pattern
• Decode the phase from four samples of the sine period, shifted by π/2

  12. Coded Light + Phase Shift (cont.)
[Figure: decoded code value and wrapped phase (0 to +2π) plotted over the image column x]
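The four-sample decoding follows directly from the pattern model I_k = A + B·cos(φ + k·π/2): the differences I3 − I1 = 2B·sin φ and I0 − I2 = 2B·cos φ cancel the unknown ambient term A and modulation strength B. A minimal sketch (function names are illustrative, and one code step per sine period is assumed):

```python
import numpy as np

def decode_phase(I0, I1, I2, I3):
    """Wrapped phase from four samples shifted by pi/2.

    With I_k = A + B*cos(phi + k*pi/2):
        I3 - I1 = 2*B*sin(phi),   I0 - I2 = 2*B*cos(phi),
    so phi is recovered independently of ambient light A and
    modulation strength B.
    """
    return np.arctan2(I3 - I1, I0 - I2)          # wrapped to (-pi, pi]

def combine_code_and_phase(code, phase):
    """Resolve the 2*pi ambiguity of the sine pattern using the coarse
    binary code (assumption: one code step per sine period)."""
    return 2.0 * np.pi * code + np.mod(phase, 2.0 * np.pi)
```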

  13. Other Coding Methods
• Many other codification strategies are possible; see J. Salvi et al., "Pattern codification strategies in structured light systems", Pattern Recognition, 2004

  14. The Kinect Working Principle
• Triangulation-based depth sensor
• Static pattern projection
• Heavy exploitation of redundancy
• Extremely robust/conservative depth maps

  15. The Sensor System
• IR lens: f ≈ 6 mm, FOV ≈ 55°
• Diffractive optical element (DOE)
• Laser: 830 nm, 60 mW; laser class 3B without optics, class 1 with optics; no amplitude modulation
• IR bandpass filter
• RGB lens: f ≈ 2.9 mm, FOV ≈ 65°
• IR camera: CMOS, rolling shutter, 1.3 MP, 1/2", 10 bit
• RGB camera: CMOS, rolling shutter, 1.3 MP, 1/4", 10 bit
• Peltier element for temperature stabilization
• Stereo processor, microphone array, accelerometer, tilt axis

  16. The Sensor System
• Baseline Tx ≈ 75 mm
• Depth of field: 0.5 m to 8 m
• FOV ≈ 55°
• Output resolution: 640x480 (at most)
• Internal maximum: 1280x1024
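With the baseline Tx and the IR focal length known, depth follows from the standard triangulation relation Z = f·b/d. A quick sanity check with assumed numbers (the focal length in pixels is an estimate for illustration, not a datasheet value):

```python
# Depth from triangulation: Z = f * b / d, with baseline Tx ~ 75 mm.
f_px = 580.0          # assumed IR focal length in pixels (illustrative)
baseline_m = 0.075    # Tx ~ 75 mm from the slide
disparity_px = 14.5   # hypothetical measured disparity
Z = f_px * baseline_m / disparity_px
print(f"Z = {Z:.2f} m")   # -> Z = 3.00 m
```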

  17. The Projection Pattern
• The IR laser and a diffractive optical element create the interference pattern
• The pattern is static and identical for all Kinects
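Because the pattern never changes, depth can be computed by correlating each observed IR patch against a stored reference image of the same pattern captured at a known distance; the horizontal shift with the best correlation score is the disparity. The actual pipeline inside the stereo processor is not public, so this is only a sketch of the idea using normalized cross-correlation:

```python
import numpy as np

def ncc_disparity(patch, ref_strip, max_disp=64):
    """Disparity of one IR patch by a 1D search along a reference strip.

    patch     : (h, w) block from the live IR image
    ref_strip : (h, W) row band from the stored reference pattern image,
                horizontally aligned with the patch at disparity 0
    Returns the shift (in pixels) with the highest normalized
    cross-correlation score.
    """
    h, w = patch.shape
    p = (patch - patch.mean()) / (patch.std() + 1e-9)
    best_d, best_score = 0, -np.inf
    for d in range(min(max_disp, ref_strip.shape[1] - w + 1)):
        ref = ref_strip[:, d:d + w]
        r = (ref - ref.mean()) / (ref.std() + 1e-9)
        score = float((p * r).mean())
        if score > best_score:
            best_d, best_score = d, score
    return best_d
```

Averaging the match over a whole patch is one way the sensor exploits redundancy: a single dot is ambiguous, but a neighborhood of dots from the pseudo-random pattern matches essentially one place in the reference.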
