
Overview of 3D Scanners

This overview provides an explanation of the different types of data captured by 3D scanners, including point, volumetric, and surface data. It also discusses the advantages and disadvantages of each data type and explores related fields and terminology in 3D scanning.

Presentation Transcript


  1. Overview of 3D Scanners Acknowledgement: some content and figures by Brian Curless

  2. 3D Data Types • Point Data • Volumetric Data • Surface Data

  3. 3D Data Types: Point Data • “Point clouds” • Advantage: simplest data type • Disadvantage: no information on adjacency / connectivity

  4. 3D Data Types: Volumetric Data • Regularly-spaced grid in (x,y,z): “voxels” • For each grid cell, store • Occupancy (binary: occupied / empty) • Density • Other properties • Popular in medical imaging • CAT scans • MRI

  5. 3D Data Types: Volumetric Data • Advantages: • Can “see inside” an object • Uniform sampling: simpler algorithms • Disadvantages: • Lots of data • Wastes space if only storing a surface • Most “vision” sensors / algorithms return point or surface data
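
A minimal sketch of the voxel idea above (NumPy assumed; data and names are hypothetical stand-ins for real scans): a regularly-spaced (x, y, z) grid storing one occupancy value per cell.

```python
import numpy as np

# Hypothetical example: a 64^3 binary occupancy grid covering the unit cube.
res = 64
occupancy = np.zeros((res, res, res), dtype=bool)

# Mark the cells touched by some 3D points (random stand-ins for scan data).
points = np.random.rand(1000, 3)
idx = np.clip((points * res).astype(int), 0, res - 1)
occupancy[idx[:, 0], idx[:, 1], idx[:, 2]] = True

# A density grid (e.g. from CT) would store floats per cell instead of booleans.
print(occupancy.sum(), "occupied voxels out of", res ** 3)
```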

  6. 3D Data Types: Surface Data • Polyhedral • Piecewise planar • Polygons connected together • Most popular: “triangle meshes” • Smooth • Higher-order (quadratic, cubic, etc.) curves • Bézier patches, splines, NURBS, subdivision surfaces, etc.

  7. 3D Data Types: Surface Data • Advantages: • Usually corresponds to what we see • Usually returned by vision sensors / algorithms • Disadvantages: • How to find “surface” for translucent objects? • Parameterization often non-uniform • Non-topology-preserving algorithms difficult
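
A minimal sketch of the triangle-mesh representation mentioned above: an array of vertex positions plus an array of triangles indexing into it (NumPy assumed; the tetrahedron data is purely illustrative).

```python
import numpy as np

# A single tetrahedron: 4 vertices, 4 triangular faces.
vertices = np.array([[0., 0., 0.],
                     [1., 0., 0.],
                     [0., 1., 0.],
                     [0., 0., 1.]])
faces = np.array([[0, 2, 1],
                  [0, 1, 3],
                  [0, 3, 2],
                  [1, 2, 3]])          # indices into `vertices`

# Connectivity is explicit, so per-face normals follow from the positions.
v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
normals = np.cross(v1 - v0, v2 - v0)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
print(normals)
```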

  8. 3D Data Types: Surface Data • Implicit surfaces (cf. parametric) • Zero set of a 3D function • Usually regularly sampled (voxel grid) • Advantage: easy to write algorithms that change topology • Disadvantage: wasted space, time
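
A minimal sketch of the implicit-surface idea: sample a 3D function on a voxel grid and treat its zero set as the surface. A sphere's signed distance function stands in for real scan data here (NumPy assumed); a mesher such as marching cubes would extract the zero set as triangles.

```python
import numpy as np

# Sample f(x, y, z) = ||p|| - r on a regular grid; the surface is f == 0.
res = 64
lin = np.linspace(-1.0, 1.0, res)
x, y, z = np.meshgrid(lin, lin, lin, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5     # sphere of radius 0.5

# Inside/outside classification (and hence topology changes) comes for free,
# but the whole grid is stored even though only the zero set matters.
inside = sdf < 0
print(inside.sum(), "voxels inside the surface")
```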

  9. 2½-D Data • Image: stores an intensity / color along each of a set of regularly-spaced rays in space • Range image: stores a depth along each of a set of regularly-spaced rays in space • Not a complete 3D description: does not store objects occluded (from some viewpoint) • View-dependent scene description

  10. 2½-D Data • This is what most sensors / algorithms really return • Advantages • Uniform parameterization • Adjacency / connectivity information • Disadvantages • Does not represent entire object • View dependent

  11. 2½-D Data • Range images • Range surfaces • Depth images • Depth maps • Height fields • 2½-D images • Surface profiles • xyz maps • …
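
Whatever name is used, a range image stores one depth per regularly-spaced viewing ray, so it converts directly into a point cloud by back-projecting each pixel. A minimal sketch (pinhole camera model; the intrinsics below are hypothetical; NumPy assumed):

```python
import numpy as np

def range_image_to_points(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map into an (H*W, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx * depth          # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) / fy * depth
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Example: a flat wall 2 m away seen by a 640x480 camera.
depth = np.full((480, 640), 2.0)
points = range_image_to_points(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(points.shape)   # (307200, 3)
```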

  12. Related Fields • Computer Vision • Passive range sensing • Rarely construct complete, accurate models • Application: recognition • Metrology • Main goal: absolute accuracy • High precision, provable errors more important than scanning speed, complete coverage • Applications: industrial inspection, quality control, as-built models

  13. Related Fields • Computer Graphics • Often want complete model • Low noise, geometrically consistent model more important than absolute accuracy • Application: animated CG characters

  14. Terminology • Range acquisition, shape acquisition, rangefinding, range scanning, 3D scanning • Alignment, registration • Surface reconstruction, 3D scan merging, scan integration, surface extraction • 3D model acquisition

  15. Range Acquisition Taxonomy • Range acquisition • Contact: mechanical (CMM, jointed arm), inertial (gyroscope, accelerometer), ultrasonic trackers, magnetic trackers • Transmissive: industrial CT, ultrasound, MRI • Reflective: non-optical (radar, sonar), optical (expanded on the next slide)

  16. Range Acquisition Taxonomy • Optical methods • Passive: shape from X (stereo, motion, shading, texture, focus, defocus) • Active: active variants of passive methods (stereo w. projected texture, active depth from defocus, photometric stereo), time of flight, triangulation

  17. Touch Probes • Jointed arms with angular encoders • Return position, orientation of tip (figure: Faro Arm – Faro Technologies, Inc.)

  18. Optical Range Acquisition Methods • Advantages: • Non-contact • Safe • Usually inexpensive • Usually fast • Disadvantages: • Sensitive to transparency • Confused by specularity and interreflection • Texture (helps some methods, hurts others)

  19. Stereo • Find feature in one image, search along epipolar line in other image for correspondence
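
Once a correspondence is found, depth follows from the disparity between the two image positions; for rectified cameras the relation is depth = focal length × baseline / disparity. A minimal sketch (values hypothetical):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified two-camera stereo: z = f * b / d."""
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 12 cm baseline, 20 px disparity -> 4.2 m.
print(depth_from_disparity(20.0, 700.0, 0.12))
```

Note that a one-pixel matching error changes the depth more when the disparity is small, which is why a longer baseline improves accuracy (next slides).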

  20. Stereo • Advantages: • Passive • Cheap hardware (2 cameras) • Easy to accommodate motion • Intuitive analogue to human vision • Disadvantages: • Only acquire good data at “features” • Sparse, relatively noisy data (correspondence is hard) • Bad around silhouettes • Confused by non-diffuse surfaces • Variant: multibaseline stereo to reduce ambiguity

  21. Why More Than 2 Views? • Baseline • Too short – low accuracy • Too long – matching becomes hard

  22. Why More Than 2 Views? • Ambiguity with 2 views (figure: cameras 1, 2, 3)

  23. Multibaseline Stereo [Okutomi & Kanade]

  24. Shape from Motion • “Limiting case” of multibaseline stereo • Track a feature in a video sequence • For n frames and f features, have 2nf knowns, 6n+3f unknowns
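
To make the counting argument concrete: each tracked feature contributes a 2D image measurement per frame, while the unknowns are a 6-DOF camera pose per frame plus a 3D position per feature. A quick check with illustrative numbers:

```python
def sfm_counts(n_frames, n_features):
    knowns = 2 * n_frames * n_features        # (u, v) per feature per frame
    unknowns = 6 * n_frames + 3 * n_features  # pose per frame + point per feature
    return knowns, unknowns

# e.g. 30 frames and 100 tracked features: 6000 knowns vs. 480 unknowns,
# so the reconstruction is heavily over-determined.
print(sfm_counts(30, 100))
```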

  25. Shape from Motion • Advantages: • Feature tracking easier than correspondence in far-away views • Mathematically more stable (large baseline) • Disadvantages: • Does not accommodate object motion • Still problems in areas of low texture, in non-diffuse regions, and around silhouettes

  26. Shape from Shading • Given: image of surface with known, constant reflectance under known point light • Estimate normals, integrate to find surface • Problem: ambiguity

  27. Shape from Shading • Advantages: • Single image • No correspondences • Analogue in human vision • Disadvantages: • Mathematically unstable • Can’t have texture • “Photometric stereo” (active method) more practical than passive version

  28. Shape from Texture • Mathematically similar to shape from shading, but uses stretch and shrink of a (regular) texture

  29. Shape from Texture • Analogue to human vision • Same disadvantages as shape from shading

  30. Shape from Focus and Defocus • Shape from focus: at which focus setting is a given image region sharpest? • Shape from defocus: how out-of-focus is each image region? • Passive versions rarely used • Active depth from defocus can be made practical

  31. Active Optical Methods • Advantages: • Usually can get dense data • Usually much more robust and accurate than passive techniques • Disadvantages: • Introduces light into scene (distracting, etc.) • Not motivated by human vision

  32. Active Variants of Passive Techniques • Regular stereo with projected texture • Provides features for correspondence • Active depth from defocus • Known pattern helps to estimate defocus • Photometric stereo • Shape from shading with multiple known lights
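
Photometric stereo, listed above, recovers a per-pixel normal from images taken under several known lights: with Lambertian reflectance, intensity is albedo times (light · normal), so three or more lights give a small linear system per pixel. A minimal sketch on synthetic data for one pixel (NumPy assumed; values hypothetical):

```python
import numpy as np

# Known light directions (rows normalized to unit length), one per image.
L = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
L /= np.linalg.norm(L, axis=1, keepdims=True)

# Synthesize intensities for one pixel with true normal n and albedo rho.
n_true = np.array([0.3, 0.1, 0.95])
n_true /= np.linalg.norm(n_true)
rho = 0.8
I = rho * L @ n_true                        # Lambertian: I_k = rho * (l_k . n)

# Solve L g = I in the least-squares sense; g = rho * n.
g, *_ = np.linalg.lstsq(L, I, rcond=None)
print(np.linalg.norm(g), g / np.linalg.norm(g))   # recovers rho and n
```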

  33. Pulsed Time of Flight • Basic idea: send out pulse of light (usually laser), time how long it takes to return

  34. Pulsed Time of Flight • Advantages: • Large working volume (up to 100 m.) • Disadvantages: • Not-so-great accuracy (at best ~5 mm.) • Requires getting timing to ~30 picoseconds • Does not scale with working volume • Often used for scanning buildings, rooms, archeological sites, etc.
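
Both numbers above follow directly from the speed of light: range is half the round-trip time times c, so millimetre-level accuracy requires picosecond-level timing regardless of how large the working volume is. A quick check (illustrative values):

```python
C = 299_792_458.0   # speed of light, m/s

def range_from_round_trip(t_seconds):
    """Pulsed time of flight: the pulse travels out and back."""
    return C * t_seconds / 2.0

print(range_from_round_trip(667e-9))   # ~100 m target -> ~667 ns round trip
print(2 * 0.005 / C)                   # timing needed for 5 mm: ~3.3e-11 s (~33 ps)
```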

  35. AM Modulation Time of Flight • Modulate a laser at frequency ν_m; it returns with a phase shift Δφ • Note the ambiguity in the measured phase! Range ambiguity of ½ λ_m n (n an unknown integer; λ_m = c / ν_m is the modulation wavelength)

  36. AM Modulation Time of Flight • Accuracy / working volume tradeoff (e.g., noise ~ 1/500 of working volume) • In practice, often used for room-sized environments (cheaper, more accurate than pulsed time of flight)
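
A minimal sketch of the phase-to-range relation from the previous slide, including the half-wavelength ambiguity (values hypothetical; NumPy assumed):

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s

def am_tof_range(phase_rad, mod_freq_hz, n_wraps=0):
    """AM time of flight: range = (phase / 2*pi + n) * lambda_m / 2.

    The measured phase fixes range only modulo half the modulation
    wavelength; n_wraps is the unknown integer ambiguity.
    """
    wavelength = C / mod_freq_hz
    return (phase_rad / (2 * np.pi) + n_wraps) * wavelength / 2.0

# 10 MHz modulation -> lambda_m ~ 30 m, so candidate ranges repeat every ~15 m.
for n in range(3):
    print(am_tof_range(np.pi / 2, 10e6, n))   # ~3.75, ~18.75, ~33.75 m
```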

  37. Triangulation

  38. Triangulation: Moving the Camera and Illumination • Moving independently leads to problems with focus, resolution • Most scanners mount camera and light source rigidly, move them as a unit

  39. Triangulation: Moving the Camera and Illumination

  40. Triangulation: Moving the Camera and Illumination

  41. Triangulation: Extending to 3D • Possibility #1: add another mirror (flying spot) • Possibility #2: project a stripe, not a dot (figure: laser, camera, object)
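
With a projected stripe, each illuminated camera pixel defines a viewing ray, and the 3D point is that ray intersected with the known laser plane. A minimal sketch of the intersection (calibration values hypothetical; NumPy assumed):

```python
import numpy as np

def triangulate_stripe_pixel(u, v, fx, fy, cx, cy, plane_n, plane_d):
    """Intersect the camera ray through pixel (u, v) with the laser plane
    n . X = d, both expressed in the camera frame."""
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])   # ray direction, z = 1
    t = plane_d / np.dot(plane_n, ray)                    # ray parameter
    return t * ray                                        # 3D point

# Example: laser 0.2 m to the side of the camera, plane tilted 30 degrees.
n = np.array([np.cos(np.radians(30)), 0.0, np.sin(np.radians(30))])
d = float(n[0] * 0.2)   # plane passes through the laser origin at (0.2, 0, 0)
print(triangulate_stripe_pixel(400, 240, fx=525.0, fy=525.0, cx=320.0, cy=240.0,
                               plane_n=n, plane_d=d))
```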

  42. Triangulation Scanner Issues • Accuracy proportional to working volume (typical is ~1000:1) • Scales down to small working volume (e.g. 5 cm working volume, 50 µm accuracy) • Does not scale up (baseline too large…) • Two-line-of-sight problem (shadowing from either camera or laser) • Triangulation angle: non-uniform resolution if too small, shadowing if too big (useful range: 15–30°)

  43. Triangulation Scanner Issues • Material properties (dark, specular) • Subsurface scattering • Laser speckle • Edge curl • Texture embossing

  44. Multi-Stripe Triangulation • To go faster, project multiple stripes • But which stripe is which? • Answer #1: assume surface continuity

  45. Multi-Stripe Triangulation • To go faster, project multiple stripes • But which stripe is which? • Answer #2: colored stripes (or dots)

  46. Multi-Stripe Triangulation • To go faster, project multiple stripes • But which stripe is which? • Answer #3: time-coded stripes

  47. Time-Coded Light Patterns • Assign each stripe a unique illumination code over time [Posdamer 82] (figure: stripe codes laid out over time and space)
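
One common realization of time coding is binary or Gray-coded stripes: over log2(N) frames each of N stripe positions sees a unique on/off sequence, which the camera decodes per pixel. A minimal sketch (Gray codes used here for illustration; the exact coding in [Posdamer 82] may differ):

```python
def gray_code(i):
    """Binary-reflected Gray code of integer i."""
    return i ^ (i >> 1)

def stripe_patterns(n_stripes, n_bits):
    """patterns[t][s] is True if stripe s is illuminated in frame t."""
    return [[bool((gray_code(s) >> t) & 1) for s in range(n_stripes)]
            for t in range(n_bits)]

def decode(on_off_sequence):
    """Recover the stripe index from the per-pixel on/off sequence."""
    g = sum(int(bit) << t for t, bit in enumerate(on_off_sequence))
    i = 0
    while g:          # invert the Gray code
        i ^= g
        g >>= 1
    return i

patterns = stripe_patterns(8, 3)
seq = [patterns[t][5] for t in range(3)]   # what a pixel on stripe 5 observes
print(decode(seq))                          # -> 5
```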
