
Automated 3D Model Construction for Urban Environments


Presentation Transcript


  1. Next Generation 4D Distributed Modeling and Visualization Automated 3D Model Construction for Urban Environments Christian Frueh John Flynn Avideh Zakhor University of California, Berkeley June 13, 2002

  2. Presentation Overview • Introduction • Ground based modeling • Mesh processing • Airborne modeling • Aerial photos • Airborne laser scans • 3D Model Fusion • Rendering • Conclusion and Future Work

  3. Introduction Goal: Generate a 3D model of a city for virtual walk/drive/fly-thrus and simulations • Fast • Automated • Photorealistic Needed for fly-thru: 3D model of terrain and building tops & sides, coarse resolution. Needed for walk/drive-thru: 3D model of street scenery & building façades, highly detailed.

  4. Introduction Overview: • Airborne modeling (laser scans/images from plane) → 3D model of terrain and building tops • Ground based modeling (laser scans & images from acquisition vehicle) → 3D model of building façades • Fusion → complete 3D city model

  5. Airborne Modeling Goal: acquisition of terrain shape and top-view building geometry • Available data: aerial photos, airborne laser scans • Texture: from aerial photos • Geometry, 2 approaches: I) stereo matching of photos, II) airborne laser scans

  6. Airborne Modeling Approach I: Stereo Matching (last year) Stereo photo pairs from city/urban areas, ~60% overlap. Semi-automatic: • Manual: segmentation • Automated: camera parameter computation, matching, distortion reduction, model generation

  7. Stereo Matching Stereo pair from downtown Berkeley and the estimated disparity after removing perspective distortions

  8. Stereo Matching Results Downtown Oakland

  9. Airborne Modeling Approach II: Airborne Laser Scans Scanning city from plane • Resolution: 1 scan point/m² • Berkeley: 40 million scan points → point cloud

  10. Airborne Laser Scans • Re-sampling point cloud • Sorting into grid • Filling holes Map-like height field usable for: • Monte Carlo Localization • Mesh Generation
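The re-sampling step above can be sketched as follows; this is a minimal Python sketch that assumes the point cloud is an N×3 NumPy array and uses a keep-the-highest-point rule per cell and neighbour averaging for holes (the slides do not specify the actual gridding and hole-filling rules):

```python
import numpy as np

def points_to_height_field(points, cell=1.0):
    """Resample an airborne laser point cloud (N x 3 array of x, y, z)
    into a regular grid of cell size `cell` metres, keeping the highest
    z per cell, then fill empty cells from their valid neighbours."""
    xy = points[:, :2]
    mins = xy.min(axis=0)
    idx = np.floor((xy - mins) / cell).astype(int)
    nx, ny = idx.max(axis=0) + 1
    grid = np.full((nx, ny), np.nan)
    for (i, j), z in zip(idx, points[:, 2]):
        if np.isnan(grid[i, j]) or z > grid[i, j]:
            grid[i, j] = z
    # fill holes: replace NaN cells with the mean of their valid neighbours
    for i, j in np.argwhere(np.isnan(grid)):
        nb = grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
        vals = nb[~np.isnan(nb)]
        if vals.size:
            grid[i, j] = vals.mean()
    return grid
```

The resulting map-like height field can then be used directly for mesh generation and for localization lookups.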

  11. Textured Mesh Generation 1. Connecting grid vertices to mesh 2. Applying Q-slim simplification 3. Texture mapping: • Semi-automatic • Manual selection of few correspondence points: 10 mins/entire Berkeley • Automated camera pose estimation • Automated computation of texture for mesh

  12. Airborne Model East Berkeley campus with campanile

  13. Airborne Model Downtown Berkeley http://www-video.eecs.berkeley.edu/~frueh/3d/airborne/

  14. Ground Based Modeling Goal: acquisition of highly detailed 3D building façade models • Scanning setup: vertical 2D laser scanner for geometry capture, horizontal scanner for pose estimation • Acquisition vehicle: truck with rack carrying 2 fast 2D laser scanners and a digital camera

  15. (ui, vi, i) (ui-1, vi-1, i-1) … (u2, v2, 2) (u1, v1, 1) Scan Matching & Initial Path Computation Horizontal laser scans: • Continuously captured during vehicle motion • Overlap Relative position estimation by scan-to-scan matching Translation (u,v) Rotation  Adding relative steps (ui, vi, i) t = t0  t = t1 (u, v) path (xi,yi,i) Scan matching 3 DOF pose (x, y, yaw)

  16. 6 DOF Pose Estimation From Images • Scan matching cannot estimate vertical motion • Small bumps and rolls • Slopes in hill areas • Full 6 DOF pose of the vehicle is important; affects: • Future processing of the 3D and intensity data • Texture mapping of the resulting 3D models • Extend initial 3 DOF pose by deriving missing 3 DOF (z, pitch, roll) from images

  17. 6 DOF Pose Estimation From Images Central idea: photo-consistency • Each 3D scan point can be projected into images using initial 3 DOF pose • If pose estimate is correct, point should appear the same in all images • Use discrepancies in projected position of 3D points within multiple images to solve for the full pose

  18. 6 DOF Pose Estimation – Algorithm • 3 DOF pose from scan matching as initial estimate • Project scan points into both images • If not consistent, use image correlation to find the correct projection • RANSAC used for robustness
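The projection at the core of the photo-consistency check can be sketched with a standard pinhole camera model; the intrinsics matrix K here is an assumption for illustration, since the presentation does not give the camera model:

```python
import numpy as np

def project(point, R, t, K):
    """Project a 3D scan point into an image: transform into the camera
    frame with rotation R and translation t, apply intrinsics K, then
    divide by depth (standard pinhole model). Returns pixel (u, v)."""
    p_cam = R @ np.asarray(point) + t
    u, v, w = K @ p_cam
    return np.array([u / w, v / w])
```

If the pose estimate is correct, the same scan point projects to photometrically consistent locations in all images; discrepancies between the projections are what the algorithm minimizes to recover the missing z, pitch, and roll.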

  19. 6 DOF Pose Estimation – Results with 3 DOF pose with 6 DOF pose

  20. 6 DOF Pose Estimation – Results

  21. Monte Carlo Localization (1) Previously: Global 3 DOF pose correction using aerial photography a) path before MCL correction b) path after MCL correction After correction, points fit to edges of aerial image

  22. Monte Carlo Localization (2) Extend MCL to work with airborne laser data and 6 DOF pose Now: No perspective shifts of building tops, no shadow lines • Fewer particles necessary, increased computation speed • Significantly higher accuracy near high buildings and tree areas Use terrain shape to estimate z coordinate of truck • Correct additional DOF for vehicle pose (z, pitch, roll) • Modeling not restricted to flat areas
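One MCL iteration can be sketched as a standard particle filter step; the scoring of a particle against the airborne laser height field is abstracted into `weight_fn`, and the noise levels are illustrative assumptions rather than the values used in the actual system:

```python
import math
import random

def mcl_step(particles, motion, weight_fn, n=None):
    """One Monte Carlo Localization iteration: apply the (noisy) relative
    motion (du, dv, dtheta) to every particle, weight each particle by how
    well its prediction matches the reference map (weight_fn), then
    resample particles in proportion to their weights."""
    n = n or len(particles)
    moved = []
    for x, y, th in particles:
        du, dv, dth = motion
        nx = x + du * math.cos(th) - dv * math.sin(th) + random.gauss(0, 0.1)
        ny = y + du * math.sin(th) + dv * math.cos(th) + random.gauss(0, 0.1)
        moved.append((nx, ny, th + dth + random.gauss(0, 0.01)))
    weights = [weight_fn(p) for p in moved]
    total = sum(weights) or 1.0
    return random.choices(moved, weights=[w / total for w in weights], k=n)
```

Matching against the airborne laser height field instead of aerial imagery avoids perspective shifts of building tops and shadow lines, which is why fewer particles suffice.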

  23. Monte Carlo Localization (3) Track global 3D position of vehicle to correct relative 6 DOF motion estimates Resulting corrected path overlaid with airborne laser height field

  24. Path Segmentation Driving data: 24 mins, 6769 meters, 107,082 vertical scans, ~15 million scan points. Too large to process as one block! Segment path into quasi-linear pieces: • Cut path at curves and empty areas • Remove redundant segments
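The cut-at-curves rule can be sketched as below; the turn threshold is an illustrative parameter (the slides do not give the actual criterion), and the empty-area cut is omitted for brevity:

```python
def segment_path(path, max_turn=0.1):
    """Split a driven path (list of (x, y, theta) poses) into quasi-linear
    pieces by cutting wherever the heading changes by more than max_turn
    radians between consecutive poses. Single-pose fragments are dropped."""
    segments, current = [], [path[0]]
    for prev, cur in zip(path, path[1:]):
        if abs(cur[2] - prev[2]) > max_turn:
            if len(current) > 1:
                segments.append(current)
            current = [cur]
        else:
            current.append(cur)
    if len(current) > 1:
        segments.append(current)
    return segments
```

Each resulting quasi-linear piece is small enough to be turned into a depth image and processed independently.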

  25. Path Segmentation Resulting path segments overlaid with edges of airborne laser height map

  26. Simple Mesh Generation

  27. Simple Mesh Generation Triangulate point cloud → mesh • Problem: side views look “noisy” • Partially captured foreground objects • Erroneous scan points due to glass reflection → Remove foreground: extract façades

  28. Façade Extraction and Processing (1) 1. Transform path segment into a depth image: depth value s(n,v) for each scan point P(n,v) (scan number n, point index v) 2. Histogram analysis over the vertical scans: the dominant peak gives the main depth (building façade); the local minimum in front of it gives the split depth between foreground and background
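The histogram analysis can be sketched for a single vertical scan as follows; walking back from the main peak to the valley floor is one plausible reading of the "local minimum" rule on the slide, and the bin size is an assumption:

```python
import numpy as np

def split_depth(depths, bin_size=0.5):
    """Estimate the foreground/background split depth of one vertical
    scan: histogram the depth values, take the strongest peak as the
    main (facade) depth, then walk back towards the scanner until the
    valley in front of that peak bottoms out. Returns the split depth."""
    edges = np.arange(0.0, depths.max() + bin_size, bin_size)
    hist, edges = np.histogram(depths, bins=edges)
    i = int(hist.argmax())              # main depth bin (facade)
    while i > 0 and hist[i - 1] <= hist[i]:
        i -= 1                          # descend into the local minimum
    return edges[i]
```

Points nearer than the split depth go to the foreground layer (trees, cars), the rest to the background (façade) layer.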

  29. Façade Extraction and Processing (2) 3. Separate depth image into 2 layers: foreground = trees, cars, etc.; background = building façades

  30. Façade Extraction and Processing (3) 4. Process background layer: • Detect and remove invalid scan points • Fill areas occluded by foreground objects by extending geometry from boundaries • Horizontal, vertical, planar interpolation, RANSAC • Apply segmentation • Remove isolated segments • Fill remaining holes in large segments • Final result: “clean” background layer
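The horizontal interpolation used above to fill occluded areas can be sketched in 1-D on a single row of the background depth image; marking invalid points with depth 0 is an assumption for this sketch:

```python
import numpy as np

def fill_occlusions(depth_row, invalid=0.0):
    """Fill occluded spans in one row of the background depth image by
    linearly interpolating between the valid boundary depths on either
    side (a 1-D stand-in for the horizontal interpolation step)."""
    row = depth_row.astype(float)
    valid = row != invalid
    if not valid.any():
        return row
    idx = np.arange(row.size)
    row[~valid] = np.interp(idx[~valid], idx[valid], row[valid])
    return row
```

In the real pipeline this is combined with vertical and planar interpolation plus RANSAC plane fits, so larger occlusions are extended from their boundaries in a geometrically consistent way.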

  31. Façade Extraction – Examples (1) with processing without processing

  32. Façade Extraction – Examples (2) without processing with processing

  33. Façade Extraction – Examples (3) without processing with processing

  34. Façade Processing

  35. Foreground Removal

  36. Mesh Generation Downtown Berkeley

  37. Automatic Texture Mapping (1) Camera calibrated and synchronized with laser scanners Transformation matrix between camera image and laser scan vertices can be computed 1. Project geometry into images 2. Mark occluding foreground objects in image 3. For each background triangle: Search pictures in which triangle is not occluded, and texture with corresponding picture area
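Step 3, the per-triangle image search, can be sketched as below; the projected pixel coordinates and per-image foreground masks are hypothetical precomputed inputs introduced only for this illustration:

```python
import numpy as np

def best_image(tri_px, masks):
    """For one background triangle, return the index of the first image
    in which none of its projected pixels fall on a marked foreground
    object, or None if it is occluded everywhere (a 'texture hole').
    tri_px: per-image list of (row, col) projected pixel coordinates;
    masks:  per-image boolean foreground masks."""
    for k, (px, mask) in enumerate(zip(tri_px, masks)):
        if not any(mask[r, c] for r, c in px):
            return k
    return None
```

Triangles for which this search fails in every image become the "texture holes" handled by the texture synthesis step described later.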

  38. Automatic Texture Mapping (2) Efficient representation: texture atlas • Copy texture of all triangles into one “mosaic” image • Typical texture reduction: factor 8 to 12
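A texture atlas of this kind can be assembled with a simple greedy shelf packer; this sketch only assigns patch offsets, and the actual atlas layout used in the system is not described on the slide:

```python
def shelf_pack(sizes, atlas_w):
    """Greedy shelf packing: place per-triangle texture patches of size
    (w, h) into an atlas of fixed width atlas_w, left to right, opening
    a new shelf (row) whenever a patch does not fit. Returns the (x, y)
    offset of each patch and the total atlas height used."""
    x = y = shelf_h = 0
    offsets = []
    for w, h in sizes:
        if x + w > atlas_w:          # open a new shelf
            y += shelf_h
            x = shelf_h = 0
        offsets.append((x, y))
        x += w
        shelf_h = max(shelf_h, h)
    return offsets, y + shelf_h
```

Packing all patches into one image reduces texture memory and the number of texture binds at render time, which is where the factor 8 to 12 reduction pays off.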

  39. Automatic Texture Mapping (3) Large foreground objects: some of the filled-in triangles are not visible in any image! → “texture holes” in the atlas. Texture synthesis (preliminary): • Mark holes corresponding to non-textured triangles in the atlas • Search the image for areas matching the hole boundaries • Fill the hole by copying the missing pixels from these areas

  40. Automatic Texture Mapping (4) Texture holes marked / texture holes filled

  41. Automatic Texture Mapping (5)

  42. Ground Based Modeling - Results Façade models of downtown Berkeley

  43. Ground Based Modeling - Results Façade models of downtown Berkeley

  44. Model Fusion Goal: fuse the ground based (façade) model and the airborne model into one single model • Registration of models • Combining the registered meshes

  45. Registration of Models Models are already registered with each other via Monte Carlo Localization! Remaining question: which model to use where?

  46. Preparing Ground Based Models Intersect path segments with each other; remove degenerate, redundant triangles in overlapping areas (original mesh → redundant triangles removed)

  47. Preparing Airborne Model Ground based model has 5-10 times higher resolution • Remove facades in airborne model where ground based geometry is available • Add ground based façades • Fill remaining gaps with a “blend mesh” to hide model transitions

  48. Preparing Airborne Model Initial airborne model

  49. Preparing Airborne Model Remove facades where ground based geometry is available

  50. Combining Models Add ground based façade models
