
Three Dimensional Model Construction for Visualization


Presentation Transcript


  1. Three Dimensional Model Construction for Visualization Avideh Zakhor Video and Image Processing Lab University of California at Berkeley avz@eecs.berkeley.edu

  2. Outline • Goals and objectives • Previous work by PI • Directions for future work

  3. Goals and Objectives • Develop a framework for fast, automatic and accurate 3D model construction for objects, scenes, rooms, buildings (interior and exterior), urban areas, and cities. • Models must be easy to compute, compact to represent and suitable for high quality view synthesis and visualization • Applications: Virtual or augmented reality fly-throughs.

  4. Previous Work on Scene Modeling • Full/Assisted 3-D Modeling: Kanade et al.; Koch et al.; Becker & Bove; Debevec et al.; Faugeras et al.; Malik & Yu. • Mosaics and Panoramas: Szeliski & Kang; McMillan & Bishop; Shum & Szeliski • Layered/LDI Representations: Wang & Adelson; Sawhney & Ayer; Weiss; Baker et al. • View Interpolation/IBR/Light Fields: Chen & Williams; Chang & Zakhor; Laveau & Faugeras; Seitz & Dyer; Levoy & Hanrahan

  5. Previous Work on Building Models • Nevatia (USC): multi-sensor integration • Teller (MIT): spherical mosaics on a wheelchair-sized rover, known 6 DOF • Van Gool (Belgium): roof detection from aerial photographs • Peter Allen (Columbia): images and laser range finders; view/sensor planning. • Faugeras (INRIA)

  6. Previous Work on City Modeling • Planet 9: • Combines ground photographs with existing city maps manually. • UCLA Urban Simulation Team: • Uses MultiGen to create models from aerial photographs, together with ground video for texture mapping. • Bath and London models by Univ. of Bath: • Combines aerial photographs with existing maps. • All approaches are slow and labor intensive.

  7. Work at the VIP Lab at UCB: scene modeling and reconstruction.

  8. Multi-Valued Representation: MVR • Level k has k occluding surfaces • Form multivalued array of depth and intensity
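As a rough illustration of the data structure described on this slide (not code from the project), the Python sketch below keeps one pair of depth/intensity arrays per occlusion level, with level 0 as the front-most surface. The class and method names are placeholders.

```python
import numpy as np

class MVR:
    """Illustrative multi-valued representation: for each pixel of a
    reference view, store one (depth, intensity) sample per occlusion
    level; level 0 is the front-most surface, level k the k-th surface
    occluded behind it."""

    def __init__(self, height, width):
        self.shape = (height, width)
        self.levels = []          # list of (depth, intensity) array pairs

    def add_level(self):
        # NaN marks pixels with no surface at this level.
        depth = np.full(self.shape, np.nan)
        intensity = np.full(self.shape, np.nan)
        self.levels.append((depth, intensity))
        return depth, intensity

    def front_most(self):
        """Collapse all levels into a single-valued depth/intensity image."""
        depth = np.full(self.shape, np.nan)
        intensity = np.full(self.shape, np.nan)
        for d, i in reversed(self.levels):    # deeper levels first, level 0 wins
            mask = ~np.isnan(d)
            depth[mask] = d[mask]
            intensity[mask] = i[mask]
        return depth, intensity
```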

  9. Observations

  10. Imaging Geometry (1) • Planar translation

  11. Imaging Geometry (2) • Circular/orbital motion
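A minimal sketch of the two capture geometries on slides 10 and 11, assuming a standard pinhole camera model. The function names, the choice of world frame, and the "look at the origin" convention for orbital motion are my assumptions, not details from the slides.

```python
import numpy as np

def pose_planar_translation(t):
    """Planar translation: rotation stays at identity, the camera center
    slides by t = (tx, ty) within the motion plane z = 0 (assumed)."""
    R = np.eye(3)
    C = np.array([t[0], t[1], 0.0])
    return R, C

def pose_orbital(theta, radius):
    """Circular/orbital motion: the camera sits on a circle of the given
    radius and always looks at the origin, where the object is assumed to be."""
    C = np.array([radius * np.cos(theta), 0.0, radius * np.sin(theta)])
    z = -C / np.linalg.norm(C)            # viewing direction, toward origin
    up = np.array([0.0, 1.0, 0.0])
    x = np.cross(up, z); x /= np.linalg.norm(x)
    y = np.cross(z, x)
    R = np.stack([x, y, z])               # rows: world-to-camera rotation
    return R, C

def project(K, R, C, X):
    """Pinhole projection of a world point X: x ~ K R (X - C)."""
    x = K @ (R @ (X - C))
    return x[:2] / x[2]
```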

  12. Dense Depth Estimation • Estimate camera motion • Compute depth maps to build MVRs • Low-contrast regions are problematic for dense depth estimation. • Enforce spatial coherence to achieve realistic, high-quality visualization.
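The slide does not spell out the estimation algorithm itself; the following is a deliberately simple, brute-force multiframe stereo sketch (test a set of depth hypotheses per pixel against every other frame and keep the hypothesis with the lowest photometric cost). It assumes known intrinsics K, known poses, grayscale 8-bit images, and a reference camera at the world origin, and it is illustrative only, not the lab's method.

```python
import numpy as np

def multiframe_depth(ref_img, imgs, poses, K, depth_candidates):
    """Brute-force multiframe stereo: for each reference pixel, back-project
    at each candidate depth, re-project into every other frame, and keep the
    depth with the lowest summed absolute intensity difference.
    poses[i] = (R_i, C_i) maps world coordinates into camera i."""
    H, W = ref_img.shape
    Kinv = np.linalg.inv(K)
    best_cost = np.full((H, W), np.inf)
    best_depth = np.zeros((H, W))
    for d in depth_candidates:
        cost = np.zeros((H, W))
        for img, (R, C) in zip(imgs, poses):
            for v in range(H):
                for u in range(W):
                    X = d * (Kinv @ np.array([u, v, 1.0]))   # back-project
                    x = K @ (R @ (X - C))                    # re-project
                    if x[2] <= 0:
                        cost[v, u] += 255.0                  # behind camera
                        continue
                    u2 = int(round(x[0] / x[2]))
                    v2 = int(round(x[1] / x[2]))
                    if 0 <= u2 < W and 0 <= v2 < H:
                        cost[v, u] += abs(float(ref_img[v, u]) - float(img[v2, u2]))
                    else:
                        cost[v, u] += 255.0                  # out of view
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_depth[better] = d
    return best_depth
```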

  13. Block Diagram for Dense Depth Estimation • Planar approximation of depth for low-contrast regions.
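One plausible reading of "planar approximation of depth for low-contrast regions" is a least-squares plane fitted to the reliable depths around such a region; the sketch below assumes exactly that and is not taken from the project's block diagram.

```python
import numpy as np

def fit_depth_plane(us, vs, depths):
    """Least-squares plane d(u, v) = a*u + b*v + c fitted to reliable depth
    samples, e.g. pixels on the border of a low-contrast region."""
    A = np.stack([us, vs, np.ones_like(us)], axis=1).astype(float)
    coeffs, *_ = np.linalg.lstsq(A, depths.astype(float), rcond=None)
    return coeffs                                  # (a, b, c)

def fill_low_contrast(depth, region_mask, border_mask):
    """Replace depths inside a low-contrast region by the fitted plane."""
    vs, us = np.nonzero(border_mask)
    a, b, c = fit_depth_plane(us, vs, depth[vs, us])
    vr, ur = np.nonzero(region_mask)
    out = depth.copy()
    out[vr, ur] = a * ur + b * vr + c
    return out
```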

  14. Original Sequences “Mug” sequence (13 frames) “Teabox” sequence (102 frames)

  15. Low-Contrast Regions • Complete tracking (Mug sequence, Tea-box sequence)

  16. Multiframe Depth Estimation Apply an iterative estimation algorithm to enforce piecewise smoothness without smoothing over depth discontinuities.
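One common way to enforce piecewise smoothness while preserving depth discontinuities is to average each depth only with neighbors whose depth is close. The sketch below uses that heuristic with illustrative parameters; it is not necessarily the iterative algorithm the slide refers to.

```python
import numpy as np

def piecewise_smooth(depth, iters=50, jump=0.1):
    """Iteratively average each depth with its 4-neighbors, but only with
    neighbors whose depth differs by less than `jump`, so smoothing stops at
    depth discontinuities instead of blurring across them.
    (np.roll wraps at the image border; ignored here for brevity.)"""
    d = depth.astype(float).copy()
    for _ in range(iters):
        acc = d.copy()
        cnt = np.ones_like(d)
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            shifted = np.roll(np.roll(d, dy, axis=0), dx, axis=1)
            ok = np.abs(shifted - d) < jump
            acc += np.where(ok, shifted, 0.0)
            cnt += ok
        d = acc / cnt
    return d
```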

  17. Multiframe Depth Estimation • Mug and Tea-box sequences: Multiframe Stereo + Low-Contrast Processing + Piecewise Smoothing

  18. Multivalued Representation • Project depths to reference coordinates
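"Project depths to reference coordinates" could be sketched as follows, assuming known intrinsics K, a per-frame pose (R_i, C_i), and a reference camera at the world origin with identity rotation; binning the returned samples into MVR levels is omitted. This is an illustrative reading, not the project's implementation.

```python
import numpy as np

def project_to_reference(depth_i, intensity_i, K, R_i, C_i):
    """Back-project every pixel of frame i using its depth, move the 3D point
    into the reference camera frame, and re-project it. Returns per-pixel
    (u, v, depth, intensity) samples in reference coordinates, ready to be
    accumulated into the levels of a multivalued representation."""
    H, W = depth_i.shape
    Kinv = np.linalg.inv(K)
    samples = []
    for v in range(H):
        for u in range(W):
            z = depth_i[v, u]
            if not np.isfinite(z):
                continue
            X_cam = z * (Kinv @ np.array([u, v, 1.0]))     # point in camera i
            X_world = R_i.T @ X_cam + C_i                  # camera i -> world
            x = K @ X_world                                # world == reference
            if x[2] > 0:
                samples.append((x[0] / x[2], x[1] / x[2], x[2], intensity_i[v, u]))
    return samples
```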

  19. Multivalued representation for frame 4 (Level 0) Results (1) • Mug sequence

  20. Multivalued representation for frame 4 (Level 1) Results • Mug sequence

  21. Multivalued representation for frame 4 (Combining Levels 0 and 1) Results • Mug sequence

  22. Results • Mug sequence Reconstructed sequence Arbitrary flythrough

  23. Results (2) • Teabox sequence Multivalued representation for frame 22 (Intensity, Level 0)

  24. Results • Teabox sequence Multivalued representation for frame 22 (Depth, Level 0)

  25. Results • Teabox sequence Multivalued representation for frame 22 (Intensity, Level 1)

  26. Results • Teabox sequence Multivalued representation for frame 22 (Depth, Level 1)

  27. Results • Teabox sequence Multivalued representation for frame 22 (Intensity, combining Levels 0 and 1)

  28. Results • Teabox sequence Multivalued representation for frame 22 (Depth, combining Levels 0 and 1)

  29. Results • Teabox sequence Multivalued representation for frame 86 (Intensity, Level 0)

  30. Results • Teabox sequence Multivalued representation for frame 86 (Depth, Level 0)

  31. Results • Teabox sequence Multivalued representation for frame 86 (Intensity, Level 1)

  32. Results • Teabox sequence Multivalued representation for frame 86 (Depth, Level 1)

  33. Results • Teabox sequence Multivalued representation for frame 86 (Intensity, combining Levels 0 and 1)

  34. Results • Teabox sequence Multivalued representation for frame 86 (Depth, combining Levels 0 and 1)

  35. Multiple MVRs • Perform view interpolation with many MVRs
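View interpolation with many MVRs presumably renders the virtual view from the nearest MVRs and blends the results. A minimal blending sketch follows, using an angular viewpoint parameter as an assumed proxy for viewpoint distance and NaN pixels as holes; all names are illustrative.

```python
import numpy as np

def blend_views(render_a, render_b, theta, theta_a, theta_b):
    """Blend two renderings of the same virtual view produced from two
    different MVRs (e.g. ones built at reference frames 22 and 86), weighted
    by how close the virtual viewpoint angle theta is to each reference."""
    w = float(np.clip((theta_b - theta) / (theta_b - theta_a), 0.0, 1.0))
    out = w * render_a + (1.0 - w) * render_b
    # Where only one rendering covers a pixel, take it from that rendering.
    only_a = np.isnan(render_b) & ~np.isnan(render_a)
    only_b = np.isnan(render_a) & ~np.isnan(render_b)
    out[only_a] = render_a[only_a]
    out[only_b] = render_b[only_b]
    return out
```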

  36. Results: multiple MVRs • Teabox sequence Reconstructed sequence from MVR86 Reconstructed sequence from MVR22

  37. Results: Multiple MVRs Reconstructed sequence Arbitrary flyaround

  38. Extensions • Complex scenes with many “levels” (e.g., trees, leaves) are difficult to model with MVR. • Difficult to ensure realistic visualization from all angles; need to plan the capture process carefully. • Tradeoff between CG polygon modeling and IBR: • Use both in real visualization databases. • Build polygon models from MVR.

  39. Issues for model construction • Choice of geometry for obtaining data • Choice of imaging technology. • Choice of representation. • Choice of models. • Dealing with time varying scenes.

  40. Extensions • So far, we have addressed the “outside in” problem: • The camera looked inward to “scan” the object. • Future work will focus on the “inside out” problem: • Modeling a room or office. • Modeling the exterior or interior of a building. • Modeling an urban environment, e.g., a city.

  41. Strategy • Use: • Range sensors, position sensors (GPS), gyros (orientation), omni camera, video. • Existing datasets: 3D CAD models, digital elevation maps (DEM), DTED, city maps, architectural drawings: a priori information

  42. Modeling interior of buildings • Leverage existing work in the computer graphics group at UCB: • 3D model of Soda hall available from the “soda walkthrough” project. • 3D model built out of architectural drawings • Use additional video, and laser range finder input to • Enhance the details of the 3D model: furniture, etc • Add texture maps for photo-realistic walk-throughs.

  43. City Modeling • Develop a framework for modeling parts of the city of San Francisco: • Use aerial photographs provided by Space Imaging Corp. (1-ft resolution). • Use digitized city maps. • Use a ground data collection vehicle to collect range and intensity video from a panoramic camera, annotated with 6 DOF parameters. • Derive data fusion algorithms to process the above in a speedy, automated, and accurate fashion.

  44. Requirements • Automation (little or no interaction needed from human operators) • Speed: must scale with large areas and large data sets. • Accuracy • Robustness to location of data collection. • Ease of data collection. • Representation suitable to hierarchical visualization databases.

  45. Relationship to Others • USC: accurate tracking and registration algorithms needed for model construction. • Syracuse: uncertainty processing and data fusion for model construction. • Georgia Tech: How to combine CG polygonal model building with IBR models in the visualization database? How can visualization databases deal with photo-realistic rendering?

  46. Conclusions • Fast, accurate, and automatic model construction is essential to mobile augmented reality systems. • Our goal is to provide photo-realistic rendering of objects, scenes, buildings, and cities, to enable visualization, navigation, and interaction.
