
Recovering Geometric, Photometric and Kinematic Properties from Images


Presentation Transcript


  1. Recovering Geometric, Photometric and Kinematic Properties from Images Jitendra Malik Computer Science Division University of California at Berkeley Work supported by ONR, Interval Research, Rockwell, MICRO, NSF, JSEP

  2. Physics of Image Formation
     • Lighting
     • BRDFs
     • Shape and spatial layout
     • Internal DOFs
     Together, these scene properties determine the images.

  3. Solving inverse problems requires models
     • Define suitable parametric models for geometry, lighting, BRDFs, and kinematics.
     • Recover the model parameters using optimization techniques (a minimal sketch follows below).
     • Humans are better at selecting models; computers are better at recovering parameters.
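To make the parameter-recovery step concrete, here is a minimal sketch of fitting a parametric model to observations by nonlinear least squares. The forward model, parameter names, and synthetic data are illustrative assumptions, not the talk's actual formulation:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical forward model: a stand-in for a real image-formation
# model, mapping a small parameter vector theta to predicted observations.
def forward_model(theta, inputs):
    a, b = theta
    return a * inputs + b

def residuals(theta, inputs, observed):
    # How far the model's predictions fall from what the images show.
    return forward_model(theta, inputs) - observed

rng = np.random.default_rng(0)
inputs = np.linspace(0.0, 1.0, 50)
observed = 2.0 * inputs + 0.5 + 0.01 * rng.standard_normal(50)  # synthetic data

fit = least_squares(residuals, x0=np.array([1.0, 0.0]), args=(inputs, observed))
print(fit.x)  # recovered parameters, close to (2.0, 0.5)
```

The division of labor matches the slide: a human chooses the parametric form of `forward_model`; the optimizer recovers its parameters from data.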

  4. But there will always be unmodeled detail…
     • Models are always approximate.
     • Adding more parameters doesn’t help; data will be insufficient to recover these parameters.

  5. Hybrid Approaches are best!
     • ANALYSIS: use images to recover a subset of object parameters. These are chosen judiciously so that they can be recovered robustly.
     • SYNTHESIS: render using appropriately selected images or subimages, transformed using the model.

  6. Talk Outline
     • Geometry: Debevec, Taylor and Malik, SIGGRAPH 96
     • Photometry: Yu and Malik, SIGGRAPH 98; Debevec and Malik, SIGGRAPH 97
     • Kinematics: Bregler and Malik, CVPR 98

  7. Modeling and Rendering Architecture from Photographs
     George Borshukov, Yizhou Yu, Paul Debevec, Camillo Taylor, Jitendra Malik
     Computer Vision Group, Computer Science Division, University of California at Berkeley

  8. Overview
     • Photogrammetric Modeling: allows the user to construct a parametric model of the scene directly from photographs.
     • Model-Based Stereo: recovers additional geometric detail through stereo correspondence.
     • View-Dependent Texture-Mapping: renders each polygon of the recovered model using a linear combination of the three nearest views.

  9. Our Modeling Method
     • The user represents the scene as a collection of blocks.
     • The computer solves for the sizes and positions of the blocks according to user-supplied edge correspondences (see the sketch below).
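A minimal sketch of this step, assuming a known pinhole camera: block dimensions are adjusted so that projected model edge endpoints land near user-marked image features. The block parameterization, point-to-point residual, and all names here are simplifying assumptions; the actual system measures distances from projected model edges to user-marked edge lines across multiple photographs.

```python
import numpy as np
from scipy.optimize import least_squares

def project(P, X):
    """Pinhole projection of 3D points X (N, 3) by a 3x4 camera matrix P."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]

def block_edge_points(params):
    """Hypothetical block parameterization: one box with width, height,
    depth. Returns the 3D endpoints of a few of its edges."""
    w, h, d = params
    return np.array([[0.0, 0.0, 0.0], [w, 0.0, 0.0],
                     [w, h, 0.0], [w, h, d]])

def residuals(params, P, marked):
    # Image-plane distance between projected model edge endpoints
    # and the user-marked points.
    return (project(P, block_edge_points(params)) - marked).ravel()

P = np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])   # toy camera
true_params = np.array([2.0, 1.0, 3.0])
marked = project(P, block_edge_points(true_params))  # stands in for user clicks

fit = least_squares(residuals, x0=np.ones(3), args=(P, marked))
print(fit.x)  # approaches (2.0, 1.0, 3.0)
```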

  10. [Figure: block model, user-marked edges, and recovered model]

  11. Arc de Triomphe: modeled from five photographs by George Borshukov

  12. Surfaces of Revolution: Taj Mahal, modeled from one photograph by G. Borshukov

  13. [Figure: photograph, recovered model, and synthetic view]

  14. Recovering Additional Detail with Model-Based Stereo
     • Scenes will have geometric detail not captured in the model.
     • This detail can be recovered automatically through model-based stereo.

  15. [Figure: scene with geometric detail vs. approximate block model]

  16. Model-Based Stereo
     • Given a key and an offset image:
       • Project the offset image onto the model.
       • View the model through the key camera → warped offset image (see the sketch below).
     • Stereo becomes feasible between the key and warped offset images because:
       • Disparities are small.
       • Foreshortening is greatly reduced.
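A sketch of the warping step for a single planar model facet, assuming calibrated cameras; the full system projects onto every facet of the block model, and all names here are illustrative. For a plane n^T X = d in offset-camera coordinates, the induced homography H = K_key (R + t n^T / d) K_off^{-1} maps offset-image pixels into the key view:

```python
import numpy as np
import cv2

def warp_offset_to_key(offset_img, K_off, K_key, R, t, n, d):
    """Warp the offset image into the key view via the homography induced
    by one planar model facet, n^T X = d in offset-camera coordinates.
    (R, t) maps offset-camera coordinates into key-camera coordinates."""
    H = K_key @ (R + np.outer(t, n) / d) @ np.linalg.inv(K_off)
    h_px, w_px = offset_img.shape[:2]
    return cv2.warpPerspective(offset_img, H, (w_px, h_px))
```

Because the model already accounts for the gross geometry, the remaining disparities between the key and warped offset images are small, which is what makes the subsequent stereo matching tractable.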

  17. [Figure: key image, offset image, warped offset image, disparity map]

  18. Synthetic Views of Refined Model: four images composited with view-dependent texture mapping

  19. Rendering with View-Dependent Texture Mapping
     • Triangulate the view hemisphere.
     • For each polygon, determine which images viewed it and from which angles.
     • Label each triangle vertex with the image that best views the polygon (a selection sketch follows below).
     [Diagram: view hemisphere with numbered viewpoints 1–5]
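A minimal sketch of the labeling step, under the assumption that "best views" means the camera whose unit viewing direction on the hemisphere is most aligned with the direction from which the polygon is seen; the names are illustrative:

```python
import numpy as np

def best_view(polygon_dir, camera_dirs):
    """Index of the image whose unit viewing direction is most aligned
    with the direction from which the polygon is seen (cosine score)."""
    return int(np.argmax(camera_dirs @ polygon_dir))
```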

  20. Rendering with View-Dependent Texture Mapping
     • To render, determine to which triangle of the view hemisphere the viewpoint belongs.
     • Compute barycentric weights for the triangle vertices (see the sketch after this list).
     • Render the polygon with a weighted average of the three vertex images.
     [Diagram: view hemisphere with numbered viewpoints 1–5]
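To make the blending concrete, here is a minimal sketch of computing the barycentric weights of the viewpoint inside its hemisphere triangle; the 2D parameterization and the image names are assumptions for illustration:

```python
import numpy as np

def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of viewpoint p inside triangle (a, b, c),
    all 2D points in the hemisphere's parameter space. The three weights
    sum to 1 and serve as blend factors for the three vertex images."""
    T = np.column_stack((b - a, c - a))
    u, v = np.linalg.solve(T, p - a)
    return np.array([1.0 - u - v, u, v])

# Weighted average of the three vertex images (hypothetical arrays):
# w = barycentric_weights(viewpoint, v1, v2, v3)
# rendered = w[0] * img1 + w[1] * img2 + w[2] * img3
```

Because the weights vary smoothly as the viewpoint moves across the hemisphere, the rendered texture transitions smoothly between source photographs.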

  21. The Campanile (Debevec et al.)
     • 20 photographs used
     • Approx. 1–2 weeks of modeling time
     • Real-time rendering

  22. Recovered Campus Model: Campanile + 40 buildings
