Explore the forefront of image-based rendering in computer graphics, as taught at the Department of Computer Science and Engineering, IIT Delhi. This overview covers the transition from geometry to images, object-space modeling, and depth techniques. Learn about practical applications such as QuickTime VR and layered depth images, and the importance of occlusion determination. Delve into the details of capturing radiance, camera positioning, and rendering light fields with multi-camera arrays. Discover the advantages and disadvantages of these rendering methods.
CSL 859: Advanced Computer Graphics
Dept of Computer Sc. & Engg., IIT Delhi
Image-Based Rendering
• So far:
  • Geometry -> images
  • Object-space model, even volumetric
• Image-based rendering:
  • Image -> another image
  • Zoom, pan, etc.
  • Just image processing?
Images with Depth
• QuickTime VR:
  • 2D panoramic photograph
  • Spin around, zoom in and out
  • Can add objects closer to the viewer
• Tour into the Picture:
  • Assign depth to parts of the image
  • Objects can even be added behind occluders in the image
• Layered depth images
Image-Based Rendering
• Store an image from every conceivable view
  • Rendering would reduce to a database query
  • Full generality demands an infinite-sized database
• Could store enough images instead:
  • Given a desired viewpoint (view matrix)
  • Choose a saved image from a view near the desired one (see the sketch below)
  • Warp the image
  • Or interpolate from nearby known viewpoints
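A minimal sketch of the "choose a nearby saved view" step, assuming stored views are given as camera positions and unit forward vectors; the function name, the combined distance score, and the weight `w` are illustrative choices, not from the slides:

```python
import numpy as np

def nearest_view(desired_pos, desired_dir, view_positions, view_dirs, w=1.0):
    """Pick the stored view closest to the desired viewpoint.

    Scores each stored camera by Euclidean distance to the desired
    position plus a weighted penalty for viewing-direction mismatch.
    """
    pos_dist = np.linalg.norm(view_positions - desired_pos, axis=1)
    # 1 - cos(angle) between forward vectors: 0 when aligned, 2 when opposite
    dir_dist = 1.0 - view_dirs @ desired_dir
    return int(np.argmin(pos_dist + w * dir_dist))

# Example: three stored cameras on a line, all looking down -z
positions = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0]])
dirs = np.tile(np.array([0.0, 0, -1]), (3, 1))
print(nearest_view(np.array([0.9, 0, 0]), np.array([0.0, 0, -1]),
                   positions, dirs))  # -> 1
```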
Warp x1 to x2 + Correspondence
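The warp from a reference pixel x1 to its location x2 in the desired image is presumably McMillan's planar image-warping equation; reconstructed here under assumed notation (not from the slides): pinhole projection matrices P1, P2, centers of projection C1, C2, and generalized disparity δ(x1) at the reference pixel:

```latex
x_2 \;\doteq\; \delta(x_1)\, P_2^{-1}\left(C_1 - C_2\right) \;+\; P_2^{-1} P_1\, x_1
```

Here ≐ denotes equality up to scale: the first term is the parallax contribution (scaled by per-pixel disparity), and the second is the homography that maps points at infinity between the two views.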
General 3D Warp [Courtesy L. McMillan]
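A minimal forward 3D-warp sketch under assumed pinhole intrinsics K and camera-from-world poses (R, t); all names are illustrative. Each reference pixel is unprojected with its depth, moved into the target camera's frame, and reprojected (splatting and hole-filling are omitted):

```python
import numpy as np

def forward_warp_points(depth, K1, R1, t1, K2, R2, t2):
    """Reproject reference pixels (with depth) into a target camera.

    depth: (H, W) depth along the reference camera's z-axis.
    K*: 3x3 intrinsics; R*, t*: camera-from-world rotation/translation.
    Returns (H, W, 2) pixel coordinates in the target image.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3xN
    # Unproject to reference-camera space, then to world space
    cam1 = np.linalg.inv(K1) @ pix * depth.reshape(1, -1)
    world = R1.T @ (cam1 - t1[:, None])
    # Project into the target camera
    cam2 = R2 @ world + t2[:, None]
    proj = K2 @ cam2
    return (proj[:2] / proj[2]).T.reshape(H, W, 2)
```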
Occlusion Determination
• Project the desired center-of-projection onto the reference image
• Draw towards the projected point (see the sketch below)
  • Guarantees painter's ordering
  • Independent of the scene's contents
  • Generalizes to non-planar viewing surfaces
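A sketch of this McMillan-style occlusion-compatible ordering, assuming the desired center-of-projection has already been projected into reference-image coordinates (`epipole`); the quadrant split and function name are illustrative:

```python
import numpy as np

def occlusion_compatible_order(width, height, epipole, towards=True):
    """Enumerate pixels of the reference image in painter's order.

    The projected desired center-of-projection splits the image into up
    to four quadrants; each quadrant is rasterized moving toward the
    epipole (or away from it, if the desired COP lies behind the
    reference image plane), so later pixels may safely overwrite
    earlier ones regardless of scene content.
    """
    ex, ey = epipole
    cols, rows = np.arange(width), np.arange(height)
    order = []
    for ch in (cols[cols < ex], cols[cols >= ex]):
        for rh in (rows[rows < ey], rows[rows >= ey]):
            # Sort each half so traversal moves toward the epipole
            cs = sorted(ch, key=lambda c: -abs(c - ex))
            rs = sorted(rh, key=lambda r: -abs(r - ey))
            if not towards:            # move away from the epipole instead
                cs, rs = cs[::-1], rs[::-1]
            order.extend((c, r) for r in rs for c in cs)
    return order
```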
Radiances in a Scene
• Account for all rays
  • Origin: 3 dimensions
  • Direction: 2 dimensions
• The space of rays is 5-dimensional
Panorama
• All rays from a single point
Plenoptic Function
• All rays from all points:
  p = P(θ, φ, x, y, z, λ, t)
• Viewing direction (θ, φ), position (x, y, z), wavelength λ, time t
[Courtesy L. McMillan]
Radiances in a Scene II
• Account for all rays
  • Origin: 3 dimensions
  • Direction: 2 dimensions
  • The space of rays is 5-dimensional
• Radiance is constant along a ray (in free space)
  • Reduces to a 4-dimensional space
  • Subject to occlusion
Capturing Radiances
• Capture images from many places
  • Camera positioning
• Parameterize the 4D space
  • Camera position and 2D image?
• Sample the 4D space
  • Coverage and sampling uniformity
  • Aliasing
  • Too much data
Representing Scene Radiance
• Like a texture map, except the ray origin is not fixed
• Both source and destination of the ray vary (see the sketch below):
  • 2 coordinates (u, v) for the ray origin
  • 2 coordinates (s, t) for the ray destination
[Two-plane (u, v)-(s, t) parameterization diagram; light field: Hanrahan & Levoy]
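A minimal sketch of the two-plane parameterization, assuming the uv-plane sits at z = 0 and the st-plane at z = 1 (plane placement and names are illustrative): intersecting a ray with both planes yields its 4D light-field coordinates.

```python
import numpy as np

def ray_to_uvst(origin, direction, z_uv=0.0, z_st=1.0):
    """Map a ray to two-plane light-field coordinates (u, v, s, t).

    Intersects the ray with the uv-plane (z = z_uv) and the st-plane
    (z = z_st); assumes the ray is not parallel to the planes.
    """
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    tu = (z_uv - o[2]) / d[2]      # ray parameter at the uv-plane
    ts = (z_st - o[2]) / d[2]      # ray parameter at the st-plane
    u, v = o[:2] + tu * d[:2]
    s, t = o[:2] + ts * d[:2]
    return u, v, s, t

# Example: a ray from (0.2, 0.1, -1) heading straight down +z
print(ray_to_uvst([0.2, 0.1, -1.0], [0.0, 0.0, 1.0]))  # (0.2, 0.1, 0.2, 0.1)
```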
Sampling Coverage
• With four slabs, the (r, θ) ray space is well covered (for an outside-looking-in case)
[Figure: (r, θ) coverage plots over ray source and ray target]
Stanford Multi-camera Array
• 640 × 480 pixels × 30 fps × 128 cameras (raw data rate worked out below)
• Synchronized timing
• Continuous streaming
• Flexible arrangement
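To see why "too much data" is a real constraint, a back-of-the-envelope rate for this array, assuming uncompressed 24-bit RGB (3 bytes per pixel, an assumption not stated in the slides):

```latex
640 \times 480 \times 3\,\mathrm{B} \times 30\,\mathrm{fps} \times 128 \approx 3.5\,\mathrm{GB/s}
```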
Rendering of Light Fields
• For each pixel (x, y):
  • Compute the ray
  • Map it to (u, v, s, t)
  • Look up the "4D" texture (stored as many 2D textures)
  • Quadrilinear interpolation (see the sketch below)
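A minimal quadrilinear-lookup sketch, assuming the light field is held as a dense 4D array indexed as L[u, v, s, t], with query coordinates already scaled to index units (the storage layout and names are illustrative, not the paper's implementation):

```python
import numpy as np

def lightfield_lookup(L, u, v, s, t):
    """Quadrilinearly interpolate a 4D light-field sample L[u, v, s, t].

    Blends the 16 nearest stored samples, weighting each by the product
    of its per-axis proximity to the query point.
    """
    coords = np.array([u, v, s, t])
    lo = np.floor(coords).astype(int)
    lo = np.clip(lo, 0, np.array(L.shape) - 2)   # keep lo and lo+1 in range
    f = coords - lo                               # fractional offsets per axis
    out = 0.0
    for corner in range(16):                      # 2^4 corners of the 4D cell
        bits = [(corner >> i) & 1 for i in range(4)]
        w = np.prod([f[i] if b else 1 - f[i] for i, b in enumerate(bits)])
        out += w * L[tuple(lo + bits)]
    return out

# Example: a tiny 2x2x2x2 light field; the center query averages all samples
L = np.arange(16, dtype=float).reshape(2, 2, 2, 2)
print(lightfield_lookup(L, 0.5, 0.5, 0.5, 0.5))  # -> 7.5
```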
Good and Bad
• Advantages:
  • Simpler computation than traditional CG
  • Cost independent of scene complexity
  • Cost independent of material properties and other optical effects
• Disadvantages:
  • Static geometry
  • Fixed lighting
  • High storage cost