This presentation explores treating images as first-class primitives in computer graphics. By letting images serve as both input and output, it discusses methods for mixing real and synthetic scenes freely. It emphasizes capturing essential scene attributes (shape, movement, lighting, and surface BRDF) while addressing challenges such as occlusion and depth estimation. Techniques such as light probes, images with depth, and texture mapping are surveyed to improve rendering results. Ultimately, the goal is to capture comprehensive scene data, extend photography, and confront the inherent complexities of image capture and scene rendering.
CS 395: Adv. Computer Graphics Image-Based Modeling and Rendering Jack Tumblin jet@cs.northwestern.edu
GOAL: First-Class Primitive • Want images as 'first-class' primitives • Useful as BOTH input and output • Convert to/from traditional scene descriptions • Want to mix real & synthetic scenes freely • Want to extend photography • Easily capture scene: shape, movement, surface/BRDF, lighting … • Modify & Render the captured scene data • "You can't always get what you want" – (Mick Jagger, 1968)
Back To Basics: Scene & Image Light + 3D Scene: Illumination, shape, movement, surface BRDF, … 2D Image: Collection of rays through a point Image Plane I(x,y) Position (x,y) Angle (θ,φ)
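A minimal sketch (not from the slides) of the "collection of rays through a point" view: mapping a pixel (x, y) on the image plane to the ray direction through the center of projection. The field of view, resolution, and function name are illustrative assumptions.

```python
import numpy as np

def pixel_to_ray(x, y, width, height, fov_y_deg=60.0):
    """Map pixel (x, y) to a unit ray direction through the center of
    projection (camera at origin, looking down -z, y up).
    All parameters here are illustrative, not values from the lecture."""
    aspect = width / height
    tan_half = np.tan(np.radians(fov_y_deg) / 2.0)
    # normalized image-plane coordinates, pixel centers at +0.5
    px = (2.0 * (x + 0.5) / width - 1.0) * aspect * tan_half
    py = (1.0 - 2.0 * (y + 0.5) / height) * tan_half
    d = np.array([px, py, -1.0])
    return d / np.linalg.norm(d)

# every pixel I(x, y) records the radiance arriving along one such ray
print(pixel_to_ray(320, 240, 640, 480))
```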
Trad. Computer Graphics Light + 3D Scene: Illumination, shape, movement, surface BRDF, … 2D Image: Collection of rays through a point Reduced, Incomplete Information Image Plane I(x,y) Position (x,y) Angle (θ,φ)
Trad. Computer Vision Light + 3D Scene: Illumination, shape, movement, surface BRDF, … 2D Image: Collection of rays through a point !TOUGH! 'ILL-POSED' Many simplifications, external knowledge… Image Plane I(x,y) Position (x,y) Angle (θ,φ)
Plenoptic Function (Adelson, Bergen '91) • For a given scene, describe: • ALL rays through • ALL pixels, of • ALL cameras, at • ALL wavelengths, • ALL time F(x, y, z, θ, φ, λ, t) "Eyeballs Everywhere" function (7-D!)
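As a rough sketch of the idea, the 7-D "eyeballs everywhere" function is just a function of position, direction, wavelength, and time; any real capture only samples it sparsely. The stand-in scene below (a single fading point light) is purely an illustrative assumption.

```python
import numpy as np

def plenoptic(x, y, z, theta, phi, wavelength, t):
    """F(x, y, z, theta, phi, lambda, t): radiance seen at position (x, y, z),
    looking in direction (theta, phi), at wavelength lambda and time t.
    The 'scene' here is a made-up point light that fades over time."""
    view_dir = np.array([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])
    to_light = np.array([0.0, 0.0, 5.0]) - np.array([x, y, z])
    to_light /= np.linalg.norm(to_light)
    alignment = max(0.0, float(view_dir @ to_light))  # brightest when looking at the light
    return alignment * np.exp(-t)

# one sample of the 7-D function
print(plenoptic(0.0, 0.0, 0.0, 0.3, 1.0, 550e-9, 0.5))
```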
A Big Plenoptic Question: An image traps only a partial scene description… • Computer Vision problem: 3D -> 2D • Image point ↔ scene surface point (usually) • Occlusion hides some scene surfaces • (BRDF * irradiance) tough to split apart! ? Does the Plenoptic fcn. contain the full scene ? • Exhaustive record of all image rays • Even the SIMPLEST scene is huge, redundant: • the 'consequences' of all possible renderings
'Scene' causes Light Field Light field: holds all outgoing light rays Shape, Position, Movement, Emitted Light, Reflected & Scattered Light … BRDF, Texture, Scattering Scene modulates outgoing light; the light field captures it all.
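A toy sketch of what "holds all outgoing light rays" looks like in storage. The two-plane (u, v, s, t) parameterization (as in Levoy & Hanrahan's light field work) is an assumption here, not something stated on the slide, and the array contents are placeholders.

```python
import numpy as np

# A toy 4-D light field L[u, v, s, t]: (u, v) indexes one plane (e.g. camera
# positions), (s, t) the other (image plane). Resolutions are placeholders.
U = V = 8
S = T = 32
light_field = np.random.rand(U, V, S, T)   # stand-in for captured data

def query_ray(u, v, s, t):
    """Nearest-neighbor lookup of the radiance along the ray that passes
    through (u, v) on one plane and (s, t) on the other."""
    return light_field[int(round(u)), int(round(v)),
                       int(round(s)), int(round(t))]

print(query_ray(3.2, 4.7, 15.9, 10.1))
```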
A Big Plenoptic Question: An image traps only a partial scene description • Many-to-one map: 3D -> 2D • Occlusion hides some scene features • (BRDF * irradiance) tough to split! • Limited resolution ? Does the Plenoptic fcn. contain the full scene ? !NO! Two options for IBMR methods: • Find a limited subset of scene info, or • Use MORE than plenoptic function data (vary lights, etc.)
8-to-10-Dimensional Ideal? Light field (4D) + light sources (4D) + time + … Emitted Light; Shape, Position, Movement; Reflected & Scattered Light; BRDF, Texture, Scattering
It gets worse… A 'Circular problem': Surface Normal ↔ BRDF ↔ Shape ↔ Irradiance (each estimate depends on the others) PLUS! depth-of-focus, sampling, indirect illumination…
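A tiny numerical illustration (not from the slides) of why (BRDF * irradiance) is tough to split: for a diffuse surface, scaling the albedo down and the irradiance up by the same factor produces exactly the same pixel, so a single image cannot tell the two explanations apart.

```python
# Observed pixel value for a diffuse surface: pixel = albedo * irradiance.
# Two very different explanations of the same observation:
albedo_a, irradiance_a = 0.8, 1.0    # bright paint, dim light
albedo_b, irradiance_b = 0.2, 4.0    # dark paint, bright light

pixel_a = albedo_a * irradiance_a
pixel_b = albedo_b * irradiance_b
assert abs(pixel_a - pixel_b) < 1e-12   # identical images, different scenes
print(pixel_a, pixel_b)
```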
Practical IBMR What useful partial solutions are possible? • Texture Maps++: • Image(s)+Depth: (3D shell) • Estimating Depth & Silhouettes • ‘Light Probe’ measures real-world light • Light control measures BRDF • Hybrids: BTF, stitching, …
Texture Maps ++ Re-use rendering results: 'Impostors', 'Billboards', '3D sprites' • Render a portion of the scene as a texture • Apply it to a mesh or a plane oriented toward the C.O.P. (center of projection) • Replace it if the eyepoint changes too much
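A sketch of the "replace if eyepoint changes too much" rule, using an angular-error test between the viewpoint the impostor was rendered from and the current one. The threshold value and function names are illustrative assumptions, not part of the lecture.

```python
import numpy as np

def impostor_valid(render_eye, current_eye, impostor_center, max_angle_deg=2.0):
    """Return True while the impostor texture can still be reused.
    render_eye: eyepoint the texture was rendered from.
    current_eye: eyepoint now. impostor_center: point the impostor stands in for.
    max_angle_deg is an illustrative tolerance."""
    v_old = np.asarray(render_eye, float) - np.asarray(impostor_center, float)
    v_new = np.asarray(current_eye, float) - np.asarray(impostor_center, float)
    cos_a = v_old @ v_new / (np.linalg.norm(v_old) * np.linalg.norm(v_new))
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return angle <= max_angle_deg

print(impostor_valid([0, 0, 10], [0.1, 0, 10], [0, 0, 0]))   # True: keep reusing
print(impostor_valid([0, 0, 10], [5, 0, 10], [0, 0, 0]))     # False: re-render
```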
Images + Depth • 1 Image + Depth: a 'thin shell' • Reprojection (well known); Z-buffers can help • McMillan '95: 4-way raster ordering ensures depth order • Problem: 'holes', occlusion, matching • Multiple Images: • LDI, LDI trees for multiresolution • Limitations: • Presumes a diffuse-only environment • Depth capture is tough: laser TOF reflectometer, manual scanner, structured light, or …
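A minimal sketch of reprojecting one image-plus-depth into a new camera (forward 3-D warping). The intrinsics, pose convention, and toy values are assumptions, and the hole/occlusion handling listed above (e.g. a z-buffer or McMillan's ordering) is deliberately omitted.

```python
import numpy as np

def reproject(depth, K, R, t, K_new, R_new, t_new):
    """Forward-warp the pixels of a depth image into a new view.
    depth: HxW metric depths for the source camera (K, R, t).
    Returns the corresponding pixel coordinates in the target camera.
    Splatting and occlusion handling are omitted for brevity."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # 3xN
    rays = np.linalg.inv(K) @ pix                        # back-project to camera rays
    pts_cam = rays * depth.reshape(1, -1)                # scale each ray by its depth
    pts_world = R.T @ (pts_cam - t.reshape(3, 1))        # source camera -> world
    pts_new = K_new @ (R_new @ pts_world + t_new.reshape(3, 1))
    return (pts_new[:2] / pts_new[2]).T.reshape(h, w, 2)  # target pixel coords

# toy example: identity source pose, target camera shifted along x
K = np.array([[500.0, 0.0, 32.0], [0.0, 500.0, 24.0], [0.0, 0.0, 1.0]])
depth = np.full((48, 64), 2.0)
warped = reproject(depth, K, np.eye(3), np.zeros(3),
                   K, np.eye(3), np.array([0.1, 0.0, 0.0]))
print(warped[24, 32])
```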
Shape Problems: Correspondence Can you find ray intersections? Or ray depth? Ray colors might not match for non-diffuse materials (BRDF)
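A sketch (not from the slides) of one correspondence step: given two matched rays from two views, estimate the 3-D point as the midpoint of their closest approach. As the slide warns, for non-diffuse materials the matched pixel colors may still disagree even when the geometry is right.

```python
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Midpoint of closest approach between rays o + s*d (d unit length).
    Matched pixels in two views give such a ray pair; their pseudo-intersection
    is the depth estimate. Parallel (degenerate) rays are not handled here."""
    o1, d1 = np.asarray(o1, float), np.asarray(d1, float)
    o2, d2 = np.asarray(o2, float), np.asarray(d2, float)
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = o1 - o2
    denom = a * c - b * b
    s = (b * (d2 @ w) - c * (d1 @ w)) / denom
    u = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((o1 + s * d1) + (o2 + u * d2))

d2 = np.array([-0.5, 0.0, 1.0])
d2 /= np.linalg.norm(d2)
print(triangulate([0, 0, 0], [0, 0, 1], [1, 0, 0], d2))   # ~ [0, 0, 2]
```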
Estimating Depth, Silhouettes Mildly new IBMR methods can help… • Sparse, manual image correspondences (Debevec, Seitz) • Video sequences with camera motion tracking • Image (silhouette)-based Visual Hulls, 'voxel carving' (VIDEO!) Mostly a classic Computer Vision problem: • Epipolar geometry: reduces the search for correspondences • Global & local tracking & alignment methods…
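A sketch of how epipolar geometry reduces the correspondence search from 2-D to 1-D: given the fundamental matrix F between two views, a pixel x in image 1 can only match points on the line l' = F x in image 2. The matrix below is a placeholder, not real calibration data.

```python
import numpy as np

def epipolar_line(F, x, y):
    """Epipolar line (a, b, c) in image 2, i.e. a*x' + b*y' + c = 0,
    for pixel (x, y) in image 1, given fundamental matrix F.
    The match for (x, y) is searched only along this line."""
    l = F @ np.array([x, y, 1.0])
    return l / np.linalg.norm(l[:2])   # normalize so point-line distance is in pixels

F = np.array([[0.0,  -1e-6,  1e-3],    # placeholder fundamental matrix
              [1e-6,  0.0,  -2e-3],
              [-1e-3, 2e-3,  0.1]])
print(epipolar_line(F, 320, 240))
```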
Light Probe: Irradiance Estimate • Place mirrored ball in scene, • Photograph (careful! High contrast image!) • Map position on sphere to incoming angle, intensity to irradiance. • Repeat where illumination changes greatly (in shadows, etc.) Uses: -mixing real & synthetic objects (Ward 96) -separating reflectance & illum (Yu 97)
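A sketch of the "map position on sphere to incoming angle" step: each pixel on the mirrored ball reflects the viewing ray into some world direction, so its intensity samples the light arriving from that direction. The orthographic-camera assumption and function name are mine, not from the slides.

```python
import numpy as np

def probe_pixel_to_direction(px, py):
    """Map a point on the mirrored-ball image (px, py in [-1, 1] over a
    unit-radius ball, viewed orthographically along -z) to the world direction
    the reflected light came from: reflect the view ray about the sphere normal."""
    r2 = px * px + py * py
    if r2 > 1.0:
        return None                                     # pixel is off the ball
    normal = np.array([px, py, np.sqrt(1.0 - r2)])      # sphere normal at that pixel
    view = np.array([0.0, 0.0, -1.0])                   # ray from camera toward ball
    incoming = view - 2.0 * (view @ normal) * normal    # mirror reflection
    return incoming / np.linalg.norm(incoming)

print(probe_pixel_to_direction(0.0, 0.0))   # ball center: light from behind the camera
print(probe_pixel_to_direction(0.5, 0.0))   # off-center: light from the side
```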
Light Control Methods Form estimates of surface properties (BRDF vs. position) by moving the camera, the light source, or both. • Carefully control the incoming light direction (light stages, whirling banks of lights, etc.) • Establish surface geometry (before or during capture) • Sort pixels by incoming/outgoing surface angles • Scattered data interpolation to get the BRDF.
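A sketch of the last two steps above: tabulate measured pixel values by their (incoming, outgoing) angles, then interpolate between samples to evaluate the BRDF at unmeasured angles. Nearest-neighbor lookup and the tiny table layout are simplifying assumptions standing in for a proper scattered-data interpolant.

```python
import numpy as np

# Each row: (theta_in, theta_out, phi_diff, measured value) from one controlled
# light/camera pose; a real capture would have thousands of these.
samples = np.array([
    [0.1, 0.1, 0.0, 0.31],
    [0.5, 0.3, 1.2, 0.12],
    [1.0, 0.9, 3.0, 0.05],
])

def brdf_estimate(theta_in, theta_out, phi_diff):
    """Crude 'scattered data interpolation': return the measured value whose
    angles lie closest to the query (placeholder for a smoother interpolant)."""
    q = np.array([theta_in, theta_out, phi_diff])
    dists = np.linalg.norm(samples[:, :3] - q, axis=1)
    return samples[np.argmin(dists), 3]

print(brdf_estimate(0.45, 0.35, 1.0))   # -> 0.12, the nearest measured sample
```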
Conclusion • Very active area • Heavy overlap with computer vision: be careful not to re-invent & re-name! • Compute-intensive, but easily parallelized; applies graphics hardware to broader problems.