SIGGRAPH 2000 Course on Image-Based Surface Details
Texture-Mapping Real Scenes from Photographs
Yizhou Yu
Computer Science Division, University of California at Berkeley
Basic Steps
• Acquire photographs
• Recover geometry
• Align photographs with geometry
• Map photographs onto geometry
Camera Pose Estimation
• Input: known geometry recovered from photographs or laser range scanners, and a set of photographs taken with a camera
• Output: for each photograph, the 3 rotation and 3 translation parameters of the camera with respect to the geometry
• Requirement: 4 correspondences between each photograph and the geometry
Recover Camera Pose with Known Correspondences
• Least-squares solution
• Needs a good initial estimate from human interaction
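A minimal sketch of this least-squares pose recovery, assuming a simple pinhole model with a single known focal length and a centered principal point. The function names, the axis-angle parameterization, and the use of scipy's least_squares are illustrative choices, not the course's exact implementation; the initial estimate (e.g., supplied interactively) seeds the optimization.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def reprojection_residuals(params, pts3d, pts2d, focal):
        # params: 3 rotation (axis-angle) + 3 translation parameters
        R = Rotation.from_rotvec(params[:3]).as_matrix()
        t = params[3:]
        cam = pts3d @ R.T + t                     # world -> camera coordinates
        proj = focal * cam[:, :2] / cam[:, 2:3]   # pinhole projection
        return (proj - pts2d).ravel()

    def estimate_pose(pts3d, pts2d, focal, init_params):
        # Nonlinear least squares over the 6 pose parameters; without a good
        # initial estimate it can converge to a wrong local minimum.
        result = least_squares(reprojection_residuals, init_params,
                               args=(pts3d, pts2d, focal))
        return result.x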
Recover Rotation Parameters Only from Known Correspondences
• Constraints
• Least-squares solution
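The constraints themselves were given on the slide's figure; one standard way to solve a rotation-only least-squares problem of this kind is the closed-form orthogonal Procrustes (Kabsch) solution over paired directions, sketched below under the assumption that each correspondence yields a world-frame direction and a camera-frame direction. The function name and inputs are hypothetical.

    import numpy as np

    def rotation_from_direction_pairs(world_dirs, cam_dirs):
        # Least-squares R minimizing sum ||R w_i - c_i||^2 over paired unit
        # directions, solved in closed form with an SVD (orthogonal Procrustes).
        W = world_dirs / np.linalg.norm(world_dirs, axis=1, keepdims=True)
        C = cam_dirs / np.linalg.norm(cam_dirs, axis=1, keepdims=True)
        H = W.T @ C                                  # 3x3 correlation matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
        return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # maps world dirs to camera dirs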
Obtaining Correspondences
• Feature detection in 3D geometry and 2D images
• Human interaction: interactively pick corresponding points in photographs and 3D geometry
• Automatic search: combinatorial search
Automatic Search for Correspondences
• Pose estimation using calibration targets
• Combinatorial search for the best match
• 4 correspondences per image
(Figure: 3D calibration targets)
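A brute-force version of that combinatorial search, assuming the known 3D targets and detected 2D features are numpy arrays and that solve_pose and reproj_error are supplied (for example, the least-squares solver sketched earlier plus a score that sums each target's distance to its nearest detected feature). Practical implementations prune the search; this sketch only shows its structure.

    import numpy as np
    from itertools import combinations, permutations

    def best_correspondence(targets3d, features2d, solve_pose, reproj_error):
        # Try every assignment of 4 detected 2D features to 4 of the known 3D
        # calibration targets; keep the pose whose reprojection error over all
        # targets is smallest.
        best_err, best_pose, best_match = np.inf, None, None
        for quad3d in combinations(range(len(targets3d)), 4):
            for quad2d in permutations(range(len(features2d)), 4):
                pose = solve_pose(targets3d[list(quad3d)], features2d[list(quad2d)])
                err = reproj_error(pose, targets3d, features2d)
                if err < best_err:
                    best_err, best_pose, best_match = err, pose, (quad3d, quad2d)
        return best_pose, best_match, best_err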
Camera Pose Results
• Accuracy: consistently within 2 pixels
(Figure: texture-mapping a single image)
Texture Mapping
• Conventional texture-mapping with texture coordinates
• Projective texture-mapping
Texture Map Synthesis I
• Conventional texture-mapping with texture coordinates
• Create a triangular texture patch for each triangle
• The texture patch is a weighted average of the image patches from multiple photographs
• Pixels close to image boundaries or viewed at a grazing angle receive smaller weights
(Figure: photograph, 3D triangle, texture map)
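One plausible form for those blend weights, shown as a sketch: the weight falls to zero at the image boundary and at grazing viewing angles, and the per-triangle patches sampled from each photograph are averaged with normalized weights. The specific falloff functions and argument names are assumptions.

    import numpy as np

    def blend_weight(centroid_uv, image_size, tri_normal, view_dir):
        # Smaller weight near the image boundary and at grazing viewing angles.
        w, h = image_size
        u, v = centroid_uv                                    # projected triangle centroid (pixels)
        border = min(u, v, w - u, h - v) / (0.5 * min(w, h))  # 0 at the edge, ~1 at the center
        grazing = max(0.0, -float(np.dot(tri_normal, view_dir)))  # cosine of the viewing angle
        return max(border, 0.0) * grazing

    def blend_patches(patches, weights):
        # Weighted average of the image patches sampled from each photograph.
        weights = np.asarray(weights, dtype=float)
        weights = weights / weights.sum()
        return np.tensordot(weights, np.stack(patches), axes=1)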
Texture Map Synthesis II
• Allocate space for texture patches from texture maps
• A generalization of memory allocation to 2D
• Quantize patch edge lengths to a power of 2
• Sort texture patches into decreasing order of size and use a first-fit strategy to allocate space
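A sketch of that allocation step, assuming square texture maps of a fixed size and approximating the 2D allocator with simple shelves inside each map. The power-of-two quantization and decreasing-size, first-fit ordering follow the slide; the shelf layout and names are illustrative.

    import math

    def next_pow2(x):
        # Smallest power of two >= x
        return 1 << max(0, math.ceil(math.log2(max(x, 1))))

    def try_place(shelves, w, h, map_size):
        # Put the patch on the first shelf that is tall enough and has room.
        for shelf in shelves:
            if shelf['h'] >= h and shelf['x'] + w <= map_size:
                x, y = shelf['x'], shelf['y']
                shelf['x'] += w
                return (x, y)
        # Otherwise open a new shelf below the last one, if the map has room.
        y = shelves[-1]['y'] + shelves[-1]['h'] if shelves else 0
        if y + h <= map_size:
            shelves.append({'y': y, 'h': h, 'x': w})
            return (0, y)
        return None

    def first_fit_pack(patch_sizes, map_size=1024):
        # Quantize patch edges to powers of two, sort by decreasing area, and
        # place each patch into the FIRST texture map with space (first-fit).
        order = sorted(range(len(patch_sizes)),
                       key=lambda i: patch_sizes[i][0] * patch_sizes[i][1],
                       reverse=True)
        maps, placement = [], {}
        for i in order:
            w, h = (next_pow2(s) for s in patch_sizes[i])
            for m, shelves in enumerate(maps):
                pos = try_place(shelves, w, h, map_size)
                if pos is not None:
                    placement[i] = (m,) + pos
                    break
            else:
                maps.append([])                      # open a new texture map
                placement[i] = (len(maps) - 1,) + try_place(maps[-1], w, h, map_size)
        return placement, len(maps)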
Texture-Mapping and Object Manipulation
(Figures: original configuration and a novel configuration)
Texture Map Compression I
• The size of each texture patch is determined by the amount of color variation on its corresponding triangle in the photographs
• An edge detector (the derivative of a Gaussian) is used as the metric for variation
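A sketch of how such a variation metric could drive the patch size, using scipy's Gaussian-derivative filters on a grayscale patch; the mapping from response magnitude to a power-of-two edge length (scale, minimum, maximum) is an invented placeholder.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def variation_metric(patch, sigma=1.0):
        # Mean derivative-of-Gaussian gradient magnitude over the patch.
        gx = gaussian_filter(patch, sigma, order=(0, 1))
        gy = gaussian_filter(patch, sigma, order=(1, 0))
        return float(np.mean(np.hypot(gx, gy)))

    def patch_edge_length(patch, min_size=4, max_size=64, scale=256.0):
        # Map the variation measure to a power-of-two edge length: nearly flat
        # triangles get tiny patches, detailed ones get larger patches.
        target = float(np.clip(variation_metric(patch) * scale, min_size, max_size))
        return 2 ** int(round(math.log2(target))) if (math := __import__('math')) else None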
Texture Map Compression II
• Reuse texture patches: map the same patch to multiple 3D triangles with similar color variations
• K-means clustering generates texture patch representatives
• A larger penalty along triangle edges reduces Mach band artifacts
(Figure: 3D triangles and texture map)
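A minimal k-means over flattened patches, with boundary pixels up-weighted so that cluster representatives agree along shared triangle edges. Grayscale patches, the weighting scheme, and the fixed iteration count are simplifying assumptions, not the course's exact procedure.

    import numpy as np

    def kmeans_patches(patches, k, edge_weight=3.0, iters=20, seed=0):
        # patches: array of shape (n, h, w). Pixels on the patch boundary get a
        # larger weight so representatives match well along shared triangle
        # edges, reducing visible Mach bands between neighboring triangles.
        n, h, w = patches.shape
        weights = np.ones((h, w))
        weights[0, :] = weights[-1, :] = weights[:, 0] = weights[:, -1] = edge_weight
        W = np.sqrt(weights).ravel()
        X = patches.reshape(n, -1) * W                 # edge-weighted feature vectors
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(n, size=k, replace=False)]
        for _ in range(iters):
            d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = d.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        representatives = (centers / W).reshape(k, h, w)
        return labels, representatives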
Synthetic Images with Compressed and Uncompressed Texture Maps
(Figures: compressed, 5 texture maps; uncompressed, 20 texture maps)
Projective Texture-Mapping
• The original photographs can be used directly as textures
• Visibility processing is more complicated
• Projective texture-mapping is implemented in hardware, so real-time rendering is possible
• View-dependent effects can be added using the hardware accumulation buffer
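For the hardware path, each photograph's camera induces a 4x4 texture matrix taking world-space points to texture coordinates; a sketch of that matrix is below, assuming 4x4 projection and view matrices for the camera. In an OpenGL-style pipeline such a matrix is applied through eye-linear texture-coordinate generation and the texture matrix (or a shader).

    import numpy as np

    def projective_texture_matrix(cam_proj, cam_view):
        # World-space point -> [0,1]^2 texture coordinates of the photograph:
        # scale/bias from NDC [-1,1] to [0,1], composed with the camera's
        # projection and view matrices.
        bias = np.array([[0.5, 0.0, 0.0, 0.5],
                         [0.0, 0.5, 0.0, 0.5],
                         [0.0, 0.0, 0.5, 0.5],
                         [0.0, 0.0, 0.0, 1.0]])
        return bias @ cam_proj @ cam_view

    def project_to_texture(M, point_world):
        # Homogeneous projective divide gives the (s, t) lookup coordinates.
        p = M @ np.append(point_world, 1.0)
        return p[:2] / p[3]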
Motivation for Visibility Processing: Artifacts Caused by Hardware
• Texture gets projected onto occluded and backfacing polygons
(Figure: camera, image, and geometry)
Visibility Algorithms
• Image-space algorithms: shadow buffer, ray casting
• Object-space algorithms: Weiler-Atherton
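A shadow-buffer style test from the image-space family, as a sketch: given a depth map rendered from the photograph's camera, a surface point receives texture only if its own depth matches the stored depth at its projected pixel. The centered principal point, depth convention, and tolerance are assumptions.

    import numpy as np

    def visible_in_photo(point_cam, depth_map, focal, eps=1e-2):
        # Project the point into the photograph and compare its depth with the
        # depth map rendered from that camera; the point is occluded if
        # something closer was recorded at that pixel.
        x, y, z = point_cam                      # point in camera coordinates
        if z <= 0:
            return False                         # behind the camera
        h, w = depth_map.shape
        u = int(round(focal * x / z + w / 2))
        v = int(round(focal * y / z + h / 2))
        if not (0 <= u < w and 0 <= v < h):
            return False                         # projects outside the image
        return z <= depth_map[v, u] + eps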
A Hybrid Visibility Algorithm
• Occlusion testing in image space using Z-buffer hardware: render polygons with their identifiers encoded as colors, then retrieve the occluding polygons' identifiers from the color buffer
• Object-space shallow clipping to generate fewer polygons
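The identifier-as-color trick amounts to packing polygon ids into RGB, rendering once with the Z-buffer enabled, and reading the frame buffer back. A sketch of the packing and read-back, assuming 8 bits per channel and a color_buffer array of shape (height, width, 3):

    import numpy as np

    def id_to_color(poly_id):
        # Pack a polygon identifier into an 8-bit-per-channel RGB color so the
        # Z-buffered rendering records the frontmost polygon at every pixel.
        return ((poly_id >> 16) & 0xFF, (poly_id >> 8) & 0xFF, poly_id & 0xFF)

    def color_to_id(rgb):
        # Recover the polygon identifier from a color read back from the buffer.
        r, g, b = rgb
        return (int(r) << 16) | (int(g) << 8) | int(b)

    def visible_polygon_ids(color_buffer):
        # Every distinct color in the rendered buffer corresponds to a polygon
        # that is at least partly visible from this camera.
        pixels = color_buffer.reshape(-1, 3)
        return {color_to_id(px) for px in np.unique(pixels, axis=0)}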
Visibility Processing Results
(Figures: the tower; the rest of the campus)