Exploitation of 3D Video Technologies

Presentation Transcript


  1. Exploitation of 3D Video Technologies Takashi Matsuyama Graduate School of Informatics, Kyoto University 12th International Conference on Informatics Research for Development of Knowledge Society Infrastructure (ICKS’04)

  2. Outline • Introduction • 3D Video Generation • Deformable Mesh Model • Texture Mapping Algorithm • Editing and Visualization System • Conclusion

  3. Introduction • PC cluster for real-time active 3D object shape reconstruction

  4. 3D Video Generation • Synchronized Multi-View Image Acquisition • Silhouette Extraction • Silhouette Volume Intersection • Surface Shape Computation • Texture Mapping

  5. Silhouette Volume Intersection

  6. Silhouette Volume Intersection • Plane-to-Plane Perspective Projection • 3D voxel space is partitioned into a group of parallel planes

  7. Plane-to-Plane Perspective Projection • Project the object silhouette observed by each camera onto a common base plane • Project each base-plane silhouette onto the other parallel planes • Compute the 2D intersection of all silhouettes projected on each plane (a minimal sketch of this per-plane intersection follows)
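
The following is a minimal NumPy sketch of the per-plane intersection only; all names are hypothetical, and a synthetic disc silhouette stands in for the homography-warped camera silhouettes of the real system:

```python
import numpy as np

RES, N_PLANES = 64, 64    # in-plane resolution and number of parallel planes

def silhouette_on_plane(plane_z):
    """Toy stand-in for one camera's silhouette warped onto plane z = plane_z.

    Every camera here observes a sphere of radius 0.3 at the origin, so the
    warped silhouette on each slice is a disc; the real system would warp
    the observed image silhouette with a plane-induced homography instead.
    """
    xs = np.linspace(-1.0, 1.0, RES)
    x, y = np.meshgrid(xs, xs)
    r2 = 0.3**2 - plane_z**2
    return (x**2 + y**2) <= max(r2, 0.0)

N_CAMERAS = 3
volume = np.zeros((N_PLANES, RES, RES), dtype=bool)
for k, z in enumerate(np.linspace(-0.5, 0.5, N_PLANES)):
    cross_section = np.ones((RES, RES), dtype=bool)
    for _ in range(N_CAMERAS):      # 2D intersection (logical AND) per plane
        cross_section &= silhouette_on_plane(z)
    volume[k] = cross_section       # object cross-section on this slice

print("occupied voxels:", int(volume.sum()))
```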

  8. Linearized Plane-to-Plane Perspective Projection (LPPPP) algorithm

  9. Parallel Pipeline Processing on a PC cluster system • Pipeline stages: Silhouette Extraction → Projection to the Base-Plane → Base-Plane Silhouette Duplication → Object Cross Section Computation • The original diagram distinguishes nodes with a camera (which capture images and extract silhouettes) from nodes without one (a toy sketch of the pipeline structure follows)
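
A toy sketch of the pipeline structure, with threads standing in for cluster nodes and placeholder bodies for the four stages (all names are hypothetical; the real system distributes these stages across PCs):

```python
import queue
import threading

def run_stage(work, q_in, q_out):
    """Run one pipeline stage: consume items until the None sentinel."""
    for item in iter(q_in.get, None):
        q_out.put(work(item))
    q_out.put(None)                     # propagate shutdown downstream

q0, q1, q2, q3 = (queue.Queue() for _ in range(4))
stages = [
    (lambda f: f"sil({f})",  q0, q1),   # Silhouette Extraction (camera nodes)
    (lambda s: f"base({s})", q1, q2),   # Projection to the Base-Plane
    (lambda b: f"dup({b})",  q2, q3),   # Base-Plane Silhouette Duplication
]
threads = [threading.Thread(target=run_stage, args=s) for s in stages]
for t in threads:
    t.start()

for frame in range(3):                  # successive frames flow through
    q0.put(frame)
q0.put(None)

for result in iter(q3.get, None):       # Object Cross Section Computation
    print("cross-section input:", result)
for t in threads:
    t.join()
```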

  10. Parallel Pipeline Processing on a PC cluster system • average computation time for each pipeline stage

  11. 3D Video Generation • Synchronized Multi-View Image Acquisition • Silhouette Extraction • Silhouette Volume Intersection • Surface Shape Computation • Texture Mapping

  12. Deformable Mesh Model • Dynamic 3D shape reconstruction • Reconstruct the 3D shape for each frame • Estimate 3D motion by establishing correspondences between frames t and t+1 • Constraints • Intra-frame deformation: photometric, silhouette, and smoothness constraints • Inter-frame deformation: adds the 3D motion flow and inertia constraints

  13. Intra-frame deformation • Step 1: Convert the voxel representation into a triangle mesh [1]. • Step 2: Deform the mesh iteratively: • Step 2.1: Compute the force acting on each vertex. • Step 2.2: Move each vertex according to the force. • Step 2.3: Terminate if all vertex motions are small enough; otherwise go back to step 2.1 (a sketch of this loop follows). [1] Y. Kenmochi, K. Kotani, and A. Imiya. “Marching cubes method with connectivity.” In Proc. of 1999 International Conference on Image Processing, pages 361–365, Kobe, Japan, Oct. 1999.
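
A minimal sketch of that loop, assuming a generic force function; the toy `laplacian_force` below is a hypothetical stand-in for the photometric, smoothness, and silhouette-preserving forces defined on the next slides:

```python
import numpy as np

def deform(vertices, compute_force, step=0.1, eps=1e-4, max_iter=1000):
    """Iteratively deform an (N, 3) vertex array until forces vanish."""
    for _ in range(max_iter):
        forces = compute_force(vertices)        # step 2.1
        vertices = vertices + step * forces     # step 2.2
        if np.linalg.norm(forces, axis=1).max() < eps:
            break                               # step 2.3: converged
    return vertices

def laplacian_force(v):
    """Toy smoothing force: pull each vertex toward its ring neighbours."""
    return (np.roll(v, 1, axis=0) + np.roll(v, -1, axis=0)) / 2 - v

t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
ring = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
noisy = ring + 0.05 * np.random.default_rng(0).normal(size=ring.shape)
print(deform(noisy, laplacian_force)[:2])       # smoothed vertex positions
```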

  14. Intra-frame deformation • External Force: • satisfies the photometric constraint

  15. Intra-frame deformation • Internal Force: • smoothness constraint • Silhouette Preserving Force: • silhouette constraint • Overall Vertex Force: (the force definitions appeared as formula images in the slide; a hedged reconstruction follows)
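
The formulas themselves did not survive extraction. One plausible reconstruction of the overall vertex force, assuming the weighted-sum form of the authors' related CSVT paper (the symbols and coefficients are assumptions, not the slide's own):

```latex
% Hedged reconstruction; symbols and coefficients are assumptions.
% F_p: external (photometric) force, F_i: internal (smoothness) force,
% F_s: silhouette-preserving force.
F(v) = \alpha F_p(v) + \beta F_i(v) + \gamma F_s(v)
```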

  16. Performance Evaluation

  17. Dynamic Shape Recovery • Inter-frame deformation • By deforming the model at time t so that it satisfies the constraints at time t+1, we obtain the shape at t+1 and the motion from t to t+1 simultaneously.

  18. Dynamic Shape Recovery • Define the drift, inertia, and overall forces (given as formula images in the slide): • Drift Force: 3D motion flow constraint • Inertia Force: inertia constraint • Overall Vertex Force: (a hedged reconstruction follows)
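
Again the definitions were images; one plausible reconstruction extends the intra-frame force with drift and inertia terms (the symbols and coefficients are assumptions):

```latex
% Hedged reconstruction; symbols and coefficients are assumptions.
% F_d: drift force (3D motion flow constraint),
% F_n: inertia force (inertia constraint).
F(v) = \alpha F_p(v) + \beta F_i(v) + \gamma F_s(v)
       + \delta F_d(v) + \epsilon F_n(v)
```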

  19. Dynamic Shape Recovery

  20. 3D Video Generation • Synchronized Multi-View Image Acquisition • Silhouette Extraction • Silhouette Volume Intersection • Surface Shape Computation • Texture Mapping

  21. Viewpoint Independent Patch-Based Method • Select the most “appropriate” camera for each patch • For each patch pi: • Compute the locally averaged normal vector V_lmn using the normals of pi and its neighboring patches. • For each camera cj, compute the viewline vector V_cj directed toward the centroid of pi. • Select the camera c* for which the angle between V_lmn and V_cj is maximum. • Extract the texture of pi from the image captured by camera c*. (A sketch of the selection follows.)
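
A small sketch of that selection step, assuming precomputed patch normals and centroids (all names are hypothetical):

```python
import numpy as np

def select_cameras(normals, centroids, cam_centers):
    """Pick, per patch, the camera whose viewline makes the largest angle
    with the locally averaged patch normal (largest angle = smallest cosine).

    normals, centroids: (P, 3) arrays; cam_centers: (C, 3) array.
    """
    best = np.empty(len(centroids), dtype=int)
    for i, (n, c) in enumerate(zip(normals, centroids)):
        view = c - cam_centers                   # viewline vectors V_cj
        view /= np.linalg.norm(view, axis=1, keepdims=True)
        best[i] = np.argmin(view @ (n / np.linalg.norm(n)))
    return best

# One patch facing +z: the overhead camera (index 0) is selected.
normals = np.array([[0.0, 0.0, 1.0]])
centroids = np.array([[0.0, 0.0, 0.0]])
cams = np.array([[0.0, 0.0, 5.0], [5.0, 0.0, 0.0]])
print(select_cameras(normals, centroids, cams))  # -> [0]
```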

  22. Viewpoint Dependent Vertex-Based Texture Mapping Algorithm • Parameters • c: camera • p: patch • n: normal vector • I: RGB value

  23. Viewpoint Dependent Vertex-Based Texture Mapping Algorithm • A depth buffer of cj: B_cj • Record the patch ID and the distance to that patch from cj

  24. Viewpoint Dependent Vertex-Based Texture Mapping Algorithm • Visible vertex from camera cj: • the face of pi can be observed from camera cj • Project visible patches onto B_cj • Check the visibility of each vertex using the buffer (a sketch of this test follows)
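
A toy sketch of the depth-buffer visibility test, collapsing each patch to a single pixel for brevity (a real implementation would rasterise whole patches; all names are hypothetical):

```python
import numpy as np

def build_depth_buffer(patch_pixels, patch_depths, shape):
    """Keep, per pixel, the ID and distance of the nearest patch."""
    ids = np.full(shape, -1, dtype=int)
    depth = np.full(shape, np.inf)
    for pid, ((u, v), d) in enumerate(zip(patch_pixels, patch_depths)):
        if d < depth[v, u]:
            ids[v, u] = pid
            depth[v, u] = d
    return ids, depth

def vertex_visible(u, v, d, ids, depth, tol=1e-3):
    """A vertex is visible if nothing recorded at its pixel is nearer."""
    return ids[v, u] >= 0 and d <= depth[v, u] + tol

ids, depth = build_depth_buffer([(2, 3), (2, 3)], [1.0, 2.0], shape=(8, 8))
print(vertex_visible(2, 3, 1.0, ids, depth))   # True: the nearer patch wins
print(vertex_visible(2, 3, 2.0, ids, depth))   # False: occluded by patch 0
```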

  25. Viewpoint Dependent Vertex-Based Texture Mapping Algorithm • 1. Compute the RGB values of all vertices visible from each camera • 2. Specify the viewpoint eye • 3. For each patch pi, do steps 4 to 9 • 4. If pi is visible from the viewpoint, do steps 5 to 9 • 5. Compute the weight of each camera • 6. For each vertex of patch pi, do steps 7 to 8 • 7. Compute the normalized weight • 8. Compute the RGB value • 9. Generate the texture of patch pi by linearly interpolating the RGB values of its vertices (a sketch of steps 5–8 follows)
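
A sketch of the per-vertex blending in steps 5–8. The slide does not spell out the weight formula, so the dot-product weighting below is an assumption, as are all names:

```python
import numpy as np

def camera_weights(eye_dir, cam_dirs):
    """Assumed weighting: agreement between the virtual viewing direction
    and each camera's viewing direction, clamped at zero."""
    e = eye_dir / np.linalg.norm(eye_dir)
    return [max(float(np.dot(e, c / np.linalg.norm(c))), 0.0) for c in cam_dirs]

def blend_vertex(rgb_per_camera, weights):
    """Normalised weighted average over the cameras that see the vertex
    (entries of rgb_per_camera are None where the vertex is occluded)."""
    num, den = np.zeros(3), 0.0
    for rgb, w in zip(rgb_per_camera, weights):
        if rgb is not None:
            num += w * np.asarray(rgb, dtype=float)
            den += w
    return num / den if den > 0 else None

ws = camera_weights(np.array([0.0, 0.0, 1.0]),
                    [np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])])
print(blend_vertex([(255, 0, 0), (0, 0, 255)], ws))   # -> [255. 0. 0.]
```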

  26. Performance • Viewpoint Independent Patch-Based Method (VIPBM) • Viewpoint Dependent Vertex-Based Texture Mapping Algorithm (VDVBM) • VDVBM-1: including real images captured by camera cj itself • VDVBM-2: excluding real images • Mesh: converted from voxel data • D-Mesh: after deformation

  27. Performance

  28. Performance (figure: results plotted against frame number)

  29. Editing and Visualization System • Methods to generate camera-works • Key Frame Method • Automatic Camera-Work Generation Method • Virtual Scene Setup • Virtual camera • Background • Object

  30. Key Frame Method • Specify the parameters (positions, rotations of a virtual camera, object, etc.) for arbitrary key frames; intermediate frames are interpolated between the keys (a toy sketch follows)
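
A toy sketch of key-frame interpolation with two keys and linear blending (hypothetical parameters; a production system would likely use splines and quaternion interpolation for rotations):

```python
import numpy as np

# Camera parameters specified at two key frames (frame -> parameters).
keys = {0:  dict(pos=np.array([0.0, 1.0, 5.0]), yaw=0.0),
        60: dict(pos=np.array([3.0, 1.0, 2.0]), yaw=45.0)}

def camera_at(frame, keys):
    """Linearly interpolate camera parameters between two key frames."""
    (f0, k0), (f1, k1) = sorted(keys.items())
    t = np.clip((frame - f0) / (f1 - f0), 0.0, 1.0)
    return {"pos": (1 - t) * k0["pos"] + t * k1["pos"],
            "yaw": (1 - t) * k0["yaw"] + t * k1["yaw"]}

print(camera_at(30, keys))   # halfway between the two key frames
```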

  31. Automatic Camera-Work Generation Method • Object’s parameters (standing human): position, height, direction • The user only has to specify: • the framing of a picture • the appearance of the object from the virtual camera • From these we can compute the virtual camera parameters: • distance d between the virtual camera and the object • position of the virtual camera (xc, yc, zc)

  32. Automatic Camera-Work Generation Method • Distance d between the virtual camera and the object • Position of the virtual camera (xc, yc, zc) • (the computing formulas appeared as images in the slide; a hedged reconstruction follows)
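
The slide's formulas did not survive extraction. One plausible reconstruction from the stated inputs, where every symbol below is an assumption: H is the object height, θ the camera's vertical field of view, f the fraction of the frame the object should fill, (xo, yo, zo) the object position, and φ the viewing direction around the object:

```latex
% Hedged reconstruction; all symbols are assumptions, not the slide's own.
% The object of height H fills fraction f of a frame whose visible height
% at distance d is 2 d \tan(\theta/2), which gives d; the camera is then
% placed at distance d along direction \phi, at the object's mid-height.
d = \frac{H}{2 f \tan(\theta/2)},
\qquad
(x_c, y_c, z_c) = \left( x_o + d\cos\phi,\ y_o + d\sin\phi,\ z_o + H/2 \right)
```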

  33. Conclusion • A PC cluster system with distributed active cameras for real-time 3D shape reconstruction • Plane-based volume intersection method • Plane-to-Plane Perspective Projection algorithm • Parallel pipeline processing • A dynamic 3D mesh deformation method for obtaining accurate 3D object shape • A texture mapping algorithm for high-fidelity visualization • A user-friendly 3D video editing system

  34. References • T. Matsuyama and T. Takai. “Generation, visualization, and editing of 3D video.” In Proc. of Symposium on 3D Data Processing Visualization and Transmission, pages 234–245, Padova, Italy, June 2002. • T. Matsuyama, X. Wu, T. Takai, and T. Wada. “Real-time dynamic 3D object shape reconstruction and high-fidelity texture mapping for 3D video.” IEEE Trans. on Circuits and Systems for Video Technology, pages 357–369, 2004. • T. Wada, X. Wu, S. Tokai, and T. Matsuyama. “Homography based parallel volume intersection: Toward real-time reconstruction using active camera.” In Proc. of International Workshop on Computer Architectures for Machine Perception, pages 331–339, Padova, Italy, Sept. 2000.
