
Multi-view image stitching

Multi-view image stitching. Guimei Zhang, MESA (Mechatronics, Embedded Systems and Automation) Lab, School of Engineering, University of California, Merced. E: guimei.zh@163.com, Phone: 209-658-4838, Lab: CAS Eng 820 (T: 228-4398). June 16, 2014, Monday 4:00-6:00 PM



Presentation Transcript


  1. Multi-view image stitching Guimei Zhang, MESA (Mechatronics, Embedded Systems and Automation) Lab, School of Engineering, University of California, Merced. E: guimei.zh@163.com, Phone: 209-658-4838, Lab: CAS Eng 820 (T: 228-4398). June 16, 2014, Monday 4:00-6:00 PM. Applied Fractional Calculus Workshop Series @ MESA Lab @ UCMerced

  2. Introduction Why work on stitching? • Generate one panoramic image from a series of smaller, overlapping images. • The stitched image can also have a higher resolution than a panoramic image acquired by a panoramic camera. Moreover, a panoramic camera is more expensive.

  3. Introduction

  4. Introduction Applications: interactive panoramic viewing of images, architectural walk-throughs, multi-node movies, and other applications associated with modelling the 3D environment using images acquired from the real world (digital surface models, digital terrain models, true orthophotos, and full 3D models).

  5. Introduction True orthophoto and full 3D models

  6. Introduction • What is multi-view? Images captured at different times, from different viewpoints, or with different sensors, such as a camera, laser scanner, radar, or multispectral camera.

  7. 2. Method The flowchart of producing a panoramic image

  8. Introduction The main work is as follows: 1. image acquisition; 2. effective image registration; 3. image merging.

  9. Introduction • Image acquisition • Use one camera, at different times or different viewpoints, to capture images, so there is a rotation or translation transformation, or both. (R, T) • Use several cameras located at different viewpoints to capture images. (R, T) • Use different sensors, such as a camera, laser scanner, radar, or multispectral scanner, and fuse the multi-sensor information.

  10. Introduction Geometry of overlapping images. Camera and tripod for acquisition by camera rotations. We need to perform a coordinate transformation.

  11. Introduction • Since the orientation of each imaging plane differs with the acquisition method, the acquired images need to be projected onto a common surface, such as the surface of a cylinder or a sphere, before image registration can be performed. • That means we have to perform a coordinate transformation.
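The cylindrical coordinate transformation mentioned above can be sketched in a few lines. This is an illustrative pure-Python sketch, not code from the slides; it assumes a pinhole camera with focal length `f` in pixels and principal point `(cx, cy)`, which are parameters introduced here for illustration.

```python
import math

def to_cylinder(x, y, f, cx, cy):
    """Map an image pixel (x, y) onto cylindrical coordinates.

    f is the focal length in pixels; (cx, cy) is the principal point.
    Returns the warped pixel coordinates with the same origin convention.
    """
    xc, yc = x - cx, y - cy          # shift to the optical axis
    theta = math.atan2(xc, f)        # angle around the cylinder axis
    h = yc / math.hypot(xc, f)       # height, normalized by the ray length
    return f * theta + cx, f * h + cy
```

Once every image is warped this way, overlapping images of a rotating camera differ (ideally) only by a horizontal shift, which simplifies registration. Note that the principal point itself is a fixed point of the warp.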

  12. Introduction • Image registration: to form a larger image from a set of overlapping images, it is necessary to find the transformation matrix (usually a rigid transformation, with only the parameters R and T) that aligns the images. The process of image registration aims to find the transformation matrix that aligns two or more overlapping images; this is possible because the projection from the viewpoint through any position in the aligned images into the 3D world is unique.

  13. Introduction 5th frame image 6th frame image Registered image

  14. Multi-view point-cloud registration and stitching based on SIFT features • Motivation • Method • Experiments • Conclusion • Discussion SIFT: scale-invariant feature transform

  15. 1. Motivation Problems with existing methods for multi-view point-cloud registration in large scenes: • Restricted by the camera's viewing angle, images from a single viewpoint or two viewpoints can only capture local information about the scene.

  16. 1. Motivation • Existing methods need to add special markers to large reconstruction scenes. • Existing methods also need iterative ICP (iterative closest point) calculation, and cannot eliminate the interference of holes and invalid 3D point clouds.

  17. 2. Method • Based on the work of Bendels [8], we put forward a new algorithm for multi-view registration and stitching: 1. Generate a 2D texture image of the effective point clouds; 2. Extract SIFT features and match them in the 2D effective texture image;

  18. 2. Method 3. Then we map the extracted SIFT features and their matching relationships onto the 3D point-cloud data, obtaining features and matching relationships for the multi-view 3D point clouds; 4. Finally, we achieve multi-view point-cloud stitching.

  19. 2. Method 2.1 Generating the texture image of effective point-cloud data Why: 3D point clouds inevitably contain holes and noise. To reduce their effect on the registration and stitching precision of multi-view point clouds, we use a mutual mapping between the 3D point clouds and the 2D texture image to obtain the texture image of the effective point-cloud data.

  20. 2. Method 2.1 Generating the texture image of effective point-cloud data • How: First, we project the 3D point clouds onto a 2D plane; second, we apply 8-neighborhood seed filling and area filling to the binary projection graph, obtaining the projection graph of the effective point-cloud data.
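The 8-neighborhood seed-filling step can be sketched as a breadth-first flood fill over the binary projection graph. This is a minimal illustrative sketch, not the authors' implementation; the function name and the grid representation (lists of 0/1) are choices made here for illustration.

```python
from collections import deque

def seed_fill_8(grid, seed):
    """8-neighborhood seed (flood) fill on a binary occupancy grid.

    grid: list of lists of 0/1, where 1 marks pixels covered by
    projected point-cloud data. seed: (row, col) inside the region.
    Returns the set of (row, col) cells connected to the seed.
    """
    rows, cols = len(grid), len(grid[0])
    target = grid[seed[0]][seed[1]]   # fill cells with the seed's value
    filled, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):     # all 8 neighbors (and self, skipped)
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and (nr, nc) not in filled
                        and grid[nr][nc] == target):
                    filled.add((nr, nc))
                    queue.append((nr, nc))
    return filled
```

Seeding the fill from the background lets one distinguish true exterior background from interior holes: zero-valued cells not reached by the background fill are holes, which can then be filled as part of the effective region.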

  21. 2. Method 2.1 Generating the texture image of effective point-cloud data Texture image of effective point-cloud data of a scene

  22. 2. Method 2.2 Extraction and matching of SIFT features • Extract SIFT features SIFT is a local feature proposed by David Lowe [7]; the extracted SIFT features are invariant under translation, scale, and rotation. This paper uses the SIFT algorithm to extract 2D features, then uses the RANSAC method [9] to eliminate erroneous matches.
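As a rough illustration of how RANSAC rejects erroneous matches, the sketch below fits the simplest possible motion model, a pure 2D translation, between putative keypoint correspondences. This is a hypothetical simplification made here for illustration: the slides do not specify the model, and in practice a richer model (e.g. a homography or rigid transform) would be fitted the same way with a larger minimal sample.

```python
import random

def ransac_translation(matches, threshold=2.0, iters=200, seed=0):
    """Reject mismatches with RANSAC, using a pure-translation model.

    matches: list of ((x1, y1), (x2, y2)) putative correspondences.
    Returns (best translation (dx, dy), list of inlier matches).
    """
    rng = random.Random(seed)
    best_t, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(matches)   # minimal sample: 1 pair
        dx, dy = x2 - x1, y2 - y1                  # hypothesized translation
        inliers = [((a, b), (c, d)) for (a, b), (c, d) in matches
                   if abs(c - a - dx) <= threshold
                   and abs(d - b - dy) <= threshold]
        if len(inliers) > len(best_inliers):       # keep the best consensus
            best_t, best_inliers = (dx, dy), inliers
    return best_t, best_inliers
```

The matches that disagree with the best consensus model are exactly the erroneous matches that RANSAC eliminates before the transformation is estimated.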

  23. 2. Method • 2.3 3D feature point extraction Each pixel of the effective texture image, obtained by point-cloud texture mapping, has a one-to-one correspondence with the 3D point clouds [10], as shown in the figure. Correspondence relationship

  24. 2. Method The method is as follows: (1) We extract SIFT feature points in the 2D texture image, then calculate the coordinates of each point. (2) Because the 2D feature points and the 3D point clouds have a one-to-one correspondence, we can calculate the coordinates of the corresponding feature points in the 3D point clouds.
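Given the one-to-one correspondence, step (2) amounts to a lookup from pixel coordinates to 3D coordinates. The sketch below is illustrative only; `pixel_to_point` is a hypothetical mapping assumed to have been built during the texture-mapping stage, and pixels that fell in holes simply have no entry.

```python
def lift_features_to_3d(features_2d, pixel_to_point):
    """Map 2D SIFT keypoint locations to their 3D point-cloud coordinates.

    features_2d: list of (u, v) pixel coordinates of SIFT keypoints.
    pixel_to_point: dict {(u, v): (x, y, z)} built during texture mapping;
    pixels with no valid 3D point have no entry and are skipped.
    """
    lifted = []
    for uv in features_2d:
        p = pixel_to_point.get(uv)
        if p is not None:            # keep only features on valid points
            lifted.append((uv, p))
    return lifted
```

Because only valid 3D points appear in the mapping, every lifted feature is guaranteed an effective 3D counterpart, which is what lets the method skip ICP-style correspondence search later.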

  25. 2. Method • 2.4 3D point-cloud stitching Multi-view 3D point-cloud stitching transforms point clouds in different coordinate systems into a common one; the main problem is to estimate the coordinate transformation R (rotation matrix) and T (translation vector).

  26. 2. Method • From the matching point pairs obtained in the steps above, we can estimate the coordinate transformation; that is, we estimate the parameters R and T that minimize the objective function f(R, T) = Σᵢ ‖qᵢ − (R pᵢ + T)‖², where pᵢ and qᵢ are matched pairs of 3D points from two consecutive viewpoints.
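With correspondences already known from SIFT matching, this objective has a standard closed-form least-squares solution via SVD (the method of Arun et al.), which is one way to avoid ICP's iterations. The sketch below assumes NumPy is available; it is an illustration of that closed-form solution, not the authors' code.

```python
import numpy as np

def estimate_rigid_transform(P, Q):
    """Closed-form minimizer of sum_i ||R p_i + T - q_i||^2 (Arun et al.).

    P, Q: (n, 3) arrays of matched 3D points from two viewpoints.
    Returns (R, T) such that Q ~= P @ R.T + T.
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)   # centroids
    H = (P - p_bar).T @ (Q - q_bar)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in degenerate configurations.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    T = q_bar - R @ p_bar
    return R, T
```

Centering removes T from the problem, the SVD solves for the optimal rotation of the centered clouds, and T is recovered from the centroids; the whole estimate is non-iterative.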

  27. 3. Experiments (a) SIFT features of effective texture image 1 (b) SIFT features of effective texture image 2 (c) SIFT features of 3D point clouds 1 (d) SIFT features of 3D point clouds 2

  28. 3. Experiments Ref [8] Ref [1] Our method Experimental results

  29. 3. Experiments Performance evaluation criteria • Accuracy: registration rate and stitching error rate • Efficiency: time consumed

  30. 3. Experiments


  33. 4. Conclusion • We use SIFT features of the effective texture image to achieve registration and stitching of dense multi-view point clouds. We obtain the texture image of effective point clouds through mutual mapping between the 3D point clouds and the 2D texture image; this eliminates the interference of holes and invalid point clouds.

  34. 4. Conclusion • Ensuring that every 2D feature has a corresponding effective 3D feature in the 3D point clouds eliminates unnecessary erroneous matches, so both matching efficiency and matching precision are improved.

  35. 3. Our algorithm uses the correct matching point pairs to stitch, so it avoids the stepwise iteration of the ICP algorithm and decreases the computational complexity of matching; it also reduces the stitching error caused by erroneous matches.


  37. Thanks
