Collision recognition from a video – part A

Presented by: Adi Vainiger & Eyal Yaacoby, under the supervision of Netanel Ratner
Laboratory of Computer Graphics & Multimedia, Department of Electrical Engineering, Technion

Introduction:
Driving is a task that requires dividing one's attention. One of its many challenges is noticing vehicles approaching on a collision course from behind. There is therefore a need for a system that automatically recognizes vehicles that are about to collide with the user and warns him/her. Our solution is an algorithm that takes the video feed of a single, simple camera, recognizes moving vehicles in the video, and predicts whether they are about to collide with the user. Part A of this project focuses on the algorithm itself, without taking real-time constraints into account.

Project goal:
Designing an algorithm that recognizes possible collision trajectories of vehicles, using a video taken from a camera directed toward the rear of the driving direction.

System outline:
For each pair of frames (frame i-N and frame i), the system runs four stages:
(1) Feature Detection and Matching: find interest points and image descriptors in each frame, then match the interest points between the frames to obtain matches.
(2) 3D Reconstruction: estimate the transformation between the frames (via the fundamental matrix, computed from the static points) and triangulate the matches into 3D reconstructed points.
(3) Recognition and Differentiation Between Static and Moving Objects: match the reconstructions of each point across frame pairs and calculate a variance for each point; low variance indicates a static feature point, high variance a dynamic one.
(4) Collision Detection: estimate the scattering of the dynamic points' reconstructions and decide whether there is a collision; if so, raise an alert.

(1) Feature Detection and Matching:
In this stage we find interest points and their descriptors, then match them between the two frames. The stage was implemented using ASIFT. Though roughly 50x slower than SIFT, ASIFT was chosen for accuracy reasons and because it finds more features. A minimal sketch of this stage is given below (Sketch 1).

(2) 3D Reconstruction:
We reconstruct the 3D world out of two frames, based on the camera pinhole model, using the static points only (the poster's Hebrew caption reads: "reconstruction of the 3D world based on the static points only"):
1. Calculating the fundamental matrix for each pair of frames, estimated from the static points.
2. Estimating the essential matrix using the calibration information of the camera, and extracting the transformation between the frames out of the essential matrix.
3. Calculating a first-order triangulation of the matched points.
Sketch 2 below outlines these three steps.

(3) Recognition and Differentiation Between Static and Moving Objects:
Differentiation of moving points from static points is based on the normalized variance of the reconstructed matches of each point. We normalize the variance by the angle and the distance from the camera, since reconstruction ambiguity correlates well with both: distant points and points seen at grazing angles have high ambiguity, while nearby points have low ambiguity. In the result plots, red points have high variance and are classified as dynamic; green points have low variance and are classified as static. Sketch 3 below illustrates the classification.

(4) Collision Detection:
We estimate whether the dynamic points are moving toward the camera using their scattering throughout the reconstructions. On a collision course, the lines between the camera centers and the object are almost parallel; the reconstructions of such a point are therefore very distant from one another, as shown in the results. Sketch 4 below illustrates this test.
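Sketch 1: the poster implements this stage with ASIFT; as a minimal, hedged stand-in, the following Python sketch uses OpenCV's plain SIFT (recent OpenCV builds also expose an ASIFT wrapper, cv2.AffineFeature). The ratio-test threshold is our assumption, not a value from the poster.

```python
import cv2

def detect_and_match(img1, img2, ratio=0.75):
    """Stage 1: interest points + descriptors in both frames, then matching."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    # Lowe's ratio test over 2-nearest-neighbor matches rejects ambiguous pairs.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]
    pts1 = [kp1[m.queryIdx].pt for m in good]   # matched pixel coordinates
    pts2 = [kp2[m.trainIdx].pt for m in good]
    return pts1, pts2
```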
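Sketch 2: a hedged sketch of the three reconstruction steps, assuming OpenCV and a known 3x3 calibration matrix K. Here cv2.triangulatePoints performs linear (DLT) triangulation and merely stands in for the first-order triangulation the poster names, and RANSAC inlier selection approximates "estimating the fundamental matrix from the static points".

```python
import cv2
import numpy as np

def reconstruct_two_frames(pts1, pts2, K):
    """Stage 2: two-frame reconstruction under the pinhole camera model."""
    pts1 = np.asarray(pts1, dtype=np.float64)
    pts2 = np.asarray(pts2, dtype=np.float64)
    # Step 1: fundamental matrix; RANSAC suppresses outliers, e.g. points
    # sitting on moving objects.
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    keep = inliers.ravel().astype(bool)
    pts1, pts2 = pts1[keep], pts2[keep]
    # Step 2: essential matrix from F and the calibration, then the relative
    # rotation R and translation t between the frames.
    E = K.T @ F @ K
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    # Step 3: triangulate, placing camera 1 at the origin.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    Xh = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # homogeneous 4xN
    return (Xh[:3] / Xh[3]).T                           # Euclidean Nx3
```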
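Sketch 3: the poster does not state its exact normalization, so the normalizer below is a hypothetical stand-in that grows with the distance from the camera and with how closely the viewing ray aligns with the camera's motion direction (both correlate with reconstruction ambiguity, per the poster); the threshold is likewise an assumed placeholder.

```python
import numpy as np

def classify_points(recons, cam_center, motion_dir, thresh=1.0):
    """Stage 3: recons is an R x N x 3 stack of the matched reconstructions
    of N points over R frame pairs. Returns True where a point is dynamic."""
    mean = recons.mean(axis=0)                               # N x 3 centroids
    var = ((recons - mean) ** 2).sum(axis=2).mean(axis=0)    # raw variance
    rays = mean - cam_center                                 # viewing rays
    dist = np.linalg.norm(rays, axis=1)
    # Cosine of the angle between each ray and the (unit) motion direction.
    cos_ang = np.abs(rays @ motion_dir) / np.maximum(dist, 1e-9)
    # Hypothetical normalizer: ambiguity grows with distance and with rays
    # nearly parallel to the motion (small triangulation angle).
    normalized_var = var / (dist ** 2 * (1.0 + cos_ang))
    return normalized_var > thresh      # high variance -> dynamic point
```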
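Sketch 4: for stage 4, under the same assumptions, the scattering of a single dynamic point's reconstructions is measured as the mean distance from their centroid; a large spread indicates near-parallel camera-to-object lines, i.e. a possible collision course. The threshold is an assumed placeholder in the scene's units.

```python
import numpy as np

def collision_alert(dyn_recons, spread_thresh=5.0):
    """Stage 4: dyn_recons is an R x 3 stack of one dynamic point's
    reconstructions over R frame pairs. On a collision course the lines
    from the camera centers to the object are almost parallel, so the
    triangulated positions scatter widely."""
    spread = np.linalg.norm(dyn_recons - dyn_recons.mean(axis=0), axis=1).mean()
    return spread > spread_thresh
```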
Synthetic Testing Environment:
The algorithm was first evaluated in a 3D synthetic world over several simulation scenarios, including a vehicle on a collision course and a vehicle moving in the same direction as the camera.

Results:
The system takes a video from a camera mounted at an angle to the direction of movement. For each window of time (~2.5 seconds) in the video, the system looks at pairs of frames a second apart. Each such pair of frames is processed by stages 1 and 2; once enough reconstructions have accumulated, the algorithm performs stages 3 and 4 (a sketch of this windowed driver loop, Sketch 5, closes this transcript). Scenario 4 is a collision scenario and the rest are non-collision scenarios. Ideal results for the synthetic environment: 2% false negatives and 12% false positives.

[Result figures on the poster: reconstruction results; 3D reconstruction of the world; static point reconstruction; SIFT vs. ASIFT; static and moving object differentiation on a real movie.]

Conclusions:
On the synthetic environment, the system produces good results. When turning to real movies, we encountered several issues: matching features on dynamic objects did not work (due to rolling shutter), and the classification did not work well. However, under certain conditions we still get valuable results. Further research should allow much better results; we believe that a tracking algorithm can solve most of the issues that we saw.

Our thanks to Hovav Gazit and the CGM Lab for the support.
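Sketch 5: a hedged driver loop for the windowed processing described in the results, reusing Sketches 1 and 2; the per-point matching of reconstructions across frame pairs, which the poster performs before stage 3, is elided here.

```python
def process_window(frames, K, fps):
    """Stages 1-2 on one ~2.5 s window of frames, pairing frames one
    second apart; stages 3-4 run once enough reconstructions exist."""
    gap = int(1.0 * fps)                       # frames one second apart
    recon_sets = []
    for i in range(gap, len(frames)):
        pts1, pts2 = detect_and_match(frames[i - gap], frames[i])   # Sketch 1
        recon_sets.append(reconstruct_two_frames(pts1, pts2, K))    # Sketch 2
    # ...match the reconstructions per tracked point into an R x N x 3 stack,
    # then classify with Sketch 3 and alert with Sketch 4.
    return recon_sets
```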
