This project implements a sensor model based on perspective projection to enhance object detection in imagery. Using a pixel transformation matrix (P), camera calibration matrix (K), and rotation matrix (R), the model relates the world coordinates of objects to their pixel coordinates in images. Results show how detections of street lights, trash cans, and traffic signals change before and after the GIS sift and fusion steps, each with and without perspective projection. Next steps are testing and debugging the sensor model to improve accuracy and handle out-of-bounds pixel values.
Week 9: Web-Assisted Object Detection
Alejandro Torroella & Amir R. Zamir
Sensor Model: Perspective Projection
• Implemented a sensor model based on perspective projection (objects that are farther away appear smaller in the image than objects that are closer)
• Xi ≃ P·Xw in homogeneous coordinates, with P = K·R·[I | −C]
• Where:
• P = pixel transformation matrix
• K = camera calibration matrix
• R = rotation matrix
• C = world coordinate of the camera
• Xw = world coordinate of the object
• Xi = pixel coordinate of the object
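To make the projection concrete, here is a minimal Python sketch of the pinhole model described above. The focal length, image size, and the assumption that the principal point sits at the image center are illustrative values, not values from the project.

```python
import numpy as np

def project_to_pixel(K, R, C, Xw):
    """Project a 3-D world point Xw to pixel coordinates: Xi = K R (Xw - C)."""
    x_cam = R @ (Xw - C)          # world -> camera coordinates
    x_img = K @ x_cam             # camera -> homogeneous pixel coordinates
    return x_img[:2] / x_img[2]   # perspective divide

# Illustrative values (not from the project):
f, w, h = 800.0, 1280, 960                # focal length in pixels, image size
K = np.array([[f, 0, w / 2],              # principal point assumed at the
              [0, f, h / 2],              # image center (cx, cy) = (w/2, h/2)
              [0, 0, 1.0]])
R = np.eye(3)                             # camera aligned with world axes
C = np.array([0.0, 0.0, 0.0])             # camera at the world origin
Xw = np.array([2.0, 1.0, 10.0])           # object 10 units in front of camera

print(project_to_pixel(K, R, C, Xw))      # [800. 560.]
```

Points whose projected coordinates fall outside [0, w) × [0, h) are exactly the out-of-bounds pixels noted in the goals slide; an incorrect principal point shifts every projection by the same offset.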
Geometry Method results: Street Lights & Trash Cans (figures)
• Before
• After GIS sift (without perspective projection)
• After GIS sift (with perspective projection)
• After fusion (without perspective projection)
• After fusion (with perspective projection)
Geometry Method results: Traffic Signals & Street Lights (figures)
• Before
• After GIS sift (without perspective projection)
• After GIS sift (with perspective projection)
• After fusion (without perspective projection)
• After fusion (with perspective projection)
Goals for next week
• Test the GIS fusion with perspective projection thoroughly
• Possibly fix bugs in the implementation of the sensor model
• The principal point of the image is unknown; we need some way to estimate it.
• Some projected pixel values fall outside the bounds of the image.
• The cause is unclear, but it may be the rough estimate of the principal point and/or error in the conversion from geodetic to ECEF coordinates (see the sketch after this list).
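For the geodetic-to-ECEF step, here is a minimal sketch of the standard closed-form WGS84 conversion. This is the textbook formula, not the project's actual code; comparing the project's conversion against it would be one way to isolate the suspected error.

```python
import numpy as np

# WGS84 ellipsoid constants
A  = 6378137.0               # semi-major axis (meters)
E2 = 6.69437999014e-3        # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert geodetic (latitude, longitude in degrees; height in meters) to ECEF (meters)."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    n = A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)   # prime vertical radius of curvature
    x = (n + h) * np.cos(lat) * np.cos(lon)
    y = (n + h) * np.cos(lat) * np.sin(lon)
    z = (n * (1.0 - E2) + h) * np.sin(lat)
    return np.array([x, y, z])

# Sanity check: a point on the equator at the prime meridian
print(geodetic_to_ecef(0.0, 0.0, 0.0))   # ~[6378137, 0, 0]
```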