
Presentation Transcript


  1. A Multi-Sensor Fusion System for Moving Object Detection and Tracking in Urban Driving Environments ICRA 2014 June 2nd, 2014 Hyunggi Cho, Young-Woo Seo, B.V.K. Vijaya Kumar, and Raj Rajkumar

  2. Contents
   • Introduction - "Self-Driving Vehicles"
   • Perception System for Self-Driving Vehicles
   • Vision-Based Object Detection
   • Multi-Sensor Fusion for Object Tracking
   • Results & Future Work

  3. Introduction - "Self-Driving Vehicles"
   • Technology and its effect on society: 125 years from Karl Benz's first Motorwagen in 1885 [1] to a world vehicle population of one billion in 2010 [2]
   [1] Wikipedia 'Automobile' page, http://en.wikipedia.org/wiki/Automobile
   [2] John Sousanis, "World Vehicle Population Tops 1 Billion Units", Wards Auto. Retrieved 17 July 2012

  4. Brief History of Autonomous Driving
   • 2010- : Google Car and CMU SRX: real urban driving
   • 2007: DARPA Urban Challenge: simulated urban driving
   • 2004, 2005: DARPA Grand Challenge: high-speed off-road driving
   • 1995: CMU Navlab "No Hands Across America": highway driving
   • 1987-1995: EUREKA Prometheus Project: highway driving

  5. Perception System for Self-Driving Vehicles
   • Road Structure
   • Moving Objects
     - Kinematic info.: position, velocity, acceleration, prediction
     - Geometric info.: size, shape
     - Semantic info.: object class, behavior, intention, etc.
   • Static Map
   • Traffic Context
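
   As an illustration of the three information layers for a moving object, here is a minimal sketch of how a tracked object could bundle kinematic, geometric, and semantic attributes (the class and field names are assumptions for illustration, not from the paper):

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class TrackedObject:
    """Hypothetical container for one tracked moving object."""
    # Kinematic info.: position, velocity, acceleration (vehicle frame)
    position: np.ndarray = field(default_factory=lambda: np.zeros(2))      # x, y [m]
    velocity: np.ndarray = field(default_factory=lambda: np.zeros(2))      # vx, vy [m/s]
    acceleration: np.ndarray = field(default_factory=lambda: np.zeros(2))  # ax, ay [m/s^2]
    # Geometric info.: size of the bounding model
    length: float = 0.0   # [m]
    width: float = 0.0    # [m]
    # Semantic info.: object class (behavior/intention labels would go here too)
    object_class: str = "unknown"   # e.g. "car", "pedestrian"

    def predict(self, dt: float) -> np.ndarray:
        """Constant-acceleration prediction of the position dt seconds ahead."""
        return self.position + self.velocity * dt + 0.5 * self.acceleration * dt**2
```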

  6. Automotive Sensors
   (Chart comparing LIDAR, camera, and radar in terms of amount of information versus cost)

  7. Autonomous Cadillac SRX
   (Sensor layout diagram: GPS, LIDARs, cameras including an IR camera, and radars mounted on the vehicle)
   Towards a Viable Autonomous Driving Research Platform, Junqing Wei, Jarrod Snider, Junsung Kim, John Dolan, and Raj Rajkumar, IEEE IV, 2013

  8. Sensor Coverage
   • LIDAR: ~120 m range
   • Radar 1: ~250 m range
   • Radar 2: ~174 m range
   (Diagram of coverage regions relative to the car orientation)
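
   As a small illustration of how such coverage regions can be used, the sketch below tests whether a point lies inside a sensor's pie-slice coverage. Only the ranges come from the slide; the field-of-view and mounting angle in the example are assumptions:

```python
import numpy as np

def in_coverage(point_xy, max_range, fov_deg, mount_yaw_deg=0.0):
    """Return True if a point (vehicle frame, x forward) lies inside a
    pie-slice coverage region defined by range and field of view."""
    x, y = point_xy
    r = np.hypot(x, y)
    bearing = np.degrees(np.arctan2(y, x)) - mount_yaw_deg
    bearing = (bearing + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
    return r <= max_range and abs(bearing) <= fov_deg / 2.0

# Example: long-range radar looking straight ahead (18-degree FOV is an assumption)
print(in_coverage((200.0, 10.0), max_range=250.0, fov_deg=18.0))
```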

  9. Sensor Coverage: Visualization of LIDAR measurements

  10. Goal
   • To track multiple moving objects reliably using radar, LIDAR, and camera
   (Figures: raw measurements from radar, from LIDAR, and from camera)

  11. Challenges for Moving Object Tracking
   • Complexity of urban and highway traffic environments
   (Example images: urban environments, highway environments)

  12. Challenges for Moving Object Tracking
   • Limited resolution of sensors
   *Velodyne HDL-64E vs. six multi-planar LIDARs
   *Video from the MIT Urban Challenge team's website

  13. Main Contributions
   • Developed a multi-sensor fusion system using the Kalman filter, based on Boss' tracking system*
   • Introduced a vision sensor to improve tracking performance as well as the level of scene understanding
   Architecture diagram:
   - Sensor Layer: RADAR, camera, and LIDAR readers take in raw scans and images; local classification & proposal generation and local feature validation & association turn them into point targets (radar), vision targets (camera), and edge targets (LIDAR)
   - Fusion Layer: multi-sensor fusion with a Kalman filter, model selection, data association, and object management
   *Michael S. Darms et al., Obstacle Detection and Tracking for the Urban Challenge, IEEE Transactions on ITS, 2009
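
   The fusion layer tracks objects with a Kalman filter; below is a minimal constant-velocity predict/update sketch for a single track with a position-only measurement such as a radar point target. The state layout, noise values, and function names are assumptions, not the authors' implementation:

```python
import numpy as np

def kf_predict(x, P, dt, q=1.0):
    """Constant-velocity prediction. State x = [px, py, vx, vy]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    Q = q * np.eye(4)                      # simplistic process noise (assumption)
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, r=0.5):
    """Update with a 2-D position measurement z (e.g., a radar point target)."""
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    R = r * np.eye(2)                      # measurement noise (assumption)
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(4) - K @ H) @ P
    return x_new, P_new

# One predict/update cycle for a single track
x, P = np.zeros(4), np.eye(4) * 10.0
x, P = kf_predict(x, P, dt=0.1)
x, P = kf_update(x, P, z=np.array([12.0, 3.5]))
```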

  14. Deformable Part Model with HOG Features
   • Deformable Part Model (DPM)*
     - 1 root filter + 6 part filters + deformation models
     - Root filter: global shape pattern of an object
     - Part filters: detailed shape patterns for each part of the object
     - Deformation model to handle the high variability of the object
   • Example visualization of DPM (e.g., vehicle rearview model): 1 root filter, 6 part filters, deformation model
   *P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan, Object Detection with Discriminatively Trained Part Based Models, IEEE Transactions on PAMI, 2010
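
   To make the scoring concrete, here is a rough sketch of how a DPM scores one detection window, following the general form in Felzenszwalb et al.: the root-filter response plus, for each part, the best part-filter response after subtracting a quadratic deformation cost. This is an illustrative simplification, not the authors' code; all names are assumptions:

```python
import numpy as np

def dpm_score(root_resp, part_resps, anchors, deform_costs):
    """Score one window: root response plus best placement of each part.

    root_resp    : scalar response of the root filter at this window
    part_resps   : list of 2-D arrays, response map of each part filter
    anchors      : list of (row, col) anchor positions for each part
    deform_costs : list of (d1, d2) quadratic penalty weights per part
    """
    score = root_resp
    for resp, (ar, ac), (d1, d2) in zip(part_resps, anchors, deform_costs):
        rows, cols = np.indices(resp.shape)
        # quadratic deformation cost for displacing the part from its anchor
        penalty = d1 * (rows - ar) ** 2 + d2 * (cols - ac) ** 2
        score += np.max(resp - penalty)    # best part placement wins
    return score
```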

  15. Vision-Based Object Detection: Pedestrian
   • Pedestrian detection on the Caltech set: Set04_Video09
   Real-time Pedestrian Detection with Deformable Part Models, H. Cho, P. Rybski, A. Bar-Hillel, and W. Zhang, IEEE IV, 2012

  16. Vision-Based Object Detection: Vehicle (14,453)
   • Truck, Bus Rearview Model (56x40)
   • SUV, Sedan Rearview Model (48x48)
   • Small Car Rearview Model (16x16)

  17. Vision-Based Object Detection: Vehicle (14,453)
   • Vehicle detection on highway (legend: small model, normal model)
   Real-time Pedestrian Detection with Deformable Part Models, H. Cho, P. Rybski, A. Bar-Hillel, and W. Zhang, IEEE IV, 2012
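
   One plausible reading of the "small model / normal model" legend is that the detector picks a rearview model by the expected pixel size of the candidate. A minimal sketch under that assumption (the selection rule, thresholds, and names are illustrative, not from the paper; the template sizes follow the previous slide):

```python
# Hypothetical model selection by expected pixel size of the candidate window.
REARVIEW_MODELS = {
    "small_car": 16,     # 16x16 template
    "sedan_suv": 48,     # 48x48 template
    "truck_bus": 56,     # 56x40 template (using the larger side)
}

def select_rearview_model(window_px: int) -> str:
    """Pick the rearview model whose template size best matches the window."""
    return min(REARVIEW_MODELS, key=lambda m: abs(REARVIEW_MODELS[m] - window_px))

print(select_rearview_model(20))   # -> 'small_car'
print(select_rearview_model(50))   # -> 'sedan_suv'
```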

  18. Tracking Process
   (Block diagram: features from camera, features from radar, and features from LIDAR feed data association and sensor fusion)

  19. Sensor Characterization
   • Sensors: radar, LIDAR, camera
   • Measurements vs. distance:
     Distance   No. of LIDAR points   Pixel size of car in image
     20 m       50                    58 x 58
     40 m       20                    32 x 32
     60 m       8                     20 x 20
     80 m       4                     14 x 14
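
   As a sanity check on the "pixel size of car" column, the apparent width of a car under a pinhole camera model scales as f·W/Z. The sketch below uses an assumed focal length and car width (not values from the paper) and roughly reproduces the table:

```python
# Rough pinhole-camera check: apparent width in pixels is about f_px * W / Z.
def apparent_width_px(distance_m, car_width_m=1.8, focal_px=640.0):
    return focal_px * car_width_m / distance_m

for d in (20, 40, 60, 80):
    print(f"{d} m: ~{apparent_width_px(d):.0f} px wide")
# At 20 m this gives ~58 px, roughly consistent with the 58 x 58 table entry.
```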

  20. Sensor Fusion in Practice
   • 2D box model
   • Sensor Layer: features from LIDAR, camera, and RADAR are validated
   • Fusion Layer: validated features become measurements (observations, proposals, movement observations)
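
   For illustration, a minimal sketch of how a 2D box track and the measurement types named on this slide might be represented. The field names and the toy update rule are assumptions, not the authors' data structures:

```python
from dataclasses import dataclass

@dataclass
class Box2D:
    """Hypothetical 2-D box track: center, heading, and extents."""
    cx: float          # center x [m]
    cy: float          # center y [m]
    yaw: float         # heading [rad]
    length: float      # box length [m]
    width: float       # box width [m]

@dataclass
class Measurement:
    """Fusion-layer measurement built from validated sensor features."""
    kind: str          # 'observation', 'proposal', or 'movement_observation'
    source: str        # 'lidar', 'camera', or 'radar'
    data: dict         # e.g. {'cx': ..., 'cy': ...}

def apply(box: Box2D, m: Measurement) -> Box2D:
    # Toy update: only positional observations move the box center here;
    # a real fusion layer would route each kind through the Kalman filter.
    if m.kind == "observation" and "cx" in m.data:
        box.cx, box.cy = m.data["cx"], m.data["cy"]
    return box
```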

  21. Data Association in Practice
   • Global Nearest Neighbor (GNN) algorithm
   (Illustrations at time t and time t+1 for each sensor: a) camera - projected vs. detected points; b) LIDAR - predicted vs. extracted edges of an edge target; c) radar - predicted vs. detected points)
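
   GNN resolves all track-to-measurement assignments jointly by minimizing the total association cost. A compact sketch using a Euclidean-distance cost and a gating threshold (both assumptions, not the authors' implementation):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def gnn_associate(track_preds, detections, gate=3.0):
    """Global Nearest Neighbor association on Euclidean distance.

    track_preds : (T, 2) predicted track positions
    detections  : (D, 2) detected feature positions
    gate        : maximum allowed distance for a valid pairing [m] (assumption)
    Returns a list of (track_idx, detection_idx) pairs.
    """
    cost = np.linalg.norm(track_preds[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)          # globally optimal assignment
    return [(t, d) for t, d in zip(rows, cols) if cost[t, d] <= gate]

# Example: two predicted tracks, three detections
tracks = np.array([[10.0, 2.0], [25.0, -1.0]])
dets = np.array([[10.5, 2.2], [40.0, 5.0], [24.0, -0.8]])
print(gnn_associate(tracks, dets))   # -> [(0, 0), (1, 2)]
```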

  22. Experiments
   • CMU-to-Airport dataset
     - 25 minutes of driving from the CMU campus to Pittsburgh International Airport
     - Contains some urban scenarios, but mostly highway driving scenarios
     - All sensor data collected from radar, LIDAR, and camera, as well as vehicle state
   • Evaluation
     - Manually investigated tracking performance over the entire route
     - Scenario I: vehicle tracking in a highway environment
     - Scenario II: vehicle tracking in an urban environment

  23. Tracking Results: Vehicle Tracking
   • Scenario I: Highway Environments
   (Legend: 3D box model, 2D box model, point model)

  24. Tracking Results: Vehicle Tracking
   • Scenario II: Urban Environment
   (Legend: 3D box model, 2D box model, point model)

  25. Quantitative Evaluation
   • Quantitative results
   • Remaining issues from the CMU-to-Airport dataset
     - False tracking from road boundary structures (e.g., guard rails, K-rails, fences)
     - Need for a variable-size box model for truck and bus tracking (e.g., erroneous feature extraction)
     - Tracking of slow-moving objects (e.g., pedestrians)

  26. Tracking Results: Issues • Mirroring Target in Tunnel

  27. Toward Holistic Scene Understanding
   • Goal: improve tracking performance by utilizing traffic contextual cues
     - Sidewalk vs. pedestrians
     - Traffic light vs. vehicles

  28. Thank You!
   • CMU SRX Team
