Status as of Last Review

Presentation Transcript


  1. Status as of Last Review • Development • Channelizer localization • Orientation-dependent color model • Evaluation of wide-FOV camera (product line camera)

  2. Progress Since Last Review • Development • Shape filtering • Kernel-based sign tracking • Preliminary results of curb detection

  3. Curbs

  4. Damaged by Curbs

  5. Objectives • Develop reliable methods of detecting, localizing, and classifying a sufficient set of indicative features associated with curbs, using an in-vehicle vision sensor with a backward-looking view • Learn to recognize curbs in order to: • Understand an appropriate parking spot as defined by the curbs when reversing or parallel parking • Identify the boundary of a roadway in urban driving

  6. Approach • Extend prior methods (multi-feature classification) • Use visual methods of detection (color, texture, shape) • Learn classification from a large training set • Utilize camera calibration information to understand 3D geometry and change viewpoints • Parallelize for multiple features

  7. Approach • Extend prior methods (multi-feature classification) • Use visual methods of detection (color, texture, shape) • Learn classification from a large training set • Utilize camera calibration information to understand 3D geometry and change viewpoints • Parallelize for multiple features
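
A minimal sketch of the multi-feature classification idea from the Approach slides, assuming per-region color, texture, and shape descriptors are stacked into one vector; the specific descriptors and the SVM are illustrative choices, not the project's actual implementation (color_feature, texture_feature, shape_feature, and train_classifier are hypothetical helpers).

```python
# Sketch: multi-feature (color / texture / shape) classification of image regions.
# Descriptor choices and the SVM are illustrative assumptions, not the project's method.
import cv2
import numpy as np
from sklearn.svm import SVC

def color_feature(patch):
    """Hue/saturation histogram as a simple color descriptor."""
    hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [16, 8], [0, 180, 0, 256])
    return cv2.normalize(hist, None).flatten()

def texture_feature(patch):
    """Gradient-magnitude histogram as a crude texture descriptor."""
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    mag = cv2.magnitude(cv2.Sobel(gray, cv2.CV_32F, 1, 0),
                        cv2.Sobel(gray, cv2.CV_32F, 0, 1))
    hist, _ = np.histogram(mag, bins=16, range=(0, 255))
    return hist / (hist.sum() + 1e-6)

def shape_feature(patch):
    """Hu moments of a binarized patch as a simple shape descriptor."""
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    return cv2.HuMoments(cv2.moments(binary)).flatten()

def multi_feature(patch):
    return np.concatenate([color_feature(patch), texture_feature(patch), shape_feature(patch)])

def train_classifier(patches, labels):
    """Learn classification from a labeled training set of image patches."""
    X = np.array([multi_feature(p) for p in patches])
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    return clf
```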

  8. Framework of Highway Workzone Recognition INPUT: A sequence of highway images. Detection: Localize relevant signs in each image. Tracking: Localize each detected sign in the remaining images before it disappears. Classification: Identify the types of the detected signs. Inference: Infer the current driving region based on the results of sign classification so far. OUTPUT: What is the road condition now? “Normal-highway” or “Work-zone”
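
The slide does not spell out the inference rule; one plausible reading is a running vote over recent per-frame sign classifications, sketched below (the window size, threshold, and sign names are assumptions).

```python
# Sketch: infer the current driving region from the stream of per-frame sign
# classifications. Window size, threshold, and sign names are illustrative assumptions.
from collections import deque

class RegionInference:
    def __init__(self, window=30, threshold=0.3):
        self.recent = deque(maxlen=window)   # 1 if a workzone sign was seen in that frame
        self.threshold = threshold

    def update(self, sign_labels):
        """sign_labels: classified sign types detected in the current frame."""
        workzone_signs = {"ROAD WORK AHEAD", "LANE CLOSED", "END ROAD WORK"}  # hypothetical labels
        self.recent.append(1 if any(s in workzone_signs for s in sign_labels) else 0)
        ratio = sum(self.recent) / max(len(self.recent), 1)
        return "Work-zone" if ratio >= self.threshold else "Normal-highway"
```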

  9. Detection (pipeline diagram): Input image at t → pixel-wise classification → connected-component grouping → non-maximum suppression → log-polar transform → sign classifier
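
A sketch of the detection stages named in the diagram above; the orange-hue rule standing in for the learned pixel-wise classifier, the IoU threshold, and the output patch size are assumptions.

```python
# Sketch: pixel-wise classification -> connected-component grouping ->
# non-maximum suppression -> log-polar transform for the sign classifier.
import cv2
import numpy as np

def detect_sign_candidates(bgr, min_area=100):
    # Pixel-wise classification: a simple orange-hue rule stands in for a learned classifier.
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (5, 120, 80), (25, 255, 255))

    # Connected-component grouping of positive pixels into candidate boxes.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = [stats[i, :4] for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]

    # Non-maximum suppression: keep the larger of any two heavily overlapping boxes.
    def iou(a, b):
        ax, ay, aw, ah = a; bx, by, bw, bh = b
        ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
        iy = max(0, min(ay + ah, by + bh) - max(ay, by))
        inter = ix * iy
        return inter / float(aw * ah + bw * bh - inter + 1e-6)

    boxes.sort(key=lambda b: b[2] * b[3], reverse=True)
    kept = []
    for b in boxes:
        if all(iou(b, k) < 0.3 for k in kept):
            kept.append(b)

    # Log-polar transform of each kept region, to be fed to the sign classifier.
    patches = []
    for x, y, w, h in kept:
        roi = bgr[y:y + h, x:x + w]
        patches.append(cv2.warpPolar(roi, (64, 64), (w / 2.0, h / 2.0),
                                     max(w, h) / 2.0, cv2.WARP_POLAR_LOG))
    return kept, patches
```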

  10. Tracking • The sign image sub-regions from two consecutive image frames overlap each other. • Their appearances and locations vary only slightly.

  11. Tracking (pipeline diagram): Detection at t (pixel-wise classification → connected-component grouping → non-maximum suppression → log-polar transform → sign classifier) yields the target PDF for t+1; at t+1, input image → sub-region sampling → candidate PDFs → candidate localization → choose highest score → target PDF for t+2
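
Slide 2 describes the tracker as kernel-based; a minimal sketch in that spirit scores sampled sub-regions around the previous target location by color-histogram similarity (the sampling grid and the correlation score are assumptions about the details).

```python
# Sketch: track by sampling candidate sub-regions near the previous location and
# keeping the one whose color histogram best matches the target model.
import cv2
import numpy as np

def hsv_hist(bgr, box):
    x, y, w, h = box
    hsv = cv2.cvtColor(bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [16, 8], [0, 180, 0, 256])
    return cv2.normalize(hist, None)

def track(frame_next, prev_box, target_hist, search=12, step=4):
    H_img, W_img = frame_next.shape[:2]
    x0, y0, w, h = prev_box
    best_box, best_score = prev_box, -1.0
    for dy in range(-search, search + 1, step):
        for dx in range(-search, search + 1, step):
            box = (x0 + dx, y0 + dy, w, h)
            if box[0] < 0 or box[1] < 0 or box[0] + w > W_img or box[1] + h > H_img:
                continue
            # Higher histogram correlation = better match to the target model.
            score = cv2.compareHist(target_hist, hsv_hist(frame_next, box),
                                    cv2.HISTCMP_CORREL)
            if score > best_score:
                best_box, best_score = box, score
    return best_box, best_score
```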

  12. Performance of sign detection per frame and of sign classification (tables) • The first row represents precision / recall of ‘detection and tracking’ • The second row represents precision / recall of ‘detection only’
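
For reference, the precision and recall in these tables follow the usual definitions; a toy computation (the counts below are illustrative and merely chosen to reproduce the detection numbers quoted on the next slide):

```python
# Precision / recall from true-positive, false-positive, false-negative counts.
def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Illustrative counts only: 982 matched detections, 18 false alarms, 128 misses
# -> precision ~0.982, recall ~0.885.
print(precision_recall(982, 18, 128))
```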

  13. Workzone Recognition • Created framework for detection and tracking based on color and shape • Achieved detection precision of 98.2% (with recall 88.5%) • Trained classifiers from DOT uniform signage code and from real road imagery • Achieved classification precision for signs of 96.5% (with recall 95.7%) • Demonstrated on-road, real-time recognition of work zones • Given high precision and recall, ‘Highway Workzone Recognition’ is ready for tech transfer.

  14. Framework of Curb Detection (adapted from Highway Workzone Recognition) INPUT: A sequence of roadway images. Detection: Localize relevant curbs in each image. Tracking: Localize the detected curbs in the remaining images. Classification: Identify the types of the detected curbs. Inference: Interpret each detected curb based on the classification results. OUTPUT: Where is the curb? What does this curb mean?

  15. Approach • Extend prior methods (multi-feature classification) • Use visual methods of detection (color, texture, shape) • Learn classification from a large training set • Utilize camera calibration information to understand 3D geometry and change viewpoints • Parallelize for multiple features

  16. Concept • Multiple cameras (rear, side, forward) • Recognize important cues of curbs • Estimate the curb’s position relative to the vehicle • (Potential) Understand appropriate parking spots as defined by the curbs to assist a self-parking system
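
One common way to estimate a curb point's position relative to the vehicle from a calibrated camera is to intersect the pixel's back-projected ray with the road plane; a sketch assuming a pinhole model with known intrinsics K, extrinsics R, t, and a flat ground plane at z = 0 (these symbols are assumptions, not the project's stated formulation):

```python
# Sketch: recover the ground-plane position of an image point using camera calibration.
# Assumes a pinhole camera with intrinsics K and extrinsics R, t (vehicle frame, road at z = 0).
import numpy as np

def pixel_to_ground(u, v, K, R, t):
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction in camera frame
    ray_world = R.T @ ray_cam                            # ray direction in vehicle frame
    cam_center = -R.T @ t                                # camera position in vehicle frame
    s = -cam_center[2] / ray_world[2]                    # intersect with z = 0 plane
    return cam_center + s * ray_world                    # (x, y, 0) relative to the vehicle
```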

  17. Potential Features • 3D structure • Color segmentation • Texture classification • Edge detection

  18. Structure from Motion
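
A minimal two-view structure-from-motion sketch with OpenCV, assuming known intrinsics K; the ORB features and RANSAC threshold are illustrative choices, not the project's pipeline.

```python
# Sketch: two-view structure from motion (relative pose + triangulation, up to scale).
import cv2
import numpy as np

def two_view_sfm(img1, img2, K):
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # Relative camera motion from the essential matrix.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

    # Triangulate correspondences into 3D points.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return R, t, (pts4d[:3] / pts4d[3]).T
```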

  19. Color Segmentation
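
A minimal HSV color-segmentation sketch; the threshold ranges and the input filename are illustrative placeholders, not calibrated values from the project.

```python
# Sketch: HSV thresholding plus morphological cleanup for painted-curb hues.
import cv2
import numpy as np

def segment_color(bgr, lower, upper):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)
    # Opening/closing to remove speckle and fill small gaps.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

frame = cv2.imread("frame.png")                    # hypothetical input frame
yellow_mask = segment_color(frame, (20, 80, 80), (35, 255, 255))  # rough yellow-paint range
```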

  20. Edge Detection (panels: distorted input, undistorted image, edge map, bird’s-eye-view edge map)
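
A sketch of the panel sequence shown on this slide: undistort the raw frame, detect edges, then warp the edge map to a bird's-eye view; the calibration values and the ground quadrilateral are assumed inputs.

```python
# Sketch: undistort -> Canny edges -> perspective warp to a top-down view,
# in which curb edges appear as near-straight lines.
import cv2
import numpy as np

def curb_edges_birdseye(bgr, K, dist_coeffs, src_quad, out_size=(400, 600)):
    undistorted = cv2.undistort(bgr, K, dist_coeffs)
    edges = cv2.Canny(cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY), 50, 150)

    # Map a known ground quadrilateral (image coords, bottom-left/bottom-right/
    # top-right/top-left) to a rectangle of the output size.
    w, h = out_size
    dst_quad = np.float32([[0, h], [w, h], [w, 0], [0, 0]])
    H = cv2.getPerspectiveTransform(np.float32(src_quad), dst_quad)
    birdseye_edges = cv2.warpPerspective(edges, H, out_size)
    return undistorted, edges, birdseye_edges
```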

  21. Texture Classification
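
One way to realize the texture cue is a small Gabor filter bank with a nearest-mean decision; the filter parameters and the two-class road/curb setup are illustrative assumptions.

```python
# Sketch: Gabor-filter-bank texture descriptor and a nearest-mean road/curb decision.
import cv2
import numpy as np

def gabor_descriptor(gray_patch, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    feats = []
    for theta in thetas:
        kernel = cv2.getGaborKernel((15, 15), sigma=3.0, theta=theta,
                                    lambd=8.0, gamma=0.5, psi=0)
        response = cv2.filter2D(np.float32(gray_patch), cv2.CV_32F, kernel)
        feats.extend([response.mean(), response.std()])   # energy per orientation
    return np.array(feats)

def classify_texture(patch, road_mean, curb_mean):
    """road_mean / curb_mean: descriptor means learned from labeled training patches."""
    d = gabor_descriptor(patch)
    return "curb" if np.linalg.norm(d - curb_mean) < np.linalg.norm(d - road_mean) else "road"
```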

  22. Complex Features • Ground plane estimation • To exploit 3D geometry • Edge continuity • To interpret curved curbs • Curb scale and geometry • To utilize the height of the curb above the ground plane
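
A sketch of the ground-plane and curb-height items above: fit a plane to reconstructed 3D points with a simple RANSAC loop, then read curb height as signed distance from that plane; the point source, inlier threshold, and iteration count are assumptions.

```python
# Sketch: RANSAC plane fit to 3D points (e.g., from structure from motion).
import numpy as np

def fit_ground_plane(points, iters=200, inlier_dist=0.03, seed=0):
    rng = np.random.default_rng(seed)
    best_plane, best_inliers = None, 0
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        if np.linalg.norm(normal) < 1e-9:      # degenerate (collinear) sample
            continue
        normal = normal / np.linalg.norm(normal)
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)      # point-to-plane distances
        inliers = np.count_nonzero(dist < inlier_dist)
        if inliers > best_inliers:
            best_plane, best_inliers = (normal, d), inliers
    return best_plane                            # (unit normal, offset)

def height_above_ground(point, plane):
    normal, d = plane
    return point @ normal + d                    # signed distance ~ curb height
```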

  23. Color Classification • Yellow curb zones • Blue curb zones • Normal curb zones • White curb zones • Green curb zones • Red curb zones * Images from LADOT
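
A sketch of how a detected curb region's mean color could be mapped to one of these zone labels; the HSV ranges are rough illustrations of the color conventions, not calibrated thresholds.

```python
# Sketch: assign a curb-zone label from the mean HSV color of a detected curb region.
import cv2
import numpy as np

def classify_curb_color(bgr_curb_region):
    hsv = cv2.cvtColor(bgr_curb_region, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.mean(hsv)[:3]
    if s < 40:                                   # unsaturated: unpainted or white paint
        return "white curb zone" if v > 160 else "normal curb zone"
    if h < 10 or h > 170:
        return "red curb zone"
    if 20 <= h <= 35:
        return "yellow curb zone"
    if 40 <= h <= 85:
        return "green curb zone"
    if 90 <= h <= 130:
        return "blue curb zone"
    return "normal curb zone"
```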

  24. Development Plan • Develop and test simple features • Train classifiers to detect and localize curbs • Evaluate classifier performance • Add complex features • Test and quantify detection and localization performance • Train color classifiers to interpret appropriate parking spots

  25. Questions or Comments? Acknowledgements This project is sponsored by GM-CMU AD-CRL.
