
Real-Time Vision on a Mobile Robot Platform

Presentation Transcript


  1. Real-Time Vision on a Mobile Robot Platform Mohan Sridharan Joint work with Peter Stone The University of Texas at Austin smohan@ece.utexas.edu

  2. Motivation • Computer vision is challenging. • “State-of-the-art” approaches are often not applicable to real systems. • Computational and/or memory constraints. • Focus: efficient algorithms that work in real time on mobile robots.

  3. Overview • Complete vision system developed on a mobile robot. • Challenges to address: • Color segmentation. • Object recognition. • Line detection. • Illumination invariance. • On-board processing – computational and memory constraints.

  4. Test Platform – Sony ERS-7 • 20 degrees of freedom. • Primary sensor – CMOS camera. • IR, touch sensors, accelerometers. • Wireless LAN. • Soccer on a 4.5 x 3 m field – play humans by 2050!

  5. The Aibo Vision System – I/O • Input: Image pixels in the YCbCr color space. • Frame rate: 30 fps. • Resolution: 208 x 160. • Output: Distances and angles to objects. • Constraints: • On-board processing: 576 MHz. • Rapidly varying camera positions.

  6. Robot’s view of the world…

  7. Vision System – Flowchart…

  8. Vision System – Phase 1: Segmentation. • Color segmentation: • Hand-label discrete colors. • Intermediate color maps. • Nearest-neighbor (NNr) weighted average – master color cube. • 128 x 128 x 128 color map – 2 MB.
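
A minimal sketch of what the run-time table lookup could look like, assuming a 128 x 128 x 128 byte cube indexed by the top 7 bits of each YCbCr channel (the array layout and all names are illustrative, not the actual Aibo code):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical color labels for the soccer domain.
enum ColorLabel : uint8_t { UNKNOWN = 0, GREEN, WHITE, ORANGE, YELLOW, BLUE, PINK };

// 128*128*128 entries, one byte each: ~2 MB, as on the slide.
using ColorCube = std::vector<uint8_t>;

// Run-time segmentation is a single table lookup per pixel:
// drop the least significant bit of each 8-bit channel to get a 7-bit index.
inline uint8_t segmentPixel(const ColorCube& cube, uint8_t y, uint8_t cb, uint8_t cr) {
    const int yi = y >> 1, cbi = cb >> 1, cri = cr >> 1;
    return cube[(yi << 14) | (cbi << 7) | cri];
}

// Label an entire 208 x 160 frame (interleaved YCbCr in, one label byte out per pixel).
void segmentFrame(const ColorCube& cube, const uint8_t* ycbcr,
                  uint8_t* labels, int width = 208, int height = 160) {
    for (int i = 0; i < width * height; ++i) {
        labels[i] = segmentPixel(cube, ycbcr[3 * i], ycbcr[3 * i + 1], ycbcr[3 * i + 2]);
    }
}
```

The point of the cube is that per-pixel segmentation costs a single memory access, which is what makes 30 fps feasible on the on-board processor.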

  9. Vision System – Phase 1: Segmentation. • Use a perceptually motivated color space – LAB. • Offline training in LAB – generate an equivalent YCbCr cube.

  10. Vision System – Phase 1: Segmentation.

  11. Vision System – Phase 1: Segmentation. • Use a perceptually motivated color space – LAB. • Offline training in LAB – generate an equivalent YCbCr cube. • Reduce the problem to a table lookup. • Robust performance with shadows, highlights. • Segmentation accuracy: YCbCr – 82%, LAB – 91%.
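
The offline step that makes the YCbCr cube "equivalent" to the LAB training data relies on a standard color-space chain. A hedged sketch, assuming full-range ITU-R BT.601 YCbCr and the sRGB/D65 conversion constants (the camera's actual characteristics may differ):

```cpp
#include <algorithm>
#include <cmath>

struct Lab { double L, a, b; };

// Convert one YCbCr pixel to CIE L*a*b*.
// Assumptions (not from the slides): full-range BT.601 YCbCr, sRGB primaries, D65 white.
Lab ycbcrToLab(double Y, double Cb, double Cr) {
    // YCbCr -> RGB (BT.601, full range).
    double r = Y + 1.402 * (Cr - 128.0);
    double g = Y - 0.344136 * (Cb - 128.0) - 0.714136 * (Cr - 128.0);
    double b = Y + 1.772 * (Cb - 128.0);
    auto clamp01 = [](double v) { return std::min(1.0, std::max(0.0, v / 255.0)); };
    // Undo the sRGB gamma to get linear RGB.
    auto linear = [](double c) {
        return c <= 0.04045 ? c / 12.92 : std::pow((c + 0.055) / 1.055, 2.4);
    };
    double rl = linear(clamp01(r)), gl = linear(clamp01(g)), bl = linear(clamp01(b));
    // Linear RGB -> XYZ (sRGB matrix, D65).
    double X  = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl;
    double Yx = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl;
    double Z  = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl;
    // XYZ -> L*a*b* relative to the D65 white point.
    const double Xn = 0.95047, Yn = 1.0, Zn = 1.08883;
    auto f = [](double t) {
        return t > 0.008856 ? std::cbrt(t) : 7.787 * t + 16.0 / 116.0;
    };
    double fx = f(X / Xn), fy = f(Yx / Yn), fz = f(Z / Zn);
    return { 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz) };
}
// Offline, each of the 128^3 subsampled YCbCr cells can be mapped through
// ycbcrToLab(), labeled against the LAB training data, and written back into
// the YCbCr cube that is used for table lookup at run time.
```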

  12. Sample Images – Color Segmentation.

  13. Sample Video – Color Segmentation.

  14. Some Problems… • Sensitive to illumination. • Frequent re-training. • Robot needs to detect and adapt to change. • Off-board color labeling – time-consuming. • Autonomous color learning is possible…

  15. Vision System – Phase 2: Blobs. • Run-length encoding. • Starting point, length in pixels. • Region merging. • Combine run-lengths of the same color. • Maintain properties: pixels, runs. • Bounding boxes. • Abstract representation – four corners. • Maintain properties for further analysis.
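
A sketch of the run-length encoding and of the connectivity test behind region merging, under assumed data structures (the real system also tracks per-region properties such as pixel counts and bounding boxes, omitted here):

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// One horizontal run of identically labeled pixels: starting column and
// length in pixels, as described on the slide.
struct Run {
    int row;
    int startCol;
    int length;
    uint8_t color;
};

// Encode one row of the segmented label image into runs.
std::vector<Run> encodeRow(const uint8_t* labels, int width, int row) {
    std::vector<Run> runs;
    int start = 0;
    for (int x = 1; x <= width; ++x) {
        if (x == width || labels[x] != labels[start]) {
            runs.push_back({row, start, x - start, labels[start]});
            start = x;
        }
    }
    return runs;
}

// Two runs belong to the same region if they share a color and overlap
// horizontally on adjacent rows; region merging links such runs and keeps
// aggregate properties per region.
bool runsConnect(const Run& a, const Run& b) {
    return a.color == b.color && std::abs(a.row - b.row) == 1 &&
           a.startCol < b.startCol + b.length && b.startCol < a.startCol + a.length;
}
```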

  16. Sample Images – Blob Detection.

  17. Vision System – Phase 2: Objects. • Object Recognition. • Heuristics on size, shape and color. • Previously stored bounding box properties. • Domain knowledge. • Remove spurious blobs. • Distances and angles: known geometry.
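
One way the "known geometry" step can be realized is a pinhole-camera estimate from the bounding box of an object whose real size is known. The constants below (focal length in pixels, image center) are illustrative placeholders, not the calibrated ERS-7 values:

```cpp
#include <cmath>

// Illustrative camera constants (NOT the calibrated ERS-7 values).
constexpr double kFocalLengthPx = 200.0;   // focal length expressed in pixels
constexpr double kImageCenterX  = 104.0;   // half of the 208-pixel width

// Distance to an object of known real height from its height in the image,
// using the pinhole model: distance = f * realHeight / pixelHeight.
double distanceFromHeight(double realHeightMm, double pixelHeight) {
    return kFocalLengthPx * realHeightMm / pixelHeight;
}

// Horizontal bearing to the bounding-box center, relative to the camera axis
// (still has to be combined with the robot's head pan/tilt angles).
double bearingFromColumn(double centerCol) {
    return std::atan2(centerCol - kImageCenterX, kFocalLengthPx);
}
```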

  18. Sample Images – Objects.

  19. Vision System – Phase 3: Lines. • Popular approaches (Hough transform, convolution kernels) are computationally expensive. • Domain knowledge. • Scan lines – green-white transitions – candidate edge pixels.
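
A sketch of the scan-line idea: walk sparse vertical scan lines through the segmented image and keep only green-to-white transitions as candidate field-line pixels. The label values and the sampling step are assumptions for illustration:

```cpp
#include <cstdint>
#include <vector>

struct EdgePixel { int x, y; };

// Walk vertical scan lines (every 'step' columns) from bottom to top of the
// segmented label image and record green-to-white transitions as candidate
// field-line pixels. GREEN and WHITE are labels from color segmentation.
std::vector<EdgePixel> findLinePixels(const uint8_t* labels, int width, int height,
                                      uint8_t GREEN, uint8_t WHITE, int step = 4) {
    std::vector<EdgePixel> candidates;
    for (int x = 0; x < width; x += step) {
        for (int y = height - 1; y > 0; --y) {
            uint8_t below = labels[y * width + x];
            uint8_t above = labels[(y - 1) * width + x];
            if (below == GREEN && above == WHITE) {
                candidates.push_back({x, y - 1});
            }
        }
    }
    return candidates;
}
```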

  20. Vision System – Phase 3: Lines. • Incremental least-squares fit for lines. • Efficient and easy to implement. • Reasonably robust to noise. • Lines provide orientation information. • Line intersections can be used as markers. • Inputs to localization. • Ambiguity removed through prior position knowledge.
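
The incremental least-squares fit only needs a handful of running sums, which is why it is cheap enough for on-board use. A sketch (near-vertical lines would need a fit of x on y, which is omitted here):

```cpp
#include <cstddef>

// Incremental least-squares fit of y = m*x + c.
// Adding a point only updates running sums, so the fit can be grown one
// candidate edge pixel at a time as the scan lines are processed.
class IncrementalLineFit {
public:
    void addPoint(double x, double y) {
        ++n_; sx_ += x; sy_ += y; sxx_ += x * x; sxy_ += x * y;
    }
    bool solve(double& slope, double& intercept) const {
        double denom = n_ * sxx_ - sx_ * sx_;
        if (n_ < 2 || denom == 0.0) return false;   // degenerate / vertical line
        slope = (n_ * sxy_ - sx_ * sy_) / denom;
        intercept = (sy_ - slope * sx_) / n_;
        return true;
    }
    std::size_t count() const { return n_; }
private:
    std::size_t n_ = 0;
    double sx_ = 0, sy_ = 0, sxx_ = 0, sxy_ = 0;
};
```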

  21. Sample Images – Objects + Lines.

  22. Some Problems… • System needs to be re-calibrated: • Illumination changes. • Natural light variations: day/night. • Re-calibration is very time-consuming. • More than an hour spent each time… • Cannot achieve overall goal – play humans. • That is not happening anytime soon, but still…

  23. Illumination Sensitivity – Samples. • Trained under one illumination: • Under different illumination:

  24. Illumination Sensitivity – Movie…

  25. Illumination Invariance - Approach. • Three discrete illuminations – bright, intermediate, dark. • Training: • Performed offline. • Color map for each illumination. • Normalized RGB (rgb – use only rg) sample distributions for each illumination.
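
The normalized rg sample distributions can be stored as small 2-D histograms. A hedged sketch of how one might be accumulated from training pixels; the bin resolution and the epsilon floor are illustrative choices, not values from the paper:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// A 2-D histogram over normalized chromaticity, r = R/(R+G+B), g = G/(R+G+B).
// Dropping b (= 1 - r - g) removes one dimension and most of the brightness
// dependence, which keeps the representation compact.
struct RgHistogram {
    static constexpr int kBins = 64;              // illustrative resolution
    std::vector<double> bins = std::vector<double>(kBins * kBins, 0.0);
    double total = 0.0;

    void addPixel(uint8_t R, uint8_t G, uint8_t B) {
        double sum = double(R) + G + B;
        if (sum <= 0.0) return;
        int ri = std::min(kBins - 1, int(R / sum * kBins));
        int gi = std::min(kBins - 1, int(G / sum * kBins));
        bins[ri * kBins + gi] += 1.0;
        total += 1.0;
    }
    // Normalize to a probability distribution, with a small floor so that
    // empty bins do not break the KL divergence used at test time.
    void normalize(double eps = 1e-6) {
        for (double& b : bins) b = (b + eps) / (total + eps * bins.size());
    }
};
```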

  26. Illumination Invariance – Training. • Illumination: bright – color map

  27. Illumination Invariance – Training. • Illumination: bright – map and distributions.

  28. Illumination Invariance – Training.

  29. Illumination Invariance – Testing.

  30. Illumination Invariance – Testing.

  31. Illumination Invariance – Testing.

  32. Illumination Invariance – Testing.

  33. Illumination Invariance – Testing. • Testing – KL divergence as a distance measure: • Robust to artifacts. • Performed on-board the robot, about once a second. • Parameter estimation described in the paper. • Works for conditions not trained for… • Paper has numerical results.
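
A sketch of the test-time comparison, assuming the stored per-illumination distributions and the current image's distribution are normalized rg histograms as above (function and variable names are illustrative):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// KL divergence D(p || q) between two discrete distributions of equal size.
// Both are assumed normalized and strictly positive (see the epsilon floor
// applied when the histograms are built).
double klDivergence(const std::vector<double>& p, const std::vector<double>& q) {
    double d = 0.0;
    for (std::size_t i = 0; i < p.size(); ++i) {
        d += p[i] * std::log(p[i] / q[i]);
    }
    return d;
}

// Pick the training illumination whose stored rg distribution is closest
// (smallest divergence) to the distribution of the current image; the robot
// can then switch to that illumination's color map.
int closestIllumination(const std::vector<std::vector<double>>& trained,
                        const std::vector<double>& current) {
    int best = 0;
    double bestDist = klDivergence(current, trained[0]);
    for (std::size_t i = 1; i < trained.size(); ++i) {
        double d = klDivergence(current, trained[i]);
        if (d < bestDist) { bestDist = d; best = static_cast<int>(i); }
    }
    return best;
}
```

Because the histograms are small, this comparison is cheap enough to run roughly once a second on the robot, as the slide notes.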

  34. Adapting to Illumination changes – Video

  35. Some Related Work… • CMU vision system: basic implementation. • James Bruce et al., IROS 2000. • German Team vision system: scan lines. • Röfer et al., RoboCup 2003. • Mean shift: color segmentation. • Comaniciu and Meer, PAMI 2002.

  36. Conclusions • A complete real-time vision system – on-board processing. • Implemented new/modified versions of vision algorithms. • Good performance on challenging problems: segmentation, object recognition and illumination invariance.

  37. Future Work… • Autonomous color learning. • AAAI-05 paper available online. • Working in more general environments, outside the lab. • Automatic detection of and adaptation to illumination changes. • Still a long way to go to play humans.

  38. Autonomous Color Learning – Video • More videos online • www.cs.utexas.edu/~AustinVilla/

  39. THAT’S ALL FOLKS! www.cs.utexas.edu/~AustinVilla/

  40. Question – 1: So, what is new?? • Robust color space for segmentation. • Domain-specific object recognition + line detection. • Towards illumination invariance. • Complete vision system – closed loop. • Accept – cannot compare with other teams, but overall performance good at competitions…

  41. Vision – 1: Why LAB?? • Robust color space for segmentation. • Perceptually motivated. • Tackles minor changes – shadows, highlights. • Used in robot rescue…

  42. Vision – 2: Edge pixels + Least Squares?? • Conventional approaches are time-consuming. • Scan lines are faster: • Reduce the number of colors that need bounding boxes. • Least squares is easier to implement – and fast too. • Accept – have not compared with any other method…

  43. Vision – 3: Normalized RGB?? • YCbCr separates luminance – but does not work well in practice on the Aibo. • Normalized RGB (rgb): • Reduces the number of dimensions – less storage. • More robust to minor variations. • Accept – have only compared against YCbCr – LAB also works but needs more storage and computation…

  44. Illumination Invariance – Training.

  45. Illumination Invariance – Testing.
