
CmpE-537 Computer Vision








  1. Term Project: Color and Illumination Independent Landmark Detection for Robot Soccer Domain. By Tekin Meriçli, Artificial Intelligence Laboratory, Department of Computer Engineering, Boğaziçi University. 27/12/2007. CmpE-537 Computer Vision

  2. Outline • Introduction • Related Work • Proposed Approach • Experimental Setup • Results • Conclusion • References

  3. Introduction • Three fundamental questions of mobile robotics: • “Where am I?” • “Where am I going?” • “How can I get there?” • The aim of this project is to answer the first question for the robot soccer domain • Specifically, the RoboCup Standard Platform League (formerly the Four-Legged League) • Robots with vision sensors (i.e., cameras) are used

  4. Introduction

  5. Introduction • All important objects on the field, that is, the ball, the beacons, and the goals, are color-coded • This makes the vision, and hence localization, modules highly dependent on illumination • The robots may fail to detect the beacons at all, or miscalculate their distances and orientations to the beacons, if there is even a small change in the illumination level • The main motivation is to make the vision / localization processes color and illumination independent in the Standard Platform League domain

  6. Related Work • Color / illumination-dependent approach • Color segmentation / pixel classification on the image • Connected component analysis to build regions • Sanity checks to remove noise and illogical perceptions • aspect ratio, minimum area, etc. • Most of the RoboCup teams use this approach [1–4]
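
To make the conventional pipeline concrete, here is a minimal Python/OpenCV sketch of the three steps listed above. The HSV thresholds, minimum area, and aspect-ratio limit are hypothetical placeholders; in practice each team calibrates them for the current lighting, which is exactly the fragility this project targets.

```python
import cv2
import numpy as np

def detect_colored_regions(bgr_image,
                           hsv_lo=(100, 120, 80), hsv_hi=(130, 255, 255),
                           min_area=50, max_aspect=3.0):
    """Color-segmentation pipeline sketch: classify pixels, build
    connected components, then apply sanity checks. All thresholds
    are illustrative placeholders."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Pixel classification: binary mask for one color class
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    # Connected component analysis to build candidate regions
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    regions = []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        # Sanity checks: drop tiny blobs and implausible aspect ratios
        if area < min_area or max(w, h) / max(min(w, h), 1) > max_aspect:
            continue
        regions.append((x, y, w, h))
    return regions
```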

  7. Related Work • Feature detection / recognition based approach • Used for simultaneous localization and mapping (SLAM) purposes [8–12] • Scale-invariant feature transform (SIFT) can be used in algorithms for tasks like matching different views of an object or scene (e.g. for stereo vision) and object recognition [7] • SURF, which stands for Speeded-Up Robust Features, approximates SIFT [6]
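
As an illustration of what SURF extraction looks like in code, here is a brief sketch using OpenCV. Note that SURF ships in the opencv-contrib xfeatures2d module and may be absent from default builds for patent reasons; the Hessian threshold of 400 and the file name are assumed values, not from the slides.

```python
import cv2

# Load a grayscale view of the scene (hypothetical file name)
img = cv2.imread("landmark.png", cv2.IMREAD_GRAYSCALE)

# SURF detector/descriptor; hessianThreshold=400 is an assumed setting
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
keypoints, descriptors = surf.detectAndCompute(img, None)

# Each descriptor is a 64-dimensional vector (128 in extended mode),
# assuming the image yields at least one keypoint
print(len(keypoints), descriptors.shape)  # e.g. N, (N, 64)
```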

  8. Proposed Approach • The image labeling process used in the color segmentation-based approach is replaced with region labeling, in which the landmarks and their immediate surroundings are covered • The robot is placed at a location where it can see the landmark, and a region is then selected around the landmark to specify where the robot should find the SURF features and associate them with that particular landmark
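
A minimal sketch of this region-labeling step, assuming a user-supplied bounding box: only SURF features whose keypoints fall inside the box are kept and tagged with the landmark's identity. The function name and arguments are hypothetical.

```python
import cv2

def label_region_features(gray_image, box, landmark_id, surf):
    """Keep only the SURF features inside a hand-selected box around
    the landmark and tag them with the landmark's label. The box is
    (x, y, w, h) in pixel coordinates, drawn by the user."""
    x, y, w, h = box
    keypoints, descriptors = surf.detectAndCompute(gray_image, None)
    if descriptors is None:  # no features found in this image
        return []
    labeled = []
    for kp, desc in zip(keypoints, descriptors):
        px, py = kp.pt
        if x <= px <= x + w and y <= py <= y + h:
            labeled.append((desc, landmark_id))  # descriptor -> landmark
    return labeled
```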

  9. Proposed Approach

  10. Proposed Approach • This process is repeated for all landmarks on the soccer field from different angles and distances • Supervised learning is used to learn the associations between the feature descriptors and the landmarks • The distance values for landmarks are calculated using the inter-feature distances
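
The slides do not name the specific learner, so, as one plausible realization, here is a nearest-neighbor classifier over the 64-dimensional SURF descriptors collected during region labeling; the rejection threshold is a placeholder value.

```python
import numpy as np

class LandmarkClassifier:
    """Nearest-neighbor sketch of the descriptor-to-landmark
    association; the actual learner used is not specified in the
    slides, so this is an assumption."""

    def __init__(self, labeled_features):
        # labeled_features: list of (64-D descriptor, landmark_id) pairs
        self.descs = np.array([d for d, _ in labeled_features])
        self.labels = [l for _, l in labeled_features]

    def classify(self, query_desc, max_dist=0.3):
        # Euclidean distance in the 64-D descriptor space
        dists = np.linalg.norm(self.descs - query_desc, axis=1)
        i = int(np.argmin(dists))
        # Reject queries far even from their nearest neighbor;
        # 0.3 is a placeholder threshold for unit-normalized SURF
        return self.labels[i] if dists[i] < max_dist else None
```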

  11. Experimental Setup • A real Aibo ERS-7 robot is placed on the field facing a particular landmark at different angles and distances to take pictures • An offline visualizer tool is implemented to show the SURF points on the image and to run tests on various images

  12. Experimental Setup

  13. Experimental Setup • SURF points are shown as small circles • Details of the descriptors are listed in the text area • Similar feature points are observed on different images even though the distances and angles are different • Similarity is defined as the distance between feature points in the 64-dimensional descriptor space
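
This similarity measure is what standard descriptor matching implements. Below is a sketch of brute-force matching under the L2 norm with Lowe's ratio test [7] to discard ambiguous matches; the 0.7 ratio is a conventional value, not one taken from the slides.

```python
import cv2

def match_features(desc_a, desc_b, ratio=0.7):
    """Match two sets of SURF descriptors by Euclidean distance in
    64-D space, keeping only matches that pass the ratio test."""
    bf = cv2.BFMatcher(cv2.NORM_L2)
    matches = bf.knnMatch(desc_a, desc_b, k=2)  # two nearest neighbors
    good = []
    for pair in matches:
        # Keep a match only if it is clearly better than the runner-up
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return good
```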

  14. Experimental Setup • The first step is to process the training images and define the landmark regions by clicking on the image • The next step is to run test images to check whether the landmark in the image is recognized and whether the distance and angle estimates are correct
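
Tying the sketches above together, here is a hypothetical end-to-end usage of the training and test steps; label_region_features and LandmarkClassifier are the helper sketches defined earlier, and the file names, box coordinates, and landmark label are placeholders.

```python
import cv2

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
train_img = cv2.imread("train_view.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
test_img = cv2.imread("test_view.png", cv2.IMREAD_GRAYSCALE)    # placeholder file

# Training step: tag the features inside a user-drawn box (illustrative values)
labeled = label_region_features(train_img, box=(120, 40, 60, 90),
                                landmark_id="yellow_beacon", surf=surf)
clf = LandmarkClassifier(labeled)

# Test step: classify each feature in the test view, assuming features exist
kps, descs = surf.detectAndCompute(test_img, None)
votes = [clf.classify(d) for d in descs]
hits = [v for v in votes if v is not None]
if hits:
    # Majority vote over the per-feature classifications
    print(max(set(hits), key=hits.count))
```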

  15. Results • SURF computation took an average of 56 ms on 354x290 images • Aibo robots capture 208x160 images but have a slower processor; hence, SURF computation takes 59 ms on average, which corresponds to approximately 17 fps • Landmark recognition performance was better than distance estimation • Due to the cylindrical shape of the landmarks, some feature points may be closer to or farther from each other depending on the viewing angle, or may be hidden entirely • Doing the computations on groups of feature points rather than on individual points may improve performance

  16. Conclusion • A feature-based landmark detection approach is explored • It runs at a reasonable frame rate • The main contribution is that this approach provides color (and, to some extent, illumination) independence in the vision and localization processes in the robot soccer domain • It has not been tried by any of the RoboCup teams so far • Trying different SURF parameters and running experiments on physical robots are left as future work

  17. References • [1] H. L. Akın et al. “Cerberus 2006 Team Report”, 2006. • [2] K. Kaplan, B. Celik, T. Mericli, C. Mericli, and H. L. Akın. “Practical Extensions to Vision-Based Monte Carlo Localization Methods for Robot Soccer Domain”. In RoboCup International Symposium 2005, Osaka, July 18-19, 2005. • [3] P. Stone, P. Fidelman, N. Kohl, G. Kuhlmann, T. Mericli, M. Sridharan, and S. Yu. “The UT Austin Villa 2006 RoboCup Four-Legged Team”. Technical Report UT-AI-TR-06-337, The University of Texas at Austin, Department of Computer Sciences, AI Laboratory, 2006. • [4] M. J. Quinlan et al. “The 2006 NUbots Team Report”, 2007. • [5] T. Röfer et al. “GermanTeam2006”, 2006. • [6] H. Bay, T. Tuytelaars, and L. J. Van Gool. “SURF: Speeded Up Robust Features”. In ECCV’06, pp. 404-417, 2006. • [7] D. G. Lowe. “Distinctive Image Features from Scale-Invariant Keypoints”. International Journal of Computer Vision, 60(2), pp. 91-110, 2004.

  18. References • [8] M. Ballesta, A. Gil, O. Martínez Mozos, and O. Reinoso. “Local Descriptors for Visual SLAM”. In Proc. of the Workshop on Robotics and Mathematics, Coimbra, Portugal, 2007. • [9] T. D. Barfoot. “Online Visual Motion Estimation using FastSLAM with SIFT Features”. In Proc. of the Int. Conf. on Intelligent Robots and Systems (IROS), Edmonton, Alberta, August 2-6, 2005. • [10] P. Elinas and J. J. Little. “Stereo Vision SLAM: Near Real-Time Learning of 3D Point-Landmark and 2D Occupancy-Grid Maps Using Particle Filters”. In IROS’07, 2007. • [11] J. Little, S. Se, and D. G. Lowe. “Vision-Based Mobile Robot Localization and Mapping Using Scale-Invariant Features”. In IEEE Int. Conf. on Robotics and Automation, 2001. • [12] O. Martínez Mozos, A. Gil, M. Ballesta, and O. Reinoso. “Interest Point Detectors for Visual SLAM”. In Lecture Notes in Artificial Intelligence, vol. 4788, 2007.

  19. ?
