
Color Learning and Illumination Adaptation on Robots


Presentation Transcript


  1. Color Learning and Illumination Adaptation on Robots Mohan Sridharan Texas Tech University mohan.sridharan@ttu.edu CS5331: Autonomous Mobile Robots

  2. Outline • Standard color segmentation. • Color Learning. • Illumination adaptation. • Videos. CS5331: Autonomous Mobile Robots

  3. Robot Vision – Flowchart CS5331: Autonomous Mobile Robots

  4. Supervised Learning Approach • Assign color labels to all 256*256*256 possible pixel values: the Color Map. • Hand-label samples of each discrete color. • Locally weighted averaging generalizes the hand-labeled samples to the full color map (see the sketch below). CS5331: Autonomous Mobile Robots
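A minimal sketch of this pipeline in Python/NumPy, assuming a YCbCr-like 3-channel color space and a handful of hand-labeled samples; the sample values, labels, bandwidth, and coarsened grid (used so the sketch runs quickly, instead of the full 256*256*256 table) are all illustrative choices, not the parameters of the original system.

```python
import numpy as np

# Hypothetical hand-labeled samples: (Y, Cb, Cr) pixel values with color labels.
samples = np.array([[200.0, 128.0, 128.0],   # white
                    [ 80.0, 100.0,  90.0],   # green
                    [100.0,  90.0, 200.0]])  # orange
labels = np.array([0, 1, 2])                 # 0 = white, 1 = green, 2 = orange

def label_pixel(pixel, bandwidth=40.0):
    """Locally weighted vote over the hand-labeled samples."""
    d2 = np.sum((samples - pixel) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))        # nearer samples weigh more
    return int(np.argmax(np.bincount(labels, weights=w)))

# Generalize to a full lookup table (coarse 16-step grid here; the slides
# build the full 256*256*256 table offline).
step = 16
grid = np.arange(0, 256, step, dtype=float)
color_map = np.empty((len(grid),) * 3, dtype=np.uint8)
for i, y in enumerate(grid):
    for j, cb in enumerate(grid):
        for k, cr in enumerate(grid):
            color_map[i, j, k] = label_pixel(np.array([y, cb, cr]))

# Run-time segmentation is then a single table lookup per pixel:
print(color_map[205 // step, 130 // step, 126 // step])   # -> 0 (white)
```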

  5. Some Challenges… • The system needs to be re-calibrated when: • Illumination changes. • Natural light varies: day/night. • Trained for one illumination (a, b), tested under another (c, d). [Example images (a)–(d) omitted.] • Re-calibration is very time consuming: • More than an hour spent each time… • Cannot achieve autonomous operation. CS5331: Autonomous Mobile Robots

  6. Layered Color Precision • Detect useful patterns along scan-lines. • Maintain three color maps: • Layer 1: Rough classification of green and white. • Layer 2: Colors classified in relation to green; overlap allowed. • Layer 3: Complete look-up table created, with clear class boundaries. • Each successive layer increases precision and models a larger number of colors. CS5331: Autonomous Mobile Robots

  7. Layered Color Precision • Update the distribution of “field” green when illumination changes. • Adjust the other color distributions based on the change in the green distribution (a rough sketch follows below). • Limitations: • Colors represented as cuboids. • Different color distributions react differently to illumination changes. • Runs in real time, but accuracy is not as good as with a hand-labeled color map. CS5331: Autonomous Mobile Robots
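A rough sketch of that adaptation step, assuming (as the slide notes) that each color is stored as an axis-aligned cuboid in color space; when the observed "field" green shifts under new lighting, every other cuboid is shifted by the same offset. The cuboid values and the uniform-shift rule are illustrative stand-ins, not the exact update from the paper.

```python
import numpy as np

# Each color is a cuboid: (lower corner, upper corner) in YCbCr space (illustrative).
cuboids = {
    "green":  (np.array([ 40.0,  90.0,  80.0]), np.array([ 90.0, 120.0, 110.0])),
    "white":  (np.array([180.0, 118.0, 118.0]), np.array([255.0, 138.0, 138.0])),
    "orange": (np.array([ 70.0,  90.0, 170.0]), np.array([140.0, 120.0, 230.0])),
}

def adapt_to_green_shift(cuboids, new_green_mean):
    """Shift every cuboid by the observed change in the green cuboid's centre."""
    lo, hi = cuboids["green"]
    offset = new_green_mean - (lo + hi) / 2.0
    return {name: (lo + offset, hi + offset) for name, (lo, hi) in cuboids.items()}

# Example: field pixels now look darker and slightly shifted under new lighting.
adapted = adapt_to_green_shift(cuboids, np.array([55.0, 105.0, 100.0]))
print(adapted["white"][0])   # white cuboid shifted by the same offset as green
```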

  8. Bayesian Color Estimation • Hierarchical Bayesian color model: • Gaussian priors. • Joint posterior over robot position and environmental illumination. • The mean image color represents the current illumination. • The posterior over illuminations is modeled as a Gaussian (see the sketch below): CS5331: Autonomous Mobile Robots
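The equation from the slide is not reproduced in the transcript. One plausible reconstruction, assuming the mean image color $\bar{c}_t$ is the illumination cue and each illumination $i$ has Gaussian parameters $(\mu_i, \Sigma_i)$ (symbols chosen here, not taken from the paper):

\[
p(\bar{c}_t \mid i) \;\propto\; \exp\!\Big(-\tfrac{1}{2}\,(\bar{c}_t - \mu_i)^{\top}\Sigma_i^{-1}(\bar{c}_t - \mu_i)\Big),
\]

with the posterior over illuminations then following from Bayes' rule and the Gaussian priors.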

  9. Bayesian Color Estimation • The joint posterior decomposes elegantly (see below): • Rao-Blackwellised Particle Filter (RBPF): • Particle filtering (samples) for robot pose estimation. • Kalman filtering (Gaussians) for illumination estimation given a robot pose. CS5331: Autonomous Mobile Robots
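A hedged sketch of the decomposition, writing $x_{1:t}$ for the robot pose trajectory, $i_t$ for the illumination, and $z_{1:t}$ for the observations (notation chosen here); the key point is that, conditioned on a sampled pose trajectory, the illumination factor stays Gaussian:

\[
p(x_{1:t}, i_t \mid z_{1:t}) \;=\;
\underbrace{p(i_t \mid x_{1:t}, z_{1:t})}_{\text{Kalman filter per particle}}\;
\underbrace{p(x_{1:t} \mid z_{1:t})}_{\text{particle filter}} .
\]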

  10. RBPF • Posterior represented as a set of weighted particles. • Motion update: new pose sampled based on robot motion. • Observation update: likelihood of each particle given an observation. • Kalman filter update: illumination estimate updated using the mean image color vector. • Re-sampling: particles replicated in proportion to their weights. (A sketch of one cycle follows below.) CS5331: Autonomous Mobile Robots
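A compact sketch of one RBPF cycle in the order listed above. The linear motion model, the scalar illumination state (the slides use the mean image vector), and the use of the illumination likelihood as the particle weight are simplifying assumptions for illustration, not the models from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                               # number of particles

# Each particle carries a pose sample plus a Gaussian over illumination
# (here a scalar "mean image brightness" for brevity).
poses = rng.normal(0.0, 1.0, size=(N, 3))             # (x, y, theta), illustrative
illum_mean = np.full(N, 128.0)
illum_var = np.full(N, 25.0)
weights = np.full(N, 1.0 / N)

def rbpf_step(odometry, mean_image_value, meas_var=16.0, motion_noise=0.05):
    global poses, illum_mean, illum_var, weights
    # 1. Motion update: propagate each pose sample through a noisy motion model.
    poses[:] = poses + odometry + rng.normal(0.0, motion_noise, size=poses.shape)
    # 2. Observation update: weight each particle by the predictive likelihood
    #    of the observed mean image value under its illumination Gaussian.
    resid = mean_image_value - illum_mean
    weights[:] = weights * np.exp(-0.5 * resid ** 2 / (illum_var + meas_var))
    weights[:] = weights / weights.sum()
    # 3. Kalman filter update of each particle's illumination estimate.
    gain = illum_var / (illum_var + meas_var)
    illum_mean[:] = illum_mean + gain * resid
    illum_var[:] = (1.0 - gain) * illum_var
    # 4. Re-sampling: replicate particles in proportion to their weights.
    idx = rng.choice(N, size=N, p=weights)
    poses[:], illum_mean[:], illum_var[:] = poses[idx], illum_mean[idx], illum_var[idx]
    weights[:] = 1.0 / N

rbpf_step(odometry=np.array([0.1, 0.0, 0.02]), mean_image_value=130.0)
```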

  11. Bayesian Color Estimation • Elegant RBPF decomposition. • Limitations: • Requires prior knowledge to generate the Gaussian parameters and a priori probabilities. • Applied to limited illumination conditions (only two in the paper!). • Does not exploit domain knowledge for autonomous operation. • Does not run in real time. • Figure 5? • Kalman filters? Particle filters? • More details available in Probabilistic Robotics. CS5331: Autonomous Mobile Robots

  12. Planned Color Learning • Disjunctive color model: 3D Gaussian or 3D histogram. • Model selected image pixels (1x3 vectors). • Gaussian: • Low storage, easy generalization. • Not suitable for multi-modal color distributions. • Histogram: • Higher storage. • Suitable for multi-modal distributions. • Other models (e.g. mixture of Gaussians) feasible. • Small set of complementary models with a good balance of storage and computation. • Robot selects the suitable model per color: goodness-of-fit bootstrap test with a KL-divergence distance measure (sketched below). CS5331: Autonomous Mobile Robots
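A hedged sketch of that model-selection step, using a one-dimensional stand-in for the pixel samples of a single color; the bin count, bootstrap size, and 95% cutoff are illustrative choices, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) between two discrete distributions on the same bins."""
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def choose_model(pixels, bins=32, n_boot=200):
    """Pick 'gaussian' or 'histogram' for one color's pixel samples (1D stand-in)."""
    edges = np.linspace(0, 255, bins + 1)
    emp, _ = np.histogram(pixels, bins=edges, density=True)

    # Discretise a Gaussian fitted to the samples onto the same bins.
    mu, sigma = pixels.mean(), pixels.std() + 1e-6
    centers = 0.5 * (edges[:-1] + edges[1:])
    gauss = np.exp(-0.5 * ((centers - mu) / sigma) ** 2)

    observed = kl_divergence(emp, gauss)

    # Bootstrap: KL distances expected if the samples really were Gaussian.
    boot = []
    for _ in range(n_boot):
        sim = rng.normal(mu, sigma, size=len(pixels))
        sim_hist, _ = np.histogram(sim, bins=edges, density=True)
        boot.append(kl_divergence(sim_hist, gauss))
    # Keep the Gaussian unless the observed distance is unusually large.
    return "gaussian" if observed <= np.quantile(boot, 0.95) else "histogram"

# Unimodal samples -> Gaussian; bimodal samples -> histogram (illustrative data).
print(choose_model(rng.normal(120, 10, 500)))
print(choose_model(np.concatenate([rng.normal(60, 8, 250), rng.normal(190, 8, 250)])))
```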

  13. Planned Color Learning • Determine a sequence of poses from which to learn colors. • Limited field-of-view: the robot has to move to learn all colors. • Goal: Maximize color learning opportunities and minimize localization error. • Smaller motions reduce localization errors. • Larger targets increase color learning opportunities. CS5331: Autonomous Mobile Robots

  14. Planned Color Learning • Learn Motion Error Model (MEM). • Error for a desired motion, given certain color knowledge. • Learn Feasibility Model (FM). • Probability of learning each color at each pose, given certain color knowledge. • Find path with highest probability of success. • Maximize color learning while minimizing localization errors. CS5331: Autonomous Mobile Robots
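A toy sketch of this planning step: score candidate pose sequences by the product of the motion-error and feasibility model probabilities and keep the best. The pose names, probability values, and independence assumption are placeholders, not values from the learned MEM/FM.

```python
import itertools

# Hypothetical learned models:
# feasibility[(pose, color)] = probability of learning `color` from `pose`.
# motion_ok[(a, b)] = probability the move a -> b keeps localization error small.
poses = ["goal_blue", "goal_yellow", "center"]
colors = ["blue", "yellow", "green"]
feasibility = {("goal_blue", "blue"): 0.9, ("goal_yellow", "yellow"): 0.85,
               ("center", "green"): 0.95}
motion_ok = {("start", "center"): 0.95, ("start", "goal_blue"): 0.6,
             ("start", "goal_yellow"): 0.6, ("center", "goal_blue"): 0.8,
             ("center", "goal_yellow"): 0.8, ("goal_blue", "goal_yellow"): 0.7,
             ("goal_yellow", "goal_blue"): 0.7, ("goal_blue", "center"): 0.8,
             ("goal_yellow", "center"): 0.8}

def path_success(order):
    """Probability of learning all colors along a pose order (independence assumed)."""
    p, prev, learned = 1.0, "start", set()
    for pose in order:
        p *= motion_ok.get((prev, pose), 0.5)
        for color in colors:
            q = feasibility.get((pose, color), 0.0)
            if color not in learned and q > 0:
                p *= q
                learned.add(color)
        prev = pose
    return p if learned == set(colors) else 0.0

best = max(itertools.permutations(poses), key=path_success)
print(best, path_success(best))
```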

  15. Illumination Representation • Color Map. • Distributions in color space. • Distribution of distances between color space distributions. • Jensen-Shannon measure. CS5331: Autonomous Mobile Robots
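A small sketch of the Jensen-Shannon measure used to compare color-space distributions; the example histograms are made-up values.

```python
import numpy as np

def jensen_shannon(p, q, eps=1e-9):
    """Symmetric Jensen-Shannon divergence between two discrete distributions."""
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Example: normalized color histograms under two illuminations (illustrative).
bright = [0.1, 0.3, 0.4, 0.2]
dark = [0.4, 0.3, 0.2, 0.1]
print(jensen_shannon(bright, dark))   # 0 for identical distributions, larger otherwise
```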

  16. Minor Illumination Changes • Adaptation: • Combine existing and learned distributions – merged estimate. • Gaussians: Kalman filter observation update. • Histograms: weighted averaging. CS5331: Autonomous Mobile Robots
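A minimal sketch of the two merge rules, assuming a scalar Gaussian per color and normalized histograms; the blending weight and the example numbers are illustrative.

```python
import numpy as np

def merge_gaussian(mean, var, obs_mean, obs_var):
    """Kalman-style observation update: fuse stored and newly observed Gaussians."""
    k = var / (var + obs_var)
    return mean + k * (obs_mean - mean), (1.0 - k) * var

def merge_histogram(stored, observed, alpha=0.1):
    """Weighted average of stored and newly observed normalized histograms."""
    stored, observed = np.asarray(stored, float), np.asarray(observed, float)
    merged = (1.0 - alpha) * stored + alpha * observed
    return merged / merged.sum()

print(merge_gaussian(mean=120.0, var=25.0, obs_mean=110.0, obs_var=25.0))
print(merge_histogram([0.2, 0.5, 0.3], [0.1, 0.4, 0.5]))
```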

  17. Major Illumination Changes • Periodically generate test image distribution. • Compare with the learned distributions at illuminations that have been modeled. CS5331: Autonomous Mobile Robots

  18. Autonomous Color Learning – Approach • Prior: world map. • Plan a motion sequence and learn the color models. • Learn the illumination representation. • Iteratively (a sketch of this loop follows below): • Determine if there is a minor illumination change. If yes, update the color map selectively. • If a major change to a known illumination, transition to the corresponding color map and illumination model. • If a major change to a new illumination, re-learn the color map and illumination model autonomously. • If no change in illumination, continue as before. CS5331: Autonomous Mobile Robots
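A self-contained sketch of this loop, reusing the Jensen-Shannon measure from the earlier sketch. The thresholds, the placeholder IlluminationModel class, and the shortcut of treating re-learning as simply storing the new distribution are assumptions for illustration, not the author's implementation.

```python
import numpy as np

def jensen_shannon(p, q, eps=1e-9):
    """Same Jensen-Shannon measure as in the earlier sketch."""
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

MINOR, MAJOR = 0.01, 0.05            # hypothetical Jensen-Shannon thresholds

class IlluminationModel:
    """Placeholder bundle of a color map and its illumination distribution."""
    def __init__(self, name, illum_dist):
        self.name, self.illum_dist = name, np.asarray(illum_dist, float)
    def update_selectively(self, test_dist, alpha=0.1):
        # Stand-in for the selective color-map update on a minor change.
        self.illum_dist = (1 - alpha) * self.illum_dist + alpha * np.asarray(test_dist, float)

def adaptation_step(test_dist, current, known):
    """One iteration of the adaptation loop from the slide."""
    d = jensen_shannon(test_dist, current.illum_dist)
    if d < MINOR:
        return current                               # no change: continue as before
    if d < MAJOR:
        current.update_selectively(test_dist)        # minor change: selective update
        return current
    # Major change: switch to the closest known illumination if one matches ...
    best = min(known.values(), key=lambda m: jensen_shannon(test_dist, m.illum_dist))
    if jensen_shannon(test_dist, best.illum_dist) < MAJOR:
        return best
    # ... otherwise treat it as a new illumination and re-learn (stubbed here).
    new = IlluminationModel("new", test_dist)
    known[new.name] = new
    return new

bright = IlluminationModel("bright", [0.1, 0.3, 0.4, 0.2])
dark = IlluminationModel("dark", [0.4, 0.3, 0.2, 0.1])
known = {m.name: m for m in (bright, dark)}
print(adaptation_step([0.38, 0.30, 0.22, 0.10], bright, known).name)   # -> dark
```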

  19. Planned Color Learning – Questions? • Prior knowledge? • Learned models? • Shadows, highlights? • Why Jensen-Shannon measure? • Model parameters for planned learning? CS5331: Autonomous Mobile Robots

  20. That’s all folks  CS5331: Autonomous Mobile Robots
