
Particle Filter Localization Using Invariant Features for Visual Self-Localization

This paper proposes a particle-filter-based self-localization method that uses invariant features as visual information. SIFT is the main technique within the particle filter, applied to the topological localization of a mobile robot using only visual information. The method represents uncertainty about the robot's pose, handles changing illumination conditions, and detects additional rooms not present in the training sequences. The algorithm is efficient and incorporates additional image processing techniques for improved accuracy.


Presentation Transcript


  1. A particle-filter-based self-localization method using invariant features as visual information Jesús Martínez Gómez jesus_martinez@dsi.uclm.es Ismael García-Varea Alejandro Jiménez-Picazo University of Castilla-La Mancha Spain

  2. Introduction • The task • Topological localization of a mobile robot using only visual information. • Resources • Images acquired with a perspective camera mounted on a robot platform • The challenge • Provide information about the location of the robot for each test image (separately and sequentially) • Each image should be labelled with the room where the image was acquired

  3. Introduction • Our proposal • Use the principles of particle filters to develop a localization method capable of solving the proposed task • Select SIFT as the main technique for the visual step within the particle filter method • Main problems and drawbacks • Changing illumination conditions for the test and training sequences • Execution time necessary to perform SIFT techniques • How to detect additional rooms that were not imaged previously (not available in the training sequences)

  4. Introduction • The environment • A controlled scenario with 5 well defined rooms • All training frames are labelled with the complete pose <x,y,θ> and the room

  5. SIFT • Scale-Invariant Feature Transform • Computer vision algorithm, developed to extract key features in images • Different transformations are applied and the algorithm studies the points which are invariant under these transformations • The extracted features are invariant to image scale and rotation • By matching the extracted features, we can estimate the similarity between two images
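The matching step in the last bullet can be sketched without a full SIFT pipeline. The snippet below is a minimal illustration, not the authors' implementation: it assumes descriptors are already extracted (as plain lists of floats standing in for 128-dimensional SIFT descriptors) and counts matches with the ratio test commonly used for SIFT, taking similarity as the fraction of matched descriptors. The function names are our own.

```python
import math

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Count descriptor matches between two images using a ratio test.

    A descriptor in desc_a is matched when its nearest neighbour in desc_b
    is sufficiently closer than the second-nearest one.
    """
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

    matches = 0
    for d in desc_a:
        dists = sorted(dist(d, e) for e in desc_b)
        if len(dists) >= 2 and dists[0] < ratio * dists[1]:
            matches += 1
    return matches

def similarity(desc_a, desc_b):
    """Similarity as the fraction of descriptors in desc_a matched in desc_b."""
    if not desc_a:
        return 0.0
    return match_descriptors(desc_a, desc_b) / len(desc_a)
```

In practice the descriptors would come from a SIFT extractor (e.g. OpenCV's), and a k-d tree would replace the brute-force distance loop for speed.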

  6. Monte Carlo localization method • Several particles are spread over the environment • Each one of these particles represents a robot’s pose • Iterative process with two phases • Prediction • The movement model is applied to all the particles • Update • Each particle is weighted using the likelihood of obtaining the measurement from the pose it represents • Main advantages • The method allows us to represent uncertainty about the robot’s pose • The number of particles allows us to control the algorithm complexity
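The prediction/update cycle above can be condensed into one generic iteration. This is a textbook sketch of Monte Carlo localization under our own simplifying assumptions (a fixed motion increment, Gaussian noise magnitudes chosen arbitrarily, and multinomial resampling), not the SIMD team's code:

```python
import random

def mcl_step(particles, motion, weight_fn):
    """One prediction/update/resampling iteration of Monte Carlo localization.

    particles: list of (x, y, theta) poses.
    motion:    (dx, dy, dtheta) applied to every particle, plus white noise.
    weight_fn: likelihood of the current measurement given a pose.
    """
    # Prediction: apply the movement model with some added uncertainty.
    moved = [(x + motion[0] + random.gauss(0, 0.05),
              y + motion[1] + random.gauss(0, 0.05),
              th + motion[2] + random.gauss(0, 0.01))
             for x, y, th in particles]
    # Update: weight each particle by the measurement likelihood of its pose.
    weights = [weight_fn(p) for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resampling: draw the next generation proportionally to the weights.
    return random.choices(moved, weights=weights, k=len(moved))
```

Particles in low-likelihood regions die out across iterations, which is how the population concentrates around the true pose, and `k` directly controls the complexity mentioned in the slide.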

  7. SIMD Approach • Particle representation • Each particle represents a robot’s pose: 3 parameters < x , y , θ> • All these parameters have continuous values. The limits for the x and y values are the boundary of the environment • Training process • Training frames were processed offline to extract their SIFT points • We store the pose and the SIFT points extracted from all the available training images • These points will be matched with those extracted from the test frames

  8. SIMD Approach • Prediction Phase • Odometry information cannot be obtained • The expected robot movements can be extracted from the training frames • We assume that robot movement will be similar to that performed during training • The average linear and angular velocity is obtained • Extracted from the difference between the poses of the training sequence • Some uncertainty is added in this phase (white noise)
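Since the slides only describe this step, here is a minimal sketch of how the average velocities could be recovered from consecutive labelled training poses; the frame rate `dt` and the function name are our assumptions:

```python
import math

def average_motion(poses, dt=1.0):
    """Estimate average linear and angular velocity from consecutive
    training poses (x, y, theta), as a stand-in for missing odometry."""
    if len(poses) < 2:
        return 0.0, 0.0
    lin, ang = 0.0, 0.0
    for (x0, y0, t0), (x1, y1, t1) in zip(poses, poses[1:]):
        lin += math.hypot(x1 - x0, y1 - y0) / dt  # translation per frame
        ang += (t1 - t0) / dt                     # rotation per frame
    n = len(poses) - 1
    return lin / n, ang / n
```

The resulting averages drive the motion model of every particle, with white noise on top as the slide states.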

  9. SIMD Approach • Update Phase • We use the information obtained from the robot’s sensors • Each particle is evaluated using the SIFT points extracted from the frame captured at instant t, SP(ft) • To obtain a particle’s weight, we search for the training SIFT points representing the nearest pose to the particle’s pose, SP(pi) • The weight of the particle is established as the percentage of common points between both sets of points
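The weighting rule can be written out directly from the slide. This sketch assumes a training database of (pose, SIFT-points) pairs as described on slide 7, a Euclidean nearest-pose criterion (our assumption; the slide does not specify the distance), and a pluggable `match_fn` standing in for SIFT point matching:

```python
def particle_weight(particle_pose, frame_points, training_db, match_fn):
    """Weight a particle by SIFT agreement with its nearest training frame.

    training_db: list of (pose, sift_points) from the training sequence.
    match_fn:    returns the number of common points between two point sets.
    The weight is the fraction of the test frame's points, SP(ft), matched
    against the training frame whose pose is closest to the particle's pose.
    """
    def pose_dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    # Training SIFT points representing the nearest pose to the particle.
    _, nearest_points = min(training_db,
                            key=lambda e: pose_dist(e[0], particle_pose))
    if not frame_points:
        return 0.0
    return match_fn(frame_points, nearest_points) / len(frame_points)
```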

  10. SIMD Approach • Update Phase – Room classification • After weighting all the particles we have to label the current frame • We compute the sum of all the particles’ weights, separately for the different rooms • Each frame is classified as the room that obtained the highest sum of weights • We don’t classify frames when the uncertainty about the robot’s pose is high • The maximum sum of weights for a room must be higher than 60% of the sum of all weights

  11. SIMD Approach • Update Phase – Additional Image processing • SIFT was complemented with basic processing, based on line and square recognition • Hough transform to study the distribution of the lines and squares • Some natural landmarks were discovered from training frames • Can be easily detected with lines and squares recognition • Examples: the corridor ceiling or the external door • Very low execution time
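To make the line-recognition step concrete, here is a bare-bones Hough transform in the (θ, ρ) parameterization: each edge point votes for every line that could pass through it, and strong bins reveal dominant lines such as the corridor-ceiling edges mentioned above. This is a didactic sketch, not the paper's implementation (which would use an optimized library routine):

```python
import math

def hough_lines(points, width, height, theta_steps=180):
    """Minimal Hough transform: vote each edge point into (theta, rho)
    bins and return the dominant line's angle, offset, and vote count."""
    votes = {}
    for x, y in points:
        for i in range(theta_steps):
            theta = math.pi * i / theta_steps
            # Normal-form line equation: rho = x*cos(theta) + y*sin(theta)
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            votes[(i, rho)] = votes.get((i, rho), 0) + 1
    (i, rho), count = max(votes.items(), key=lambda kv: kv[1])
    return math.pi * i / theta_steps, rho, count
```

For a vertical edge at x = 5 the winning bin is θ ≈ 0, ρ = 5; thresholding the vote counts of a few expected bins is enough to detect a simple landmark cheaply, consistent with the "very low execution time" claim.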

  12. SIMD Approach • The first implementation presented several problems due to wrong convergences • Some test frames are not represented by any training frame • False positives (noise, luminance changes) • We added a population re-initialization • Only when the best particle’s weight is below a certain threshold • The system became unstable and had problems converging • To solve these problems, we defined a state for the algorithm based on its stability • The process will be stable if the variation obtained for the x and y components of the last n pose estimations is sufficiently small • This happens when most of the particles are close together

  13. SIMD Approach • Stability estimation • The stability of the process is estimated at the end of each iteration • The population initializations are modified and now we have three possibilities: • The process is stable • No population initializations are performed • The process has been stable for the last frames but suddenly it becomes unstable • New particles are spread over a restricted area, corresponding to the most reliable robot position (obtained from previous iterations) • The process is highly unstable • The particles are spread over the environment
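The three-way stability test can be sketched as a small classifier over the last n pose estimates. The window size `n`, the spread threshold `eps`, and the state labels are our own placeholders; the slides only say the x/y variation must be "sufficiently small":

```python
def stability_state(estimates, n=5, eps=0.2):
    """Classify the filter's stability from the last n pose estimates.

    'stable':           x and y of the last n estimates vary less than eps.
    'unstable':         the process was stable until the latest estimate,
                        which suddenly jumped.
    'highly_unstable':  anything else (including too few estimates).
    """
    def spread(vals):
        return max(vals) - min(vals)

    recent = estimates[-n:]
    if len(recent) < n:
        return "highly_unstable"
    xs = [e[0] for e in recent]
    ys = [e[1] for e in recent]
    if spread(xs) < eps and spread(ys) < eps:
        return "stable"
    # Stable up to the previous frame, then a sudden jump?
    if spread(xs[:-1]) < eps and spread(ys[:-1]) < eps:
        return "unstable"
    return "highly_unstable"
```

Each state then triggers the corresponding re-initialization from the slide: keep the population, respawn in a restricted area around the last reliable pose, or respawn over the whole environment.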

  14. Algorithm flow • Prediction Phase → Update Phase → Estimate Algorithm Stability • Unstable: initialize the particle population using the most reliable previous robot pose • Highly Unstable: initialize the particle population over the environment • Stable: keep the particle population • Finally, classify the picture with the most reliable room

  15. Particle initialization over a restricted area

  16. SIMD Approach • Unknown rooms detection • To detect this special situation, we study the particle distribution after the update phase • If most of these particles obtain a new position beyond the limits of the original scenario, the robot has entered an unknown room • We classify a picture as an unknown room if at least 25% of the particles are outside the well-known rooms
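The unknown-room criterion is a simple threshold on the particle distribution. A sketch, assuming the known environment is a rectangle given by its x/y limits (the slides do not specify how the boundary is represented):

```python
def is_unknown_room(particles, bounds, fraction=0.25):
    """Flag the frame as 'unknown room' when at least `fraction` of the
    particles lie outside the limits of the original scenario.

    particles: (x, y, theta) poses.
    bounds:    (x_min, x_max, y_min, y_max) of the known environment.
    """
    x_min, x_max, y_min, y_max = bounds
    outside = sum(1 for x, y, *_ in particles
                  if not (x_min <= x <= x_max and y_min <= y <= y_max))
    return outside >= fraction * len(particles)
```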

  17. Results – Obligatory Track • Preliminary experiments • We trained three systems using three different illumination conditions • Each one of these systems was tested using three frame sequences

  18. Results – Obligatory Track • Final experiment • We obtained the tenth position for all the submitted runs

  19. Results – Optional Track • Preliminary experiments • We trained three systems using three different illumination conditions • Each one of these systems was tested using three frame sequences

  20. Results – Optional Track • Final experiment • We obtained the highest score for all the submitted runs

  21. Conclusions and future work • Robust alternative to traditional localization methods for indoor environments • The short execution time allows the system to be used in real time • SIFT must be complemented with other techniques if lighting changes appear • For future work, we aim to develop a general localization system capable of being trained automatically using the robot and its vision system

  22. A particle-filter-based self-localization method using invariant features as visual information Jesús Martínez Gómez jesus_martinez@dsi.uclm.es Ismael García-Varea Alejandro Jiménez-Picazo University of Castilla-La Mancha Spain
