
Mobile Robot controlled by Kalman Filter


Presentation Transcript


  1. Mobile Robot controlled by Kalman Filter

  2. Overview • What could Kalman filters be used for? • What is a Kalman filter? • Conceptual overview • The theory of the Kalman filter (only the equations you need to use) • Simple examples

  3. Most Generally: What Is a Kalman Filter? • A technique that recursively estimates unobservable quantities called state variables, {xt}, from an observed time series {yt}. • What is it used for? • Tracking missiles • Extracting lip motion from video • Many computer vision applications • Economics • Navigation

  4. Example of estimation problem

  5. Example of Estimation Problem: Estimating the Location of a Ship • “Suppose that you are lost at sea during the night and have no idea at all of your location.” • The problem: inherent measuring-device inaccuracies. Your measurement has some uncertainty!

  6. How to model the uncertainty of a measurement? • Let us write the conditional probability density of the real position x given the measured value z1 • Assume a Gaussian distribution • z1 : measured position • x : real position • Q: What can be a measure of uncertainty? (The variance of the distribution.)
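As a concrete sketch of this measurement model, the snippet below evaluates the Gaussian density p(x | z1); the position and standard deviation used here are hypothetical, and the variance σ² plays the role of the uncertainty measure:

```python
import math

def gaussian_pdf(x, mean, var):
    """Density of a normal distribution N(mean, var) evaluated at x."""
    return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

# Hypothetical reading: measured ship position z1, with standard deviation
# sigma describing the measuring device's inaccuracy.
z1, sigma = 10.0, 2.0

# The conditional density p(x | z1) peaks at the measured value...
at_measurement = gaussian_pdf(z1, z1, sigma ** 2)
# ...and falls off for true positions farther from it.
farther_away = gaussian_pdf(z1 + 3.0, z1, sigma ** 2)
assert at_measurement > farther_away
```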

  7. Can we combine measurements? • You make a measurement • Your friend also makes a measurement • Question 1: Which one is better? • Question 2: What is the best way to combine these measurements?

  8. Example of how to combine two measurements • Uncertainty is decreased by combining the two pieces of information!
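The combination the slide refers to is inverse-variance weighting of two Gaussian measurements; a minimal sketch, with hypothetical numeric readings:

```python
def fuse(z1, var1, z2, var2):
    """Optimally combine two independent Gaussian measurements of the same
    quantity: the fused mean is an inverse-variance-weighted average, and
    the fused variance is smaller than either input variance."""
    fused_var = 1.0 / (1.0 / var1 + 1.0 / var2)
    fused_mean = fused_var * (z1 / var1 + z2 / var2)
    return fused_mean, fused_var

# Your measurement (variance 4.0) and your friend's sharper one (variance 1.0).
mean, var = fuse(10.0, 4.0, 12.0, 1.0)
# mean ≈ 11.6 (pulled toward the more certain measurement), var ≈ 0.8,
# smaller than either 4.0 or 1.0: uncertainty decreased.
```

The more certain measurement dominates the weighted average, which answers "which one is better" without discarding the other.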

  9. What does it mean for a robot? (We will use this in the next slide.) • The optimal estimate at t2, x̂(t2), is equal to the best prediction of its value before z2 is taken, x̂⁻(t2), • plus a correction term: an optimal weighting value K(t2) • times the difference between z2 and the best prediction of its value before it is actually taken, x̂⁻(t2).

  10. Derivation of the product of two PDFs from the last slide • Given are two Gaussian PDFs
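The derivation itself was an equation image that did not survive the transcript; the standard result it establishes, consistent with the combination on the previous slides, is that the product of two Gaussian densities is (up to normalization) another Gaussian:

```latex
N(x;\mu_1,\sigma_1^2)\cdot N(x;\mu_2,\sigma_2^2) \;\propto\; N(x;\mu,\sigma^2),
\qquad
\frac{1}{\sigma^2}=\frac{1}{\sigma_1^2}+\frac{1}{\sigma_2^2},
\qquad
\mu=\sigma^2\left(\frac{\mu_1}{\sigma_1^2}+\frac{\mu_2}{\sigma_2^2}\right).
```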

  11. What we have discussed • Two lectures ago we discussed product of probabilities on discrete examples • Last lecture we discussed product of PDFs - Gaussians

  12. How to calculate the best estimate when you are moving? • Suppose you are moving: u is a nominal velocity and w is a noise term. • The noise w is modeled as white Gaussian noise with zero mean and variance σw². • The best prediction of the new position uses the nominal velocity; the variance grows by the accumulated motion noise. • The best estimate of the new position then takes the new measurement into account.
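The prediction step described above can be sketched as follows; the variable names and the numbers in the example are illustrative, not from the slides:

```python
def predict(x_est, var_est, u, dt, w_var):
    """Prediction step for the motion model dx/dt = u + w, where w is
    white Gaussian noise with zero mean and variance w_var per unit time.

    The mean advances with the nominal velocity u, and the variance
    grows: moving without a new measurement makes us less certain."""
    x_pred = x_est + u * dt
    var_pred = var_est + w_var * dt
    return x_pred, var_pred

# Dead-reckon for 2 seconds from an estimate x = 5.0 with variance 0.5.
x_pred, var_pred = predict(5.0, 0.5, u=1.0, dt=2.0, w_var=0.1)
# x_pred = 7.0; var_pred ≈ 0.7 — the position advanced, the uncertainty grew.
```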

  13. Summary on models, prediction and correction • Process model: describes how the state changes over time • Measurement model: where you are, from what you see! • Predictor-corrector: predicting the new state and its uncertainty, then correcting with the new measurement
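Putting the process model and the measurement model together gives the predictor-corrector loop; a minimal one-dimensional sketch (all parameter values hypothetical) might look like:

```python
def kalman_1d(measurements, u, dt, q, r, x0, p0):
    """Minimal 1-D Kalman filter: alternate the process-model prediction
    with the measurement correction (predictor-corrector)."""
    x, p = x0, p0
    history = []
    for z in measurements:
        # Predict: the process model moves the state and grows the variance.
        x = x + u * dt
        p = p + q * dt
        # Correct: the Kalman gain k weights the measurement residual.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        history.append((x, p))
    return history

# Hypothetical run: a stationary robot (u = 0) measuring its position near
# 1.0 with measurement variance r = 1.0 and a vague prior (p0 = 100).
estimates = kalman_1d([1.02, 0.97, 1.01, 1.00], u=0.0, dt=1.0,
                      q=0.01, r=1.0, x0=0.0, p0=100.0)
x_final, p_final = estimates[-1]
# x_final settles near 1.0 and p_final drops well below the prior variance.
```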

  14. What is a Filter by the way?

  15. What is a Filter by the way? • Define mathematically what a filter is (make an analogy to a real filter) • Other applications of Kalman filtering (or filtering in general): • Your car’s GPS (predict and update location) • Surface-to-air missiles (hitting the target) • Ship or rocket navigation (Apollo 11 used some sort of filtering to make sure it didn’t miss the Moon!)

  16. The “filtering problem” in general (let’s get a little more technical) • The system is a black box; sometimes the system state and the measurement may be two different things. • The system state cannot be measured directly; it must be estimated “optimally” from the measurements. • Block diagram: external controls → system (subject to system error sources) → system state (desired but not known); measuring devices (subject to measurement error sources) → observed measurements → estimator → optimal estimate of the system state.

  17. FILTERS IN MOBILE ROBOTS

  18. Problem Statement: Mobile Robot Control • Examples of systems in need of state prediction: • A mobile robot moving within its environment • A vision-based system tracking cars on a highway • Common characteristics of these systems: • A state that changes dynamically • A state that cannot be observed directly • Uncertainty due to noise in: the state, the way the state changes, and the observations

  19. Mobile Robot Localization uses landmarks. • REMINDER: Where am I? • Given a map, determine the robot’s location • Landmark locations are known, • but the robot’s position is not known • From sensor readings, the robot must be able to infer its most likely position on the field • Example : where are the AIBOs on the soccer field?

  20. Mobile Robot Mapping uses Landmarks • What does the world look like? • Robot is unaware of its environment • The robot must explore the world and determine its structure • Most often, this is combined with localization • Robot must update its location with respect to the landmarks • Known in the literature as Simultaneous Localization and Mapping, or Concurrent Localization and Mapping : SLAM (CLM) • Example : AIBOs are placed in an unknown environment and must learn the locations of the landmarks • (An interesting project idea?)

  21. A Dynamic System • Notation: x = state, p = probability, y = observation, h = measurement • Most commonly available: • Initial state • Observations • System (motion) model • Measurement (observation) model

  22. Filters must be optimal • Filters compute the hidden state from observations. • “Filter” is terminology from signal processing; a filter can be considered a data-processing algorithm. • Filters are computer algorithms or hardware devices (e.g., FPGAs). • Classification: discrete-time versus continuous-time filters. • Issues: sensor fusion, robustness to noise. • Wanted: each filter to be optimal in some sense.

  23. Example: Navigating robot with odometry input • The motion model is based on odometry. • The observation model is based on sensor measurements. • Localization → an inference task • Mapping → a learning task • Remember the concepts of inference and learning; the inference can be Bayesian.

  24. Bayesian estimation is based on the Markov assumption • Bayesian estimation: attempt to construct the posterior distribution of the state given all measurements. • Inference task (localization): compute the probability that the system is at state x at time t, given all observations up to time t. • Note: the state depends only on the previous state (first-order Markov assumption).
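In symbols, using the notation the later slides adopt (observations o, actions a), the first-order Markov assumption states that the current state depends only on the previous state and the last action:

```latex
p(x_t \mid x_{0:t-1},\, o_{1:t-1},\, a_{0:t-1}) \;=\; p(x_t \mid x_{t-1},\, a_{t-1})
```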

  25. Recursive Bayes Filter • Two steps: a prediction step and an update step. • Advantages over batch processing: online computation, faster, less memory, easy adaptation. • Example of a simple recursive Bayes filter with two states A and B: it is like a generalized flip-flop (door open / door closed) from the past lecture. • (Notation: z = data.)

  26. Recursive Bayes Filter Implementations • Assuming the Bayes filter from the last slide: How is the prior distribution represented? How is the posterior distribution calculated? • Continuous representation: Gaussian distributions → Kalman filters (Kalman, 1960) • Discrete representation: HMMs, solved numerically • Grid: (dynamic) grid-based approaches (e.g., Markov localization, Burgard 1998) • Samples: particle filters (e.g., Monte Carlo localization, Fox 1999)

  27. Example: State Representations for Robot Localization • Grid-based approaches (Markov localization) • Particle filters (Monte Carlo localization) • Kalman tracking • These three are most often used; there are many others.

  28. Example: Grid-Based Localization • Initialize the grid (uniformly or according to prior knowledge) • At each time step, for each grid cell: • Use the observation model to compute the measurement likelihood • Use the motion model and the previous probabilities to compute the predicted probability • Normalize
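The loop above, specialized to a one-dimensional cyclic corridor, can be sketched as a histogram filter; the world layout and sensor probabilities below are invented for illustration:

```python
def grid_localize_step(belief, world, z, move, p_hit=0.6, p_miss=0.2):
    """One predict-correct cycle of a 1-D grid (histogram) localizer.

    belief: probability per cell
    world:  landmark label per cell; z: observed label
    move:   cells moved since the last step (noise-free motion, cyclic)"""
    n = len(belief)
    # Motion update: shift the belief according to the motion model.
    predicted = [belief[(i - move) % n] for i in range(n)]
    # Measurement update: apply the observation model, then normalize.
    weighted = [p * (p_hit if world[i] == z else p_miss)
                for i, p in enumerate(predicted)]
    total = sum(weighted)
    return [w / total for w in weighted]

world = ["G", "R", "G", "G", "G"]   # one red landmark at cell 1
belief = [0.2] * 5                  # uniform initial belief
belief = grid_localize_step(belief, world, z="R", move=0)
# The belief now peaks at cell 1, where the red landmark is.
```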

  29. Bayesian Filter

  30. Why are Bayesian Filters so important? • Robot and environmental state estimation is a fundamental problem in mobile robotics, and in our GuideBot projects! • Nearly all algorithms that exist for spatial reasoning make use of this approach; if you’re working in mobile robotics, you’ll see it over and over! • Very important to understand and appreciate • Bayesian filters are efficient state estimators: they recursively compute the robot’s current state based on its previous state • What is the robot’s state?

  31. Bayesian Filter: link to known concepts • Estimate the state x from the data d • What is the probability of the robot being at x? • x could be the robot location, map information, locations of targets, etc. • d could be sensor readings such as range, actions, odometry from encoders, etc. • This is a general formalism that does not depend on the particular probability representation • The Bayes filter recursively computes the posterior distribution

  32. What is a Posterior Distribution?

  33. Derivation of the Bayesian Filter

  34. Derivation of the Bayesian Filter (slightly different notation from before) • Estimation of the robot’s state given the data: the robot’s data, Z, is expanded into two types, observations oi and actions ai • Invoke Bayes’ theorem
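The slide’s equation was an image that is missing from the transcript; the standard form of this step, in the notation above (observations o, actions a), applies Bayes’ theorem to the latest observation:

```latex
p(x_t \mid o_t, a_{t-1}, o_{t-1}, \dots, o_0)
  \;=\; \frac{p(o_t \mid x_t, a_{t-1}, \dots, o_0)\;
              p(x_t \mid a_{t-1}, \dots, o_0)}
             {p(o_t \mid a_{t-1}, \dots, o_0)}
```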

  35. Derivation of the Bayesian Filter (review) • The denominator is constant relative to xt • The first-order Markov assumption shortens the first term • Expand the last term using the theorem of total probability

  36. Derivation of the Bayesian Filter (review) • The first-order Markov assumption shortens the middle term • Finally, substitute the definition of Bel(xt−1) • The result is the probability distribution that must be estimated from the robot’s data
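The equations themselves were images in the original slides; the standard recursion the derivation arrives at, with η absorbing the constant denominator, is:

```latex
\mathrm{Bel}(x_t) \;=\; \eta\; p(o_t \mid x_t)
  \int p(x_t \mid x_{t-1}, a_{t-1})\, \mathrm{Bel}(x_{t-1})\, dx_{t-1}
```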

  37. Iterating the Bayesian Filter (review) • Propagate the motion model: compute the current state estimate before taking a sensor reading by integrating over all possible previous state estimates and applying the motion model • Update the sensor model: compute the current state estimate by taking a sensor reading and multiplying by the current estimate based on the most recent motion history

  38. Localization Reminder • Initial state: detects nothing • Moves and detects landmark • Moves and detects nothing • Moves and detects landmark

  39. Bayesian Filter: Requirements for Implementation • Representation for the belief function • Update equations • Motion model • Sensor model • Initial belief state • This applies to any Bayes filter; we have discussed all these components already.

  40. Representation of the Belief Function • Sample-based representations, e.g., particle filters (there can be many sample-based representations) • Parametric representations (there can be many parametric representations)

  41. References • Useful materials about HMMs: • CS570 AI Lecture Notes (2003) • http://www.idiap.ch/~bengio/ • http://speech.chungbuk.ac.kr/~owkwon/ • Useful materials about the Kalman filter: • http://www.cs.unc.edu/~welch/kalman • Maybeck, 1979, “Stochastic Models, Estimation, and Control” • Greg Welch and Gary Bishop, 2001, “An Introduction to the Kalman Filter”

  42. Sources • Paul E. Rybski • Haris Baltzakis
