Presentation Transcript


  1. SLAM Summer School 2004 – An Introduction to SLAM Using an EKF. Paul Newman, Oxford University Robotics Research Group

  2. A note to students. The lecture I give will not include all these slides. Some of them, and some of the notes I have supplied, are more detailed than required and would take too long to deliver. I have included them for completeness and background – for example, the derivation of the Kalman filter from Bayes rule. I have included in the package a working MATLAB implementation of EKF-based SLAM. You should be able to see all the properties of SLAM at work and be able to modify it at your leisure (without having to worry about the awkwardness of a real system to start with). I cannot cover all I would like to in the time available – where applicable, to fill gaps, I forward-reference other talks that will be given during the week. I hope the talk, the slides and the notes will whet your appetite regarding what I reckon is a great area of research. Above all, please, please ask me to explain stuff that is unclear – this school is about you learning, not us lecturing. Regards, Paul Newman

  3. Overview
  • The Kalman Filter was the first tool employed in SLAM – Smith, Self and Cheeseman.
  • Linear KFs implement Bayes rule exactly – no hokey-ness.
  • We can analyse KF properties easily and learn interesting things about Bayesian SLAM.
  • The vanilla, monolithic KF-SLAM formulation is a fine tool for small local areas.
  • But we can do better for large areas – as other speakers will mention.

  4. 5 Minutes on Estimation

  5. Estimation is… [Figure: data and prior beliefs feed an estimation engine, which outputs an estimate.]

  6. Minimum Mean Squared Error Estimation. Choose $\hat{x}$ so that the argument of the expectation operator $E\{\cdot\}$ ("average") is minimised:
  $\hat{x}_{\mathrm{mmse}} = \arg\min_{\hat{x}} \; E\{ (x - \hat{x})^T (x - \hat{x}) \mid \mathbf{Z} \}$

  7. Evaluating… From probability theory, expanding the expectation and minimising over $\hat{x}$ gives the Very Important Thing:
  $\hat{x}_{\mathrm{mmse}} = E\{ x \mid \mathbf{Z} \}$
  The MMSE estimate is the conditional mean.

  8. Recursive Bayesian Estimation. Key idea: "one man's posterior is another's prior" ;-) Given a sequence of data (measurements) $Z^k = \{z_1, \ldots, z_k\}$, we want the conditional mean (MMSE estimate) of $x$ given $Z^k$. Can we iteratively calculate this – i.e. every time a new measurement comes in, update our estimate?

  9. Yes… At time $k$:
  $p(x \mid Z^k) \propto p(z_k \mid x) \, p(x \mid Z^{k-1})$
  The first factor explains the data at time $k$ as a function of $x$ at time $k$; the second is the posterior from time $k-1$, acting as the prior.

  10. And if these distributions are Gaussian, turning the handle (see supporting material) leads to the Kalman filter…

  11. Kalman Filtering
  • Ubiquitous estimation tool
  • Simple to implement
  • Closely related to Bayes estimation and MMSE
  • Immensely popular in robotics
  • Real time
  • Recursive (can add data sequentially)
  • Maintains the sufficient statistics of a multidimensional Gaussian PDF
  It is not that complicated! (trust me)

  12. Overall Goal. To come up with a recursive algorithm that produces an estimate of state by processing data from a set of explainable measurements and incorporating some kind of plant model. [Figure: sensors $H_1, H_2, \ldots, H_n$ observing the true underlying state $x$ feed measurement models into the KF, which combines them with a prediction from the plant model to produce the estimate.]

  13. Covariance is… the multi-dimensional analogue of variance about the mean. $P$ is a symmetric matrix that describes a 1-standard-deviation contour (an ellipsoid in 3D and above) of the PDF.
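  The standard definition, for a random vector $x$ with mean $\bar{x}$ (implied but not written out on the slide):

  $P = E\{ (x - \bar{x})(x - \bar{x})^T \}$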

  14. The $i|j$ notation. $\hat{x}_{i|j}$ is the estimate of the true state $x$ at time $i$ given data up to $t = j$; the error $\tilde{x}_{i|j} = x_i - \hat{x}_{i|j}$ is useful for derivations, but we can never use it in a calculation as $x$ is the unknown truth!

  15. The Basics. We'll use these equations as a starting point – I have supplied a full derivation in the support presentation and notes. Think of a KF as an off-the-shelf estimation tool.
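  The equations themselves appeared on the slide as an image; a standard statement of the discrete linear KF in the $i|j$ notation above (my reconstruction, consistent with the derivation in the notes) is:

  Prediction, with plant model $x_{k} = F x_{k-1} + B u_k + v_k$, $v_k \sim N(0, Q)$:
  $\hat{x}_{k|k-1} = F \hat{x}_{k-1|k-1} + B u_k$
  $P_{k|k-1} = F P_{k-1|k-1} F^T + Q$

  Update, with measurement $z_k = H x_k + w_k$, $w_k \sim N(0, R)$:
  $\nu_k = z_k - H \hat{x}_{k|k-1}$ (innovation)
  $S_k = H P_{k|k-1} H^T + R$ (innovation covariance)
  $W_k = P_{k|k-1} H^T S_k^{-1}$ (gain)
  $\hat{x}_{k|k} = \hat{x}_{k|k-1} + W_k \nu_k$
  $P_{k|k} = P_{k|k-1} - W_k S_k W_k^T$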

  16. Crucial Characteristics
  • Asynchronicity
  • Prediction: covariance inflation
  • Update: covariance deflation
  • Observability
  • Correlations

  17. Nonlinear Kalman Filtering
  • Same trick as in non-linear least squares: linearise around the current estimate using Jacobians.
  • The problem becomes linear again.
  The complete derivation is in the notes, but…

  18. Recalculate the Jacobians at each iteration.
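  As an illustration, here is a minimal MATLAB sketch of one EKF predict/update cycle – a schematic of mine, not the supplied implementation. The helpers f, h, jac_fx and jac_hx are hypothetical placeholders for your problem's models and Jacobians; Q and R are the process and measurement noise strengths.

  % One EKF cycle: nonlinear models f and h, linearised via Jacobians
  % that are recalculated at the current estimate on every iteration.
  % f, h, jac_fx, jac_hx are placeholder names for problem-specific code.
  function [x, P] = ekf_step(x, P, u, z, Q, R)
      % ---- Prediction ----
      Fx = jac_fx(x, u);          % Jacobian of f w.r.t. state, at current estimate
      x  = f(x, u);               % predicted state
      P  = Fx * P * Fx' + Q;      % covariance inflates

      % ---- Update ----
      Hx = jac_hx(x);             % Jacobian of h, at the *predicted* estimate
      v  = z - h(x);              % innovation
      S  = Hx * P * Hx' + R;      % innovation covariance
      W  = (P * Hx') / S;         % Kalman gain, i.e. P*Hx'*inv(S)
      x  = x + W * v;             % updated state
      P  = P - W * S * W';        % covariance deflates
  end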

  19. Using The EKF in Navigation

  20. Vehicle Models – Prediction. The truth model is driven by a control input:
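  The model equations were an image on the original slide; a typical planar truth model of this kind (an illustrative assumption, not necessarily the exact model shown) is $x_{k+1} = f(x_k, u_k)$, e.g.

  $\begin{bmatrix} x_{k+1} \\ y_{k+1} \\ \theta_{k+1} \end{bmatrix} = \begin{bmatrix} x_k + \Delta T \, V_k \cos\theta_k \\ y_k + \Delta T \, V_k \sin\theta_k \\ \theta_k + \Delta T \, \omega_k \end{bmatrix}$

  with control $u_k = [V_k \; \omega_k]^T$ (speed and turn rate).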

  21. Noise is in the control…

  22. Effect of control noise on uncertainty:

  23. Using Dead-Reckoned Data

  24. Navigation Architecture

  25. Background: Transformation Composition – compounding transformations.

  26. Just functions!
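  For concreteness, a minimal MATLAB sketch of the compounding operation $x_1 \oplus x_2$ on planar poses $[x \; y \; \theta]^T$ – the function name and details are my own; the supplied code may differ:

  % Compound transformation t1 (+) t2: express pose t2, given in the
  % frame of t1, in the global frame. Poses are [x; y; theta].
  function t3 = tcomp(t1, t2)
      c = cos(t1(3)); s = sin(t1(3));
      t3 = [t1(1) + c*t2(1) - s*t2(2);
            t1(2) + s*t2(1) + c*t2(2);
            t1(3) + t2(3)];
  end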

  27. Deduce an Incremental Move. The raw dead-reckoned poses can be in massive error, but the common error is subtracted out here.
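  In composition notation, the incremental move between two successive dead-reckoned poses (my reconstruction of the slide's equation, with $\ominus$ the inverse-composition operator) is:

  $u_k = \ominus x^{dr}_{k-1} \oplus x^{dr}_{k}$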

  28. Use this "move" as a control. Substitute it into the prediction equation, using $J_1$ and $J_2$ as the Jacobians of the composition, with a diagonal 3×3 covariance matrix describing the error in $u_o$.
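  Written out in a standard form consistent with the slide's description (with $U$ the diagonal 3×3 covariance of the error in $u_o$):

  $\hat{x}_{k+1|k} = \hat{x}_{k|k} \oplus u_o$
  $P_{k+1|k} = J_1 P_{k|k} J_1^T + J_2 U J_2^T$

  where $J_1 = \partial(x_1 \oplus x_2)/\partial x_1$ and $J_2 = \partial(x_1 \oplus x_2)/\partial x_2$.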

  29. Feature-Based Mapping and Navigation. Look at the code!!
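  For orientation before reading the supplied MATLAB, here is a schematic of the overall loop. Every helper name below (get_odometry, ekf_predict, get_observations, associate, augment_state, ekf_update, nSteps) is a placeholder of mine, not an identifier from the package:

  % Schematic EKF-SLAM loop: state x = [vehicle pose; map features],
  % single covariance P over everything. All helpers are placeholders.
  for k = 1:nSteps
      u = get_odometry(k);                        % incremental move (control)
      [x, P] = ekf_predict(x, P, u, Q);           % vehicle part of x evolves; map is static
      z = get_observations(k);                    % range/bearing measurements
      for i = 1:numel(z)
          j = associate(x, P, z(i));              % which feature is this? (the "oracle")
          if isempty(j)
              [x, P] = augment_state(x, P, z(i), R);  % new feature: state grows
          else
              [x, P] = ekf_update(x, P, z(i), j, R);  % known feature: update
          end
      end
  end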

  30. Mapping vs Localisation

  31. Problem Space

  32. Problem Geometry

  33. Landmarks / Features. Things that stand out to a sensor: corners, windows, walls, bright patches, texture… A map point feature is labelled "i".

  34. Observations / Measurements
  • Relative – on-vehicle sensing of the environment: radar, cameras, odometry (really), sonar, laser.
  • Absolute – relies on infrastructure: GPS, compass.
  How smart can we be with relative-only measurements?

  35. And once again… It is all about probability

  36. From Bayes Rule… The input is measurements conditioned on the map and the vehicle. We want to use Bayes rule to "invert" this and get the map and vehicle given the measurements.
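  Symbolically (my reconstruction, with $x_v$ the vehicle, $M$ the map and $Z^k$ the data):

  $p(x_v, M \mid Z^k) \propto p(z_k \mid x_v, M) \, p(x_v, M \mid Z^{k-1})$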

  37. Problem 1 – Localisation. (Erratum: remove the line p(·) = 1 from the notes – it is a mistake.)

  38. We can use a KF for this! Plant model: remember, $u$ is the control, the $J$'s are a fancy way of writing the Jacobians of the composition operator, and $Q$ is the strength of the noise in the plant model.

  39. Processing Data

  40. Implementation. (No features seen here.)

  41. Location Covariance

  42. Location Innovation

  43. Problem II – Mapping. With a known vehicle location, the state vector is the map.

  44. But how is the map built? Key point: the state vector GROWS! An observation of a new feature turns the old map into a new, bigger map – "state augmentation".

  45. How is P augmented? Simple! Use the transformation-of-covariance rule. G is the feature initialisation function.
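  In the standard form (my reconstruction from the slide's description, with $g$ the feature initialisation function, $G_x$ and $G_z$ its Jacobians with respect to state and measurement, and $R$ the measurement noise):

  $x \leftarrow \begin{bmatrix} x \\ g(x_v, z) \end{bmatrix}, \qquad P \leftarrow \begin{bmatrix} P & P G_x^T \\ G_x P & G_x P G_x^T + G_z R G_z^T \end{bmatrix}$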

  46. Leading to an initialised feature location expressed in terms of the measured range, the angle from the vehicle to the feature, and the vehicle orientation:
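  A standard range-bearing initialisation of this form (reconstructed; the slide showed it as an image) is, for vehicle pose $(x_v, y_v, \theta_v)$ and a measurement of range $r$ and bearing $\phi$:

  $g(x_v, z) = \begin{bmatrix} x_v + r \cos(\theta_v + \phi) \\ y_v + r \sin(\theta_v + \phi) \end{bmatrix}$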

  47. So what are the models h and f? h is a function of the feature being observed; f is simply the identity transformation, since the map is stationary.
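  For a range-bearing sensor, the usual form of h for feature $i$ at $(x_i, y_i)$ – again my reconstruction, matching the initialisation above – is:

  $h_i(x) = \begin{bmatrix} \sqrt{(x_i - x_v)^2 + (y_i - y_v)^2} \\ \operatorname{atan2}(y_i - y_v, \, x_i - x_v) - \theta_v \end{bmatrix}$

  and $f(M) = M$ for the mapping problem.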

  48. Turn the handle on the EKF. All hail the Oracle! How do we know what feature we are observing?
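  The slide leaves data association to an "oracle"; a common practical stand-in (my note, not from the slide) is a chi-squared gate on the normalised innovation,

  $\nu^T S^{-1} \nu \le \gamma$

  accepting a candidate feature only if its innovation $\nu$ is small relative to the innovation covariance $S$.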

  49. Problem III – SLAM. "Build a map and use it at the same time." "This is a cornerstone of autonomy."
