
Presentation Transcript


  1. ASEN 5070 Statistical Orbit Determination I Fall 2012 Professor George H. Born Professor Jeffrey S. Parker Lecture 9: Least Squares Estimate

  2. Announcements • Homework 2 Graded • Comments included on D2L • Any questions, talk with us this week • Homework 3 Graded by Thursday EOD • Homework 4 due Thursday • Solutions online now. • The solutions show a lot of information – be sure to demonstrate that you know the answers and are not just copying the solutions! • Homework 5 out today

  3. Homework 5 • Problem 1: • Simulate the Stat OD process on a 2-dimensional, point-mass orbit. • Problem 2: • Simple Batch Process exercise. • Problem 3: • Process noisy observations of a spring-mass system.

  4. Homework 5

  5. Topics Today • Review of Stat OD Concepts • Least Squares Estimator • Quiz: Observability • ~10% of the class got 4/4 • Majority of the class got 2/4 • We’ll go over the topic more closely toward the end of the lecture!

  6. Review of Stat OD Concepts

  7. Observation Residuals • We have noisy observations of certain aspects of the system. • We need some way to relate each observation to the trajectory that we’re estimating. • [Figure: reference trajectory X* with the observed range and the computed range; the true range is unknown.] • ε = O − C = “Residual”

  8. Fitting the Data • How do we best fit the data? Residuals = ε = O − C. • [Figure: four candidate fits to the data, judged “No”, “No”, “Not bad”, and “Good”.]

  9. Fitting the Data • Lp norm estimators: • p = 1: even weights, but discontinuous • p = 2: smooth, but gives outliers extra weight • Interior options (1 < p < 2): optimal for certain scenarios that involve outliers.
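
For reference, the Lp cost function these options refer to (standard definition, not transcribed from the slide), applied to the residuals ε_i:

      J_p = \sum_{i=1}^{l} |\epsilon_i|^p

With p = 2 this becomes the familiar least-squares cost J = \sum_i \epsilon_i^2; its smoothness is what makes it easy to minimize, but squaring is also what gives outliers their extra influence.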

  10. Fitting the data • A good solution, and one easy to code up, is the least-squares solution

  11. State Deviation and Linearization • Linearization • Introduce the state deviation vector • If the reference/nominal trajectory is close to the truth trajectory, then a linear approximation is reasonable.
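
The equations this slide refers to, reconstructed in standard form (the slide’s own equations were images): the state deviation is the difference between the true and reference states, and to first order it evolves under the linearized dynamics:

      x(t) = X(t) - X^*(t), \qquad \dot{x}(t) \approx A(t)\, x(t)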

  12. State Deviation and Linearization • Goal of the Stat OD process: • The best fit trajectory is represented by \hat{X}(t) = X^*(t) + \hat{x}(t) • This is what we want

  13. State Deviation Mapping • How do we map the state deviation vector from one time to another? • The state transition matrix. • It tracks how a perturbation grows/shrinks/changes over time along the trajectory.

  14. State Transition Matrix • The state transition matrix maps a deviation in the state from one epoch to another. • It is constructed via numerical integration, in parallel with the trajectory itself.
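
In standard notation (reconstructed; the slide’s equations were images):

      x(t_i) = \Phi(t_i, t_0)\, x(t_0), \qquad \dot{\Phi}(t, t_0) = A(t)\, \Phi(t, t_0), \qquad \Phi(t_0, t_0) = I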

  15. The A Matrix • The “A” Matrix:
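
The definition behind this slide (standard form; the matrix itself was an image): the A matrix is the Jacobian of the dynamics F, evaluated along the reference trajectory, so that the linearized dynamics are \dot{x}(t) = A(t)\, x(t):

      A(t) = \left[ \frac{\partial F(X, t)}{\partial X} \right]^*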

  16.–18. The Typical Stat OD A Matrix • [Slides 16–18 show this matrix as images; only the title text was transcribed.]

  19.–20. Process • Step 1: Define your state and your dynamics. [Slides 19–20 show the state vector and equations of motion as images.]

  21. Process • Step 2: Integrate your state and STM. Use the “reshape” command to stack the STM into the integration vector: X0 = [ X; reshape(Phi, 8*8, 1) ]   % X = initial state; Phi = eye(8), the initial 8×8 STM

  22. Process • Step 2: Integrate your state and STM.

  23. Process • Step 2: Integrate your state and STM. • Did we do too much work? Yes, yes, we did.

  24. Process • Step 2: Integrate your state and STM. • Did we do too much work? Yes, yes, we did. Don’t have to integrate these parameters!

  25. Process • Step 2: Integrate your state and STM. • Did we do too much work? Yes, yes, we did. Don’t have to integrate these parameters! Might not have to integrate these either!

  26. Process • Step 2: Integrate your state and STM. Phi0 = eye(6,8); X0 = [ X; reshape(Phi0, 6*8, 1) ]   % integrate only the 6×8 block of the STM that changes

  27. Process • Step 2: Integrate your state and STM. • This is important for your final project. • Rather than integrating 19×18 = 342 equations, integrate only the 10×6 = 60 equations that change over time. A sketch of the combined state/STM integration follows.
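
As a concrete illustration of Step 2, here is a minimal MATLAB sketch for the 2-dimensional point-mass orbit of Homework 5, integrating the 4-element state together with its full 4×4 STM (the function name state_and_stm, the nondimensional mu = 1, and the circular-orbit initial conditions are assumptions for this example, not values from the slides):

      function Zdot = state_and_stm(t, Z, mu)
          % Unpack position, velocity, and the STM from the stacked vector
          r    = Z(1:2);
          v    = Z(3:4);
          Phi  = reshape(Z(5:20), 4, 4);
          rmag = norm(r);
          % Two-body acceleration and its Jacobian A = dF/dX on the reference
          a = -mu * r / rmag^3;
          G = mu * (3*(r*r')/rmag^5 - eye(2)/rmag^3);   % d(accel)/d(position)
          A = [zeros(2), eye(2); G, zeros(2)];
          % Stack the state rates with Phi_dot = A*Phi
          Zdot = [v; a; reshape(A*Phi, 16, 1)];
      end

Integrated, for example, with:

      Z0 = [1; 0; 0; 1; reshape(eye(4), 16, 1)];           % circular orbit; Phi(t0,t0) = I
      [t, Z] = ode45(@(t,Z) state_and_stm(t, Z, 1), [0 2*pi], Z0);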

  28. Process • Step 3: Process observations. • Step 4: Determine a new best estimate of the state. • Step 5: Iterate/repeat as needed.

  29. Mapping an observation • How do we map an observation to the trajectory?

  30. Measurement Mapping Matrix • The Mapping Matrix
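
The definitions behind this slide (standard form; the matrix was an image): with observations modeled as Y = G(X, t) + ε, the measurement mapping matrix is the Jacobian of G evaluated on the reference trajectory, and it relates observation deviations to state deviations:

      \tilde{H} = \left[ \frac{\partial G(X, t)}{\partial X} \right]^*, \qquad y = O - C = \tilde{H}\, x + \epsilon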

  31. Measuring the residuals • Our state and measurement mappings. • [Figure: observed minus computed measurements along the reference trajectory X*.] • ε = O − C = “Residual”

  32. Relating the observations to an epoch • If the state has n unknown parameters and there are l observations at different times • Then there are a total of n×l unknowns. • If we can instead relate each state to an epoch, then we can reduce it to only n unknown state parameters. • If the errors are equal to zero, then we can deterministically solve this system using n linearly independent observations.

  33. Relating the observations to an epoch
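
The key relation (standard form, reconstructed since the slide’s equations were images): the state transition matrix carries each observation back to the epoch state deviation x_0, so every measurement constrains the same n unknowns:

      y_i = \tilde{H}_i\, \Phi(t_i, t_0)\, x_0 + \epsilon_i = H_i\, x_0 + \epsilon_i, \qquad H_i \equiv \tilde{H}_i\, \Phi(t_i, t_0)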

  34. Relating the observations to an epoch • If the errors are not zero, then we now have n unknown state parameters and l unknown errors. • If we have multiple observation types (say p types), then we have m = p×l observations and a total of m unknown observation errors. • Total unknowns: m+n. Total equations: m. • The least squares criterion provides a best estimate solution in this scenario.

  35.–36. How do we solve the problem? • Least squares estimate. • [The derivation on slides 35–36 was shown as images; the result is reconstructed below.]
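
The result these slides derive (standard batch least-squares form, consistent with the normal-matrix discussion on slide 38): minimize the sum of squared residuals; setting the gradient to zero gives the normal equations and their solution:

      J(\hat{x}_0) = \frac{1}{2}\, \epsilon^T \epsilon = \frac{1}{2}\,(y - H\hat{x}_0)^T (y - H\hat{x}_0), \qquad (H^T H)\, \hat{x}_0 = H^T y \quad\Rightarrow\quad \hat{x}_0 = (H^T H)^{-1} H^T y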

  37. Best estimate • The state deviation vector that minimizes the least-squares cost function: • Some observations of this solution: • What is the k? • Are all data points equally weighted? • Does this solution consider correlated data or statistical information in the observations? • What happens if we acquire more data?

  38. Best estimate • The state deviation vector that minimizes the least-squares cost function: • Additional Details: • HᵀH is called the normal matrix • If H is full rank, then HᵀH will be positive definite. • If it’s not, then we don’t have a least squares estimate! • The best estimate of the observation errors: ε̂ = y − Hx̂

  39. Example 4.2.1

  40. Example 4.2.1 Or in component form: Expressed in first order form:

  41.–44. Example 4.2.1 (continued) • [The worked matrices for this example appear as images on slides 41–44; only the titles were transcribed.]

  45. Example 4.2.1: Observability • Notice that all variables end up in either the A or the H̃ matrix, but not necessarily both.

  46. Example 4.2.1: Integration Efficiency • Which equations of motion do we need to integrate?

  47. Example 2 • The previous example (4.2.1 from the book) left off at the definitions of the matrices. • The next example takes us through the least squares estimate.

  48. Least Squares Example • Let (note that this is a linear system): • Assume we wish to estimate:

  49. Least Squares Example • Note that HᵀH is always a symmetric matrix.

  50. Least Squares Example Assume: Then
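
Since the specific matrices and numbers on slides 48–50 were shown as images, here is a minimal self-contained MATLAB sketch of the same procedure with hypothetical values (the H, x_true, and noise level below are illustrative assumptions, not the slides’ data):

      % Estimate a 2-parameter state from 4 scalar observations: y = H*x + eps
      x_true  = [1.0; 2.0];                    % hypothetical truth to recover
      H       = [1 0; 0 1; 1 1; 1 2];          % full rank, so H'*H is symmetric positive definite
      y       = H * x_true + 0.01*randn(4,1);  % noisy observations
      % Normal equations: (H'*H) * x_hat = H' * y
      x_hat   = (H' * H) \ (H' * y);
      eps_hat = y - H * x_hat;                 % best estimate of the observation errors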
