
4. Linear Optimal Filters and Predictors

Presentation Transcript


  1. 4. Linear Optimal Filters and Predictors ADSLAB 윤영규 2008. 10. 23

  2. 4. Linear Optimal Filters and Predictors 4.3 KALMAN-BUCY FILTER 4.4 OPTIMAL LINEAR PREDICTORS 4.4.1 Prediction as Filtering 4.4.2 Accommodating Missing Data 4.5 CORRELATED NOISE SOURCES 4.5.1 Correlation between Plant and Measurement Noise 4.5.2 Time-Correlated Measurements 4.6 RELATIONSHIPS BETWEEN KALMAN AND WIENER FILTERS 4.7 QUADRATIC LOSS FUNCTIONS 4.7.1 Quadratic Loss Functions of Estimation Error 4.7.2 Expected Value of a Quadratic Loss Function 4.7.3 Unbiased Estimates and Quadratic Loss

  3. 4.3 KALMAN-BUCY FILTER(1/6) Analogous to the discrete-time case, the continuous-time random process x(t) and the observation z(t) are given by
  $\dot{x}(t) = F(t)x(t) + G(t)w(t)$ (4.27)
  $z(t) = H(t)x(t) + v(t)$ (4.28)
  $E\langle w(t)w^T(s)\rangle = Q(t)\,\delta(t-s)$ (4.29)
  $E\langle v(t)v^T(s)\rangle = R(t)\,\delta(t-s)$ (4.30)
  $E\langle w(t)v^T(s)\rangle = 0$ (4.31)
  where F(t), G(t), H(t), Q(t), and R(t) are n×n, n×n, ℓ×n, n×n, and ℓ×ℓ matrices, respectively. The term $\delta(t-s)$ is the Dirac delta. The covariance matrices Q and R are positive definite.

  4. 4.3 KALMAN-BUCY FILTER(2/6) It is desired to find the estimate of the n-state vector x(t), represented by $\hat{x}(t)$, which is a linear function of the measurements z(t), 0 < t < T, and which minimizes the scalar equation
  $E\langle [x(t) - \hat{x}(t)]^T M\, [x(t) - \hat{x}(t)] \rangle$
  where M is a symmetric positive-definite matrix. The initial estimate and covariance matrix are $\hat{x}_0$ and $P_0$. This section provides a formal derivation of the continuous-time Kalman estimator. A rigorous derivation can be achieved by using the orthogonality principle as in the discrete-time case. In view of the main objective (to obtain efficient and practical estimators), less emphasis is placed on continuous-time estimators.

  5. 4.3 KALMAN-BUCY FILTER(3/6) Let $\Delta t$ be the time interval $[t_{k-1}, t_k]$. As shown in Chapters 2 and 3, the following relationships are obtained:
  $\Phi_{k-1} = e^{F(t_{k-1})\Delta t} = I + F(t_{k-1})\,\Delta t + O(\Delta t^2)$
  where $O(\Delta t^2)$ consists of terms with powers of $\Delta t$ greater than or equal to two. For measurement noise,
  $R_k = \frac{R(t_k)}{\Delta t}$
  and for process noise,
  $Q_k = G(t_k)\,Q(t_k)\,G^T(t_k)\,\Delta t$
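
A minimal numerical sketch of these discretization relations, assuming a hypothetical two-state model; the matrices F, G, Q, R and the interval dt are illustrative values, not taken from the text:

```python
import numpy as np

# Hypothetical continuous-time model (illustrative values only).
F = np.array([[0.0, 1.0],
              [0.0, -0.5]])   # n x n dynamics matrix
G = np.eye(2)                 # n x n process-noise coupling
Q = np.diag([0.0, 0.1])       # process-noise spectral density
R = np.array([[0.01]])        # measurement-noise spectral density
dt = 0.01                     # discretization interval

# First-order discretization, as in the relations above.
Phi = np.eye(2) + F * dt      # Phi ~ I + F*dt + O(dt^2)
Qk = G @ Q @ G.T * dt         # discrete process-noise covariance
Rk = R / dt                   # discrete measurement-noise covariance
```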

  6. 4.3 KALMAN-BUCY FILTER(4/6) Equations 4.24 and 4.26 can be combined:
  $P_k(-) = \Phi_{k-1}\,[I - \bar{K}_{k-1}H_{k-1}]\,P_{k-1}(-)\,\Phi_{k-1}^T + G\,Q\,G^T\,\Delta t$ (4.33)
  By substituting the above relations, one can get the result
  $\frac{P_k(-) - P_{k-1}(-)}{\Delta t} = F\,P_{k-1}(-) + P_{k-1}(-)\,F^T + G\,Q\,G^T - \Phi_{k-1}\,\frac{\bar{K}_{k-1}}{\Delta t}\,H_{k-1}\,P_{k-1}(-)\,\Phi_{k-1}^T + \text{higher order terms}$ (4.34)
  The Kalman gain of Equation 4.19 becomes, in the limit,
  $\bar{K}(t) = \lim_{\Delta t \to 0} \frac{\bar{K}_k}{\Delta t} = P(t)\,H^T(t)\,R^{-1}(t)$ (4.35)

  7. 4.3 KALMAN-BUCY FILTER(5/6) Substituting Equation 4.35 in 4.34 and taking the limit as $\Delta t \to 0$, one obtains the desired result
  $\dot{P}(t) = F(t)P(t) + P(t)F^T(t) - P(t)H^T(t)R^{-1}(t)H(t)P(t) + G(t)Q(t)G^T(t)$ (4.36)
  with $P(0) = P_0$ as the initial condition. This is called the matrix Riccati differential equation. Methods for solving it will be discussed in Section 4.8. The differential equation can be rewritten by using the identity
  $P H^T R^{-1} H P = \bar{K} R \bar{K}^T$
  to transform Equation 4.36 to the form
  $\dot{P}(t) = F(t)P(t) + P(t)F^T(t) - \bar{K}(t)R(t)\bar{K}^T(t) + G(t)Q(t)G^T(t)$ (4.37)
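
The identity used above follows in one line from the gain of Equation 4.35, using the symmetry of P and R:

```latex
\bar{K} R \bar{K}^{T}
  = \left(P H^{T} R^{-1}\right) R \left(P H^{T} R^{-1}\right)^{T}
  = P H^{T} R^{-1} R \, R^{-1} H P
  = P H^{T} R^{-1} H P .
```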

  8. 4.3 KALMAN-BUCY FILTER(6/6) In similar fashion, the state vector update equation can be derived from Equations 4.21 and 4.25 by taking the limit as $\Delta t \to 0$ to obtain the differential equation for the estimate
  $\dot{\hat{x}}(t) = F(t)\hat{x}(t) + \bar{K}(t)\,[z(t) - H(t)\hat{x}(t)]$ (4.38)
  with initial condition $\hat{x}(0) = \hat{x}_0$. Equations 4.35, 4.37, and 4.38 define the continuous-time Kalman estimator, which is also called the Kalman-Bucy filter.
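
A minimal forward-Euler sketch of the filter defined by Equations 4.35, 4.37, and 4.38, assuming a hypothetical two-state, single-output model; the matrices and the stand-in measurement source `z_of_t` are illustrative assumptions, not from the text:

```python
import numpy as np

# Hypothetical time-invariant model (illustrative values only).
F = np.array([[0.0, 1.0],
              [0.0, -0.5]])
G = np.eye(2)
H = np.array([[1.0, 0.0]])
Q = np.diag([0.0, 0.1])
R = np.array([[0.01]])
dt = 0.001

x_hat = np.zeros(2)           # initial estimate x_hat(0)
P = np.eye(2)                 # initial covariance P(0)
R_inv = np.linalg.inv(R)

def z_of_t(t):
    """Stand-in measurement source; replace with real sensor data."""
    return np.array([np.sin(t)])

for k in range(10_000):
    t = k * dt
    K = P @ H.T @ R_inv                                   # gain, Eq. 4.35
    P_dot = F @ P + P @ F.T - K @ R @ K.T + G @ Q @ G.T   # Riccati, Eq. 4.37
    x_dot = F @ x_hat + K @ (z_of_t(t) - H @ x_hat)       # estimate, Eq. 4.38
    P = P + P_dot * dt        # crude fixed-step Euler integration
    x_hat = x_hat + x_dot * dt
```

Fixed-step Euler integration is the crudest choice here; any standard ODE integrator can propagate the pair (x̂, P) instead.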

  9. 4.4 OPTIMAL LINEAR PREDICTORS(1/2) 4.4.1 Prediction as Filtering Prediction is equivalent to filtering when the measurement data are not available or are unreliable. In such cases, the Kalman gain matrix is forced to be zero. Hence, Equations 4.21, 4.25, and 4.38 become
  $\hat{x}_k(+) = \hat{x}_k(-)$
  $\hat{x}_k(-) = \Phi_{k-1}\,\hat{x}_{k-1}(+)$
  and
  $\dot{\hat{x}}(t) = F(t)\,\hat{x}(t)$
  Previous values of the estimates will become the initial conditions for the above equations.
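
A minimal sketch of prediction as filtering in the discrete case, with a hypothetical transition matrix; since the gain is forced to zero, the observational update is never applied:

```python
import numpy as np

# Hypothetical discrete transition matrix (illustrative values).
Phi = np.array([[1.0, 0.01],
                [0.0, 0.995]])

x_hat = np.array([1.0, 0.0])   # last available estimate
# Each step is pure extrapolation: x_hat_k(-) = Phi @ x_hat_{k-1}(+).
for _ in range(100):
    x_hat = Phi @ x_hat
```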

  10. 4.4 OPTIMAL LINEAR PREDICTORS(2/2) 4.4.2 Accommodating Missing Data It sometimes happens in practice that measurements that had been scheduled to occur over some time interval are, in fact, unavailable or unreliable. The estimation accuracy will suffer from the missing information, but the filter can continue to operate without modification. One can continue using the prediction algorithm given in Section 4.4.1 to estimate the state over the data gap, starting from the last available estimate, until the measurements again become useful. It is unnecessary to perform the observational update, because there is no information on which to base the conditioning. In practice, the filter is often run with the measurement sensitivity matrix H = 0 so that, in effect, the only update performed is the temporal update.
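
A minimal sketch of a discrete filter accommodating missing data, assuming a hypothetical model; `None` marks a missing or unreliable sample, for which only the temporal update is performed:

```python
import numpy as np

# Hypothetical discrete-time model (illustrative values only).
Phi = np.array([[1.0, 0.1],
                [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Qk = np.diag([1e-4, 1e-3])
Rk = np.array([[0.01]])

x_hat = np.zeros(2)
P = np.eye(2)
measurements = [np.array([0.1]), None, None, np.array([0.4])]  # None = missing

for z in measurements:
    # Temporal update (always performed).
    x_hat = Phi @ x_hat
    P = Phi @ P @ Phi.T + Qk
    if z is not None:  # observational update only when data exist
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rk)
        x_hat = x_hat + K @ (z - H @ x_hat)
        P = (np.eye(2) - K @ H) @ P
```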

  11. 4.5 CORRELATED NOISE SOURCES(1/4) 4.5.1 Correlation between Plant and Measurement Noise We want to consider the extensions of the results given in Sections 4.2 and 4.3, allowing correlation between the two noise processes (assumed jointly Gaussian). Let the correlation be given by
  $E\langle w_k v_j^T \rangle = C_k\,\delta_{kj}$ for the discrete-time case,
  $E\langle w(t)\,v^T(s) \rangle = C(t)\,\delta(t-s)$ for the continuous-time case.
  For this extension, the discrete-time estimators have the same initial conditions and the same state estimate extrapolation and error covariance extrapolation equations. However, the Kalman gain in the measurement update equations of Table 4.3 is modified as
  $\bar{K}_k = [P_k(-)H_k^T + C_k]\,[H_k P_k(-) H_k^T + H_k C_k + C_k^T H_k^T + R_k]^{-1}$
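
A minimal sketch of the modified gain computation, with illustrative values; `C` stands for the assumed cross-covariance E⟨w v^T⟩:

```python
import numpy as np

# Hypothetical values (illustrative only).
P_minus = np.diag([1.0, 0.5])    # a priori covariance P_k(-)
H = np.array([[1.0, 0.0]])       # l x n measurement sensitivity
R = np.array([[0.01]])           # measurement-noise covariance
C = np.array([[0.002], [0.0]])   # assumed cross-covariance E<w v^T>, n x l

# Modified gain for correlated plant and measurement noise.
S = H @ P_minus @ H.T + H @ C + C.T @ H.T + R   # modified innovation covariance
K = (P_minus @ H.T + C) @ np.linalg.inv(S)
```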

  12. 4.5 CORRELATED NOISE SOURCES(2/4) Similarly, the continuous-time estimator algorithms can be extended to include the correlation. Equation 4.35 is changed as follows:
  $\bar{K}(t) = [P(t)H^T(t) + G(t)C(t)]\,R^{-1}(t)$

  13. 4.5 CORRELATED NOISE SOURCES(3/4) 4.5.2 Time-Correlated Measurements Correlated measurement noise can be modeled by a shaping filter driven by white Gaussian noise (see Section 3.6). Let the measurement model be given by
  $z_k = H_k x_k + v_k$
  where
  $v_k = A_{k-1} v_{k-1} + \zeta_{k-1}$ (4.41)
  and $\zeta_k$ is zero-mean white Gaussian. Equation 4.1 is augmented by Equation 4.41, and the new state vector satisfies the difference equation
  $\begin{bmatrix} x_k \\ v_k \end{bmatrix} = \begin{bmatrix} \Phi_{k-1} & 0 \\ 0 & A_{k-1} \end{bmatrix} \begin{bmatrix} x_{k-1} \\ v_{k-1} \end{bmatrix} + \begin{bmatrix} w_{k-1} \\ \zeta_{k-1} \end{bmatrix}, \qquad z_k = \begin{bmatrix} H_k & I \end{bmatrix} \begin{bmatrix} x_k \\ v_k \end{bmatrix}$
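
A minimal sketch of the state augmentation, with hypothetical dimensions; `Phi`, `A`, and `H` are illustrative values:

```python
import numpy as np

# Hypothetical matrices for the augmentation (illustrative only).
Phi = np.array([[1.0, 0.1],
                [0.0, 1.0]])   # state transition, n x n
A = np.array([[0.9]])          # measurement-noise shaping filter, l x l
H = np.array([[1.0, 0.0]])     # measurement sensitivity, l x n

n, l = Phi.shape[0], A.shape[0]

# Augmented transition matrix: block diagonal in Phi and A.
Phi_aug = np.block([[Phi, np.zeros((n, l))],
                    [np.zeros((l, n)), A]])
# Augmented measurement matrix: z_k = [H  I] [x_k; v_k], noise-free.
H_aug = np.hstack([H, np.eye(l)])
```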

  14. 4.5 CORRELATED NOISE SOURCES(4/4) The measurement noise on the augmented model is zero, $R_k = 0$. The estimator algorithm will work as long as $H_k P_k(-) H_k^T$ (formed with the augmented matrices) is invertible. Details of the numerical difficulties of this problem (when $H_k P_k(-) H_k^T$ is singular) are given in Chapter 6. For continuous-time estimators, the augmentation does not work because $R^{-1}(t)$ is required; therefore, $R^{-1}(t)$ must exist. Alternate techniques are required. For detailed information, see Gelb et al.

  15. 4.6 RELATIONSHIPS BETWEEN KALMAN AND WIENER FILTERS(1/2) The Wiener filter is defined for stationary systems in continuous time, and the Kalman filter is defined for either stationary or nonstationary systems in either discrete time or continuous time, but with finite state dimension. To demonstrate the connections on problems satisfying both sets of constraints, take the continuous-time Kalman-Bucy estimator equations of Section 4.3, letting F, G, and H be constants, the noises be stationary (Q and R constant), and the filter reach steady state (P constant). That is, as $t \to \infty$, $\dot{P}(t) \to 0$. The Riccati differential equation from Section 4.3 becomes the algebraic Riccati equation for continuous-time systems
  $0 = F P + P F^T - P H^T R^{-1} H P + G Q G^T$
  The positive-definite solution of this algebraic equation is the steady-state value of the covariance matrix, $P(\infty)$. The Kalman-Bucy filter equation in steady state is then
  $\dot{\hat{x}}(t) = F\hat{x}(t) + \bar{K}\,[z(t) - H\hat{x}(t)]$
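
A minimal sketch of the steady-state design, assuming hypothetical constant matrices; it uses SciPy's `solve_continuous_are` on the filtering (dual) form of the algebraic Riccati equation:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical constant model (illustrative values only).
F = np.array([[0.0, 1.0],
              [0.0, -0.5]])
G = np.eye(2)
H = np.array([[1.0, 0.0]])
Q = np.diag([0.0, 0.1])
R = np.array([[0.01]])

# The filtering Riccati equation  F P + P F' - P H' R^-1 H P + G Q G' = 0
# is the control-form ARE with a = F', b = H'.
P_inf = solve_continuous_are(F.T, H.T, G @ Q @ G.T, R)
K_bar = P_inf @ H.T @ np.linalg.inv(R)   # steady-state gain
```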

  16. 4.6 RELATIONSHIPS BETWEEN KALMAN AND WIENER FILTERS(2/2) Take the Laplace transform of both sides of this equation, assuming that the initial conditions are equal to zero, to obtain
  $s\hat{X}(s) = F\hat{X}(s) + \bar{K}\,[Z(s) - H\hat{X}(s)]$
  where $\hat{X}(s)$ and $Z(s)$ are the Laplace transforms of $\hat{x}(t)$ and $z(t)$. This has the solution
  $\hat{X}(s) = [sI - F + \bar{K}H]^{-1}\,\bar{K}\,Z(s)$
  where the steady-state gain
  $\bar{K} = P(\infty)\,H^T R^{-1}$
  The transfer function $[sI - F + \bar{K}H]^{-1}\bar{K}$ represents the steady-state Kalman-Bucy filter, which is identical to the Wiener filter [30].
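
A small sketch evaluating that transfer function numerically, assuming `F` and `H` as before and a hypothetical steady-state gain `K_bar` such as one obtained from the design sketched above:

```python
import numpy as np

F = np.array([[0.0, 1.0],
              [0.0, -0.5]])
H = np.array([[1.0, 0.0]])
K_bar = np.array([[3.0], [4.5]])   # hypothetical steady-state gain

def filter_tf(s):
    """Steady-state filter transfer function [sI - F + K*H]^-1 K."""
    n = F.shape[0]
    return np.linalg.inv(s * np.eye(n) - F + K_bar @ H) @ K_bar

# Frequency response at omega = 1 rad/s (s = j*omega).
print(filter_tf(1j * 1.0))
```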

  17. 4.7 QUADRATIC LOSS FUNCTIONS(1/5) The Kalman filter minimizes any quadratic loss function of estimation error. Just the fact that it is unbiased is sufficient to prove this property, but saying that the estimate is unbiased is equivalent to saying that $\hat{x} = E\langle x \rangle$. That is, the estimated value is the mean of the probability distribution of the state. 4.7.1 Quadratic Loss Functions of Estimation Error A loss function or penalty function is a real-valued function of the outcome of a random event. A loss function reflects the value of the outcome. Value concepts can be somewhat subjective. In gambling, for example, your perceived loss function for the outcome of a bet may depend upon your personality and current state of winnings, as well as on how much you have riding on the bet. Loss Functions of Estimates. In estimation theory, the perceived loss is generally a function of the estimation error (the difference between an estimated function of the outcome and its actual value), and it is generally a monotonically increasing function of the absolute value of the estimation error.

  18. 4.7 QUADRATIC LOSS FUNCTIONS(2/5) Quadratic Loss Functions. If x is a real n-vector (variate) associated with the outcome of an event and $\hat{x}$ is an estimate of x, then a quadratic loss function for the estimation error $\hat{x} - x$ has the form
  $L(\hat{x} - x) = (\hat{x} - x)^T M\, (\hat{x} - x)$
  where M is a symmetric positive-definite matrix. One may as well assume that M is symmetric, because the skew-symmetric part of M does not influence the quadratic loss function. The reason for assuming positive definiteness is to assure that the loss is zero only if the error is zero, and that loss is a monotonically increasing function of the absolute estimation error.
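
A tiny numerical check of the remark that the skew-symmetric part of M does not influence the loss, using arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))   # arbitrary, not necessarily symmetric
M_sym = 0.5 * (M + M.T)           # symmetric part of M
e = rng.standard_normal(3)        # some estimation error x_hat - x

# The quadratic form e' M e depends only on the symmetric part of M.
print(e @ M @ e, e @ M_sym @ e)   # identical up to rounding
```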

  19. 4.7 QUADRATIC LOSS FUNCTIONS(3/5) 4.7.2 Expected Value of a Quadratic Loss Function Loss and Risk. The expected value of loss is sometimes called risk. It will be shown that the expected value of a quadratic loss function of the estimation error $\tilde{x} = \hat{x} - x$ is a quadratic function of $\hat{x}$, where x is a random variable. This demonstration will depend upon the following identities:
  $\tilde{x} \triangleq \hat{x} - x$ (4.43)
  $\bar{x} \triangleq E\langle x \rangle$ (4.44)
  $E\langle \tilde{x} \rangle = \hat{x} - \bar{x}$ (4.45)
  $P \triangleq E\langle (x - \bar{x})(x - \bar{x})^T \rangle$ (4.46)
  $s = \operatorname{trace}[s]$ for any scalar s (4.47)
  $\operatorname{trace}[AB] = \operatorname{trace}[BA]$ (4.48)
  $E\langle \operatorname{trace}[A] \rangle = \operatorname{trace}[E\langle A \rangle]$ (4.49)

  20. 4.7 QUADRATIC LOSS FUNCTIONS(4/5) Risk of a Quadratic Loss Function. In the case of the quadratic loss function defined above, the expected loss (risk) will be
  $\mathcal{R}(\hat{x}) = E\langle L(\hat{x} - x) \rangle$ (4.50)
  $= E\langle (\hat{x} - x)^T M (\hat{x} - x) \rangle$ (4.51)
  $= E\langle \operatorname{trace}[(\hat{x} - x)^T M (\hat{x} - x)] \rangle$ (4.52)
  $= E\langle \operatorname{trace}[M (\hat{x} - x)(\hat{x} - x)^T] \rangle$ (4.53)
  $= \operatorname{trace}[M\, E\langle (\hat{x} - x)(\hat{x} - x)^T \rangle]$ (4.54)
  $= (\hat{x} - \bar{x})^T M (\hat{x} - \bar{x}) + \operatorname{trace}[M P]$ (4.55)
  which is a quadratic function of $\hat{x}$ with the added nonnegative constant $\operatorname{trace}[MP]$.
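
A quick Monte Carlo check of Equation 4.55, with an illustrative Gaussian distribution for x and an arbitrary symmetric positive-definite M:

```python
import numpy as np

rng = np.random.default_rng(1)
x_bar = np.array([1.0, -2.0])                # mean of x
P = np.array([[1.0, 0.3], [0.3, 0.5]])       # covariance of x
M = np.array([[2.0, 0.1], [0.1, 1.0]])       # symmetric positive definite
x_hat = np.array([0.5, -1.0])                # some fixed estimate

# Empirical risk E<(x_hat - x)' M (x_hat - x)>.
x = rng.multivariate_normal(x_bar, P, size=200_000)
e = x_hat - x
risk_mc = np.mean(np.einsum("ij,jk,ik->i", e, M, e))

# Closed form from Eq. 4.55.
d = x_hat - x_bar
risk_cf = d @ M @ d + np.trace(M @ P)
print(risk_mc, risk_cf)                      # should agree closely
```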

  21. 4.7 QUADRATIC LOSS FUNCTIONS(5/5) 4.7.3 Unbiased Estimates and Quadratic Loss The estimate $\hat{x} = \bar{x}$ minimizes the expected value of any positive-definite quadratic loss function. From the above derivation,
  $\frac{\partial}{\partial \hat{x}} E\langle L(\hat{x} - x) \rangle = 2M(\hat{x} - \bar{x})$ (4.56)
  $= 0$ (4.57)
  only if $\hat{x} = \bar{x}$, where it has been assumed only that the mean and covariance are defined for the probability distribution of x. This demonstrates the utility of quadratic loss functions in estimation theory: they always lead to the mean as the estimate with minimum expected loss (risk). Unbiased Estimates. An estimate $\hat{x}$ is called unbiased if the expected estimation error $E\langle \tilde{x} \rangle = 0$. What has just been shown is that an unbiased estimate minimizes the expected value of any quadratic loss function of estimation error.

  22. Summary: derivation of the continuous-time Kalman estimator. • Using the equations from the preceding material, derivations of the Kalman-Bucy filter, optimal linear predictors, correlated noise sources, relationships between Kalman and Wiener filters, and quadratic loss functions.
