
SYSTEMS Identification

SYSTEMS Identification. Ali Karimpour, Assistant Professor, Ferdowsi University of Mashhad. Reference: Lennart Ljung, “System Identification: Theory for the User” (1999). Lecture 3: Simulation and Prediction. Topics to be covered include: simulation, prediction, observers.





Presentation Transcript


  1. SYSTEMS Identification. Ali Karimpour, Assistant Professor, Ferdowsi University of Mashhad. Reference: Lennart Ljung, “System Identification: Theory for the User” (1999).

  2. Lecture 3: Simulation and Prediction. Topics to be covered include: • Simulation. • Prediction. • Observers.

  3. Simulation. Suppose the system description is given by
     $y(t) = G(q)u(t) + H(q)e(t)$.
     Let the input $u(t)$, $t = 1, 2, \dots$, be given. The undisturbed output is
     $y_u(t) = G(q)u(t)$.
     Let $e(t)$ be a sequence of random numbers generated by a computer. The disturbance is
     $v(t) = H(q)e(t)$,
     and the simulated output is
     $y(t) = y_u(t) + v(t)$.
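A minimal numerical sketch of this simulation step (my own illustration, not from the lecture: it assumes a hypothetical first-order $G(q) = bq^{-1}/(1 + aq^{-1})$ and noise model $H(q) = 1 + cq^{-1}$ with arbitrary coefficients):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)

# Hypothetical example system: G(q) = b q^-1/(1 + a q^-1), H(q) = 1 + c q^-1.
a, b, c = -0.7, 1.0, 0.5
N = 500

u = rng.standard_normal(N)        # chosen input sequence
e = rng.standard_normal(N)        # e(t) generated "by a computer"

y_u = lfilter([0, b], [1, a], u)  # undisturbed output y_u(t) = G(q) u(t)
v = lfilter([1, c], [1], e)       # disturbance v(t) = H(q) e(t)
y = y_u + v                       # simulated output y(t) = y_u(t) + v(t)
```

The same hypothetical G and H are reused in the sketches below.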

  4. Prediction (invertibility of noise model). The disturbance is
     $v(t) = H(q)e(t) = \sum_{k=0}^{\infty} h(k)e(t-k)$, with $h(0) = 1$.
     $H$ must be stable, so
     $\sum_{k=0}^{\infty} |h(k)| < \infty$.
     We now ask when the noise model is invertible, i.e., when $e(t)$ can be computed from $v(s)$, $s \le t$.

  5. Prediction (invertibility of noise model). Lemma 1: Consider $v(t)$ defined by
     $v(t) = \sum_{k=0}^{\infty} h(k)e(t-k)$.
     Assume that the filter $H$ is stable and let
     $H(q) = \sum_{k=0}^{\infty} h(k)q^{-k}$.
     Assume that $1/H(z)$ is stable and
     $1/H(z) = \sum_{k=0}^{\infty} \tilde{h}(k)z^{-k}$.
     Define $H^{-1}(q)$ by $H^{-1}(q) = \sum_{k=0}^{\infty} \tilde{h}(k)q^{-k}$.
     Then $H^{-1}(q)$ is the inverse of $H(q)$ and
     $e(t) = H^{-1}(q)v(t)$.
     Exercise 1: Prove Lemma 1.

  6. Prediction (invertibility of noise model). Example 1: a moving-average process. Let
     $v(t) = e(t) + c\,e(t-1)$, that is, $H(q) = 1 + cq^{-1}$.
     This is a moving average of order 1, MA(1). If $|c| < 1$, then the inverse filter is determined as
     $H^{-1}(q) = \sum_{k=0}^{\infty} (-c)^k q^{-k}$,
     so $e(t)$ is
     $e(t) = \sum_{k=0}^{\infty} (-c)^k v(t-k)$.
     Exercise 2: Illustrate the validity of Example 1 numerically for $c = 0.2$, $c = 0.9$, and $c = 1.2$.
     Exercise 3: Exercise 3T.1.
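In the spirit of Exercise 2, a sketch (my own, not the lecture's solution) that recovers $e(t)$ from $v(t)$ with the inverse filter: in exact arithmetic the recursion $\hat{e}(t) = v(t) - c\,\hat{e}(t-1)$ inverts $H$ for any $c$, but for $c = 1.2$ the inverse filter is unstable and rounding errors are amplified like $|c|^t$:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
N = 500
e = rng.standard_normal(N)

for c in (0.2, 0.9, 1.2):
    v = lfilter([1, c], [1], e)      # v(t) = e(t) + c e(t-1)
    e_hat = lfilter([1], [1, c], v)  # e_hat(t) = v(t) - c e_hat(t-1)
    print(f"c = {c}: max |e - e_hat| = {np.max(np.abs(e - e_hat)):.3g}")
```

For c = 0.2 and 0.9 the recovery error stays at rounding level; for c = 1.2 it blows up, matching the $|c| < 1$ condition.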

  7. Prediction (one-step-ahead prediction of v). Now we want to predict $v(t)$ based on the previous observations $v(s)$, $s \le t-1$. Write
     $v(t) = \sum_{k=0}^{\infty} h(k)e(t-k) = e(t) + \sum_{k=1}^{\infty} h(k)e(t-k)$.
     The knowledge of $v(s)$, $s \le t-1$, implies the knowledge of $e(s)$, $s \le t-1$, according to invertibility. Also, the term
     $m(t) = \sum_{k=1}^{\infty} h(k)e(t-k)$
     is known at time $t-1$. Suppose that the PDF of $e(t)$ is denoted by $f_e(x)$. Then the conditional PDF of $v(t)$ is
     $f_v(x \mid v(s),\, s \le t-1) = f_e(x - m(t))$.

  8. Prediction (one-step-ahead prediction of v). So the (posterior) probability density function of $v(t)$, given observations up to time $t-1$, is $f_e(x - m(t))$. Two natural predictors follow:
     1. Maximum a posteriori prediction (MAP): use the value for which the PDF attains its maximum.
     2. Conditional expectation: use the mean value of the distribution in question.
     We will mostly use the latter, the conditional expectation.
     Exercise 4: Exercise 3D.3. Exercise 5: Exercise 3E.4.
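A quick numerical check of the conditional-expectation predictor (my own sketch, assuming the MA(1) noise model of Example 1 with zero-mean $e$): the conditional mean of $v(t)$ given the past is $m(t) = c\,e(t-1)$, and the resulting prediction error is exactly $e(t)$:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(2)
c, N = 0.9, 10_000
e = rng.standard_normal(N)
v = lfilter([1, c], [1], e)              # v(t) = e(t) + c e(t-1)

m = np.concatenate(([0.0], c * e[:-1]))  # m(t) = c e(t-1), known at time t-1
v_hat = m                                # conditional-expectation predictor
print(np.allclose(v - v_hat, e))         # True: prediction error is e(t)
print(np.var(v - v_hat), np.var(v))      # error variance ~1, versus Var(v) ~ 1 + c^2
```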

  9. Prediction (one-step-ahead prediction of v). The conditional expectation is
     $\hat{v}(t \mid t-1) = m(t) = [H(q) - 1]e(t) = [1 - H^{-1}(q)]v(t)$.
     Alternative formula:
     $H(q)\hat{v}(t \mid t-1) = [H(q) - 1]v(t)$.
     Suppose that $H(q)$ is inversely stable and monic; what about $H^{-1}(q)$? It is monic as well, so $1 - H^{-1}(q)$ contains at least one pure delay, and the predictor indeed uses only $v(s)$, $s \le t-1$.
     Exercise 6: Exercise 3T.1.

  10. Prediction (one-step-ahead prediction of v). Example 3.2: a moving-average process. Let
     $v(t) = e(t) + c\,e(t-1)$, i.e., $H(q) = 1 + cq^{-1}$.
     The conditional expectation is
     $\hat{v}(t \mid t-1) = [1 - H^{-1}(q)]v(t) = \frac{cq^{-1}}{1 + cq^{-1}}\,v(t)$.
     The alternative formula gives the recursion
     $\hat{v}(t \mid t-1) = -c\,\hat{v}(t-1 \mid t-2) + c\,v(t-1)$.
     That is, the predictor is computed from observed data only.
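A sketch of the recursion in Example 3.2 (same hypothetical MA(1) setup; the recursion is started from $\hat{v}(0 \mid -1) = 0$): it uses only observed $v$, and its prediction error reproduces $e(t)$:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(3)
c, N = 0.9, 1000
e = rng.standard_normal(N)
v = lfilter([1, c], [1], e)             # v(t) = e(t) + c e(t-1)

# v_hat(t|t-1) = -c v_hat(t-1|t-2) + c v(t-1),
# i.e. v_hat = [c q^-1 / (1 + c q^-1)] v:
v_hat = lfilter([0, c], [1, c], v)
print(np.max(np.abs((v - v_hat) - e)))  # ~0: the error is the innovation e(t)
```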

  11. Prediction (one-step-ahead prediction of v). Example 3.3. Let $H(q)$ be a rational noise model, say $H(q) = C(q)/A(q)$ with $C$ and $A$ monic polynomials in $q^{-1}$. The alternative formula
     $H(q)\hat{v}(t \mid t-1) = [H(q) - 1]v(t)$, i.e., $C(q)\hat{v}(t \mid t-1) = [C(q) - A(q)]v(t)$,
     again yields a finite difference-equation recursion for $\hat{v}(t \mid t-1)$ in terms of past data. That is, the one-step-ahead predictor can be realized as a difference equation driven by the observed $v$.

  12. Prediction (one-step-ahead prediction of y). Let
     $y(t) = G(q)u(t) + v(t)$, with $v(t) = H(q)e(t)$.
     Suppose $v(s)$ is known for $s \le t-1$ and $u(s)$ is known for $s \le t$. Since
     $\hat{v}(t \mid t-1) = [1 - H^{-1}(q)]v(t)$ and $v(t) = y(t) - G(q)u(t)$,
     the one-step-ahead predictor of $y$ is
     $\hat{y}(t \mid t-1) = G(q)u(t) + [1 - H^{-1}(q)][y(t) - G(q)u(t)] = H^{-1}(q)G(q)u(t) + [1 - H^{-1}(q)]y(t)$.

  13. Prediction (one-step-ahead prediction of y). The prediction error is
     $y(t) - \hat{y}(t \mid t-1) = H^{-1}(q)[y(t) - G(q)u(t)] = e(t)$.
     So the variable $e(t)$ is the part of $y(t)$ that cannot be predicted from past data; it is also called the innovation at time $t$.
     Unknown initial conditions: since only data over the interval $[0, t-1]$ exist, the predictor filters can only be started with (erroneous) zero initial conditions. The exact prediction involves time-varying filter coefficients and can be computed using the Kalman filter.
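A sketch of the one-step-ahead predictor of $y$ (reusing the hypothetical first-order $G$ and MA(1) $H$ from the simulation sketch; all filters here are started with zero initial conditions, so the innovation identity holds exactly):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(4)
a, b, c, N = -0.7, 1.0, 0.5, 1000
u, e = rng.standard_normal(N), rng.standard_normal(N)

gu = lfilter([0, b], [1, a], u)         # G(q) u(t)
y = gu + lfilter([1, c], [1], e)        # y = G u + H e

# y_hat(t|t-1) = H^-1(q) G(q) u(t) + [1 - H^-1(q)] y(t)
y_hat = lfilter([1], [1, c], gu) + (y - lfilter([1], [1, c], y))
print(np.max(np.abs((y - y_hat) - e)))  # ~0: prediction error = innovation e(t)
```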

  14. Prediction (k-step-ahead prediction of v). First of all we need the k-step-ahead prediction of v. Write
     $v(t+k) = \sum_{l=0}^{\infty} h(l)e(t+k-l) = \underbrace{\sum_{l=0}^{k-1} h(l)e(t+k-l)}_{\text{unknown at } t} + \underbrace{\sum_{l=k}^{\infty} h(l)e(t+k-l)}_{\text{known at } t}$.
     The k-step-ahead predictor of v is therefore
     $\hat{v}(t+k \mid t) = \sum_{l=k}^{\infty} h(l)e(t+k-l) = \tilde{H}_k(q)e(t) = \tilde{H}_k(q)H^{-1}(q)v(t)$,
     where $\tilde{H}_k(q) = \sum_{l=0}^{\infty} h(l+k)q^{-l}$.

  15. Prediction (k-step-ahead prediction of y). The k-step-ahead prediction of v is
     $\hat{v}(t+k \mid t) = \tilde{H}_k(q)H^{-1}(q)v(t)$.
     Suppose we have measured $y(s)$ for $s \le t$ and $u(s)$ is known for $s \le t+k-1$. With $v(t) = y(t) - G(q)u(t)$, let
     $\hat{y}(t+k \mid t) = G(q)u(t+k) + \tilde{H}_k(q)H^{-1}(q)[y(t) - G(q)u(t)]$.

  16. Prediction (k-step-ahead prediction of y). The k-step-ahead prediction of y can be rewritten as
     $\hat{y}(t \mid t-k) = W_k(q)G(q)u(t) + [1 - W_k(q)]y(t)$,
     where $W_k(q) = \bar{H}_k(q)H^{-1}(q)$ and $\bar{H}_k(q) = \sum_{l=0}^{k-1} h(l)q^{-l}$.
     Define the prediction error of the k-step-ahead prediction as
     $\tilde{y}(t \mid t-k) = y(t) - \hat{y}(t \mid t-k)$.
     Exercise 7: Show that the k-step-ahead prediction of $y$ can also be viewed as a one-step-ahead prediction associated with the model
     $y(t) = G(q)u(t) + H_k(q)e(t)$, where $H_k(q) = H(q)[\bar{H}_k(q)]^{-1}$.
     Exercise 8: Show that the prediction error of the k-step-ahead prediction is a moving average of $e(t+k), \dots, e(t+1)$.
     Exercise 9: Exercise 3E.2.
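A sketch of the k-step-ahead predictor (my own construction: the same hypothetical $G$, an ARMA(1,1) noise model $H(q) = (1 + cq^{-1})/(1 + dq^{-1})$ so that the horizon is non-trivial, and the tail $\tilde{H}_k$ obtained by truncating the impulse response of $H$ at $L$ terms). It also checks Exercise 8 numerically: the k-step prediction error equals the moving average $\sum_{l=0}^{k-1} h(l)e(t+k-l)$:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(5)
a, b = -0.7, 1.0              # hypothetical G(q) = b q^-1 / (1 + a q^-1)
c, d = 0.5, -0.6              # hypothetical H(q) = (1 + c q^-1) / (1 + d q^-1)
k, N, L = 3, 2000, 50         # horizon, data length, impulse-response truncation

u, e = rng.standard_normal(N), rng.standard_normal(N)
gu = lfilter([0, b], [1, a], u)
y = gu + lfilter([1, c], [1, d], e)

h = lfilter([1, c], [1, d], np.r_[1.0, np.zeros(L - 1)])  # h(0), ..., h(L-1)

e_rec = lfilter([1, d], [1, c], y - gu)  # e(t) = H^-1(q) [y(t) - G(q) u(t)]
v_hat = lfilter(h[k:], [1], e_rec)       # v_hat(t+k|t) = sum_{l>=k} h(l) e(t+k-l)
y_hat = gu[k:] + v_hat[:-k]              # y_hat(t+k|t), aligned at time t+k

err = y[k:] - y_hat                      # k-step prediction error
ma = lfilter(h[:k], [1], e)[k:]          # sum_{l<k} h(l) e(t+k-l)
print(np.max(np.abs(err - ma)))          # ~0 (up to the |d|^L truncation)
```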

  17. Observer. In many cases noise is ignored, so the deterministic model
     $y(t) = G(q)u(t)$
     is used. This description is used for "computing," "guessing," or "predicting," so we need the concept of an observer. As an example let
     $G(q) = \frac{bq^{-1}}{1 + fq^{-1}}$.
     This means that
     $y(t) + f\,y(t-1) = b\,u(t-1)$.
     So we have
     $y(t) = \frac{bq^{-1}}{1 + fq^{-1}}\,u(t) = \sum_{k=1}^{\infty} b(-f)^{k-1}u(t-k)$.

  18. Observer. So we have the two predictors
     (I)  $\hat{y}(t \mid t-1) = \sum_{k=1}^{\infty} b(-f)^{k-1}u(t-k)$,
     (II) $\hat{y}(t \mid t-1) = -f\,y(t-1) + b\,u(t-1)$.
     If input-output data are lacking prior to time $t = 0$, the first one suffers from an error, whereas the second one is still correct for $t > 0$. On the other hand, the first one is unaffected by measurement errors in the output, while the second one is affected by them. So the choice of predictor can be seen as a trade-off between sensitivity with respect to output measurement errors and rapidly decaying effects of erroneous initial conditions.
     Exercise (3E.3): Show that for
     $G(q) = \frac{bq^{-1}}{1 + fq^{-1}}$
     and the noise model $H(q) = 1$, (I) is the natural predictor, whereas the noise model
     $H(q) = \frac{1}{1 + fq^{-1}}$
     leads to the predictor (II).
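A sketch contrasting predictors (I) and (II) (hypothetical values $f = -0.9$, $b = 1$; my own illustration): with data missing before $t = 0$, (I) starts with an initial-condition error that decays like $|f|^t$, while (II) is driven by the measured output and therefore inherits its measurement noise:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(6)
f, b, N = -0.9, 1.0, 300
u_all = rng.standard_normal(N + 100)
y_all = lfilter([0, b], [1, f], u_all)       # y(t) = -f y(t-1) + b u(t-1)

u, y = u_all[100:], y_all[100:]              # data available only from "t = 0"
ym = y + 0.1 * rng.standard_normal(N)        # measured output with noise m(t)

yI = lfilter([0, b], [1, f], u)              # (I): simulation from u, zero ICs
yII = np.r_[0.0, -f * ym[:-1] + b * u[:-1]]  # (II): -f y_m(t-1) + b u(t-1)

print(np.abs(yI - y)[:3], np.abs(yI - y)[-3:])  # IC error decays like |f|^t
print(np.std(yII - y))                          # noise floor ~ |f| * 0.1
```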

  19. Observer: a family of predictors for
     $y(t) = G(q)u(t)$.
     The choice of predictor can be seen as a trade-off between sensitivity with respect to output measurement errors and rapidly decaying effects of erroneous initial conditions. To introduce design variables for this trade-off, choose a filter $W(q)$ such that
     $1 - W(q) = \sum_{l=k}^{\infty} w_l q^{-l}$.
     Applying $W(q)$ to both sides of the model, we have
     $W(q)y(t) = W(q)G(q)u(t)$,
     which means that
     $y(t) = [1 - W(q)]y(t) + W(q)G(q)u(t)$.
     The right-hand side of this expression depends only on $y(s)$, $s \le t-k$, and $u(s)$, $s \le t-1$ (since $G$ contains a delay). So take
     $\hat{y}(t \mid t-k) = [1 - W(q)]y(t) + W(q)G(q)u(t)$.

  20. Observer: a family of predictors. The trade-off considerations for the choice of $W(q)$ could then be:
     1. Select $W(q)$ so that both $W$ and $WG$ have rapidly decaying filter coefficients, in order to minimize the influence of erroneous initial conditions.
     2. Select $W(q)$ so that measurement imperfections in $y(t)$ are maximally attenuated.
     The latter issue can be studied in the frequency domain. Suppose that the measured output is
     $y_m(t) = y(t) + m(t)$,
     where $m(t)$ is a measurement error. The prediction error is
     $y(t) - \hat{y}(t \mid t-k) = -[1 - W(q)]m(t)$.

  21. Observer: a family of predictors. The prediction error is
     $\tilde{y}(t) = y(t) - \hat{y}(t \mid t-k) = -[1 - W(q)]m(t)$.
     The spectrum of this error is, according to Theorem 2.2,
     $\Phi_{\tilde{y}}(\omega) = |1 - W(e^{i\omega})|^2\,\Phi_m(\omega)$.
     The problem is thus to select $W$, such that the error spectrum has an acceptable size and a suitable shape.
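A sketch of this frequency-domain criterion (a hypothetical one-parameter family for $k = 1$: $1 - W(q) = (1-\beta)q^{-1}/(1 - \beta q^{-1})$, with white measurement noise $\Phi_m(\omega) = 0.01$): freqz evaluates $1 - W$ on the unit circle, giving the error spectrum $|1 - W(e^{i\omega})|^2\Phi_m(\omega)$ for each $\beta$:

```python
import numpy as np
from scipy.signal import freqz

Phi_m = 0.01                            # white measurement-noise spectrum
for beta in (0.0, 0.5, 0.9):
    # 1 - W(q) = (1 - beta) q^-1 / (1 - beta q^-1)
    w, h = freqz([0, 1 - beta], [1, -beta], worN=512)
    err_spec = np.abs(h) ** 2 * Phi_m   # |1 - W(e^{jw})|^2 Phi_m(w)
    print(f"beta = {beta}: error spectrum at w = 0: {err_spec[0]:.4f}, "
          f"at w = pi: {err_spec[-1]:.5f}")
```

Larger $\beta$ attenuates high-frequency measurement noise more strongly, but makes the coefficients of $1 - W$ decay more slowly, so initial-condition errors persist longer: exactly the trade-off described above.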

  22. Observer: the fundamental role of the predictor filter. Let $W_u(q)$ and $W_y(q)$ be two linear filters. Then $y$ is predicted as
     $\hat{y}(t \mid t-1) = W_u(q)u(t) + W_y(q)y(t)$.
     All the predictors above are linear filters of this form, since
     $\hat{y}(t \mid t-1) = \sum_{k=1}^{\infty} w_u(k)u(t-k) + \sum_{k=1}^{\infty} w_y(k)y(t-k)$;
     for the one-step-ahead predictor, $W_u(q) = H^{-1}(q)G(q)$ and $W_y(q) = 1 - H^{-1}(q)$.
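A closing sketch of this generic predictor form (my own wrapper around the earlier hypothetical example filters): any predictor in this lecture is just two linear-filtering operations:

```python
import numpy as np
from scipy.signal import lfilter

def predict(Wu, Wy, u, y):
    """Generic linear predictor y_hat(t) = Wu(q) u(t) + Wy(q) y(t);
    Wu and Wy are (numerator, denominator) coefficient pairs in q^-1."""
    return lfilter(*Wu, u) + lfilter(*Wy, y)

# One-step-ahead choice for G = b q^-1/(1 + a q^-1), H = 1 + c q^-1:
a, b, c = -0.7, 1.0, 0.5
Wu = ([0, b], np.convolve([1, a], [1, c]))  # W_u = H^-1 G
Wy = ([0, c], [1, c])                       # W_y = 1 - H^-1 = c q^-1/(1 + c q^-1)

rng = np.random.default_rng(7)
u, e = rng.standard_normal(500), rng.standard_normal(500)
y = lfilter([0, b], [1, a], u) + lfilter([1, c], [1], e)
print(np.max(np.abs((y - predict(Wu, Wy, u, y)) - e)))  # ~0: innovations again
```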
