
Recursive Least-Squares (RLS) Adaptive Filters


Presentation Transcript


  1. Recursive Least-Squares (RLS) Adaptive Filters

  2. Definition

  • With the arrival of new data samples, estimates are updated recursively.
  • Introduce a weighting factor to the sum-of-error-squares definition:

    $\mathcal{E}(n) = \sum_{i=1}^{n} \beta(n,i)\,|e(i)|^2, \qquad e(i) = d(i) - \mathbf{w}^H(n)\,\mathbf{u}(i)$

    There are two time indices (n: outer, i: inner).
  • A common choice is the exponential weighting factor $\beta(n,i) = \lambda^{n-i}$, $i = 1, \ldots, n$, where $\lambda$ is the forgetting factor: real, positive, $\lambda < 1$, typically $\lambda \to 1$. Setting $\lambda = 1$ gives ordinary LS. The quantity $1/(1-\lambda)$ is a measure of the memory of the algorithm (ordinary LS has infinite memory).
  • $\mathbf{w}(n)$ is kept fixed during the observation interval $1 \le i \le n$ for which the cost function $\mathcal{E}(n)$ is defined.
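
To make the exponential weighting concrete, here is a minimal Python/NumPy sketch (not from the slides; the function name weighted_cost and all variable names are illustrative) that evaluates the cost for a fixed w(n):

```python
import numpy as np

def weighted_cost(d, U, w, lam):
    """Exponentially weighted sum of squared errors at time n.

    d   : desired responses d(1)..d(n), shape (n,)
    U   : tap-input vectors u(1)..u(n) stacked as rows, shape (n, M)
    w   : filter coefficients w(n), held fixed over 1 <= i <= n
    lam : forgetting factor lambda, 0 < lam <= 1
    """
    n = len(d)
    e = d - U @ np.conj(w)                   # e(i) = d(i) - w^H u(i)
    beta = lam ** (n - np.arange(1, n + 1))  # beta(n, i) = lam^(n-i)
    return np.sum(beta * np.abs(e) ** 2)
```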

  3. Definition

  4. Regularisation

  • The LS cost function can be ill-posed: there is insufficient information in the input data to reconstruct the input-output mapping uniquely, and there is uncertainty in the mapping due to measurement noise.
  • To overcome the problem, take 'prior information' into account by adding a regularisation term, which smooths and stabilises the solution:

    $\mathcal{E}(n) = \sum_{i=1}^{n} \lambda^{n-i}\,|e(i)|^2 + \delta\,\lambda^{n}\,\|\mathbf{w}(n)\|^2$

    where $\delta$ is the regularisation parameter.
  • Prewindowing is assumed (not the covariance method)!

  5. Normal Equations

  • From the method of least-squares we know that the optimum coefficients satisfy the normal equations; with the regularised cost, the time-average autocorrelation matrix of the input u(n) becomes

    $\mathbf{\Phi}(n) = \sum_{i=1}^{n} \lambda^{n-i}\,\mathbf{u}(i)\,\mathbf{u}^H(i) + \delta\,\lambda^{n}\,\mathbf{I}$

  • Similarly, the time-average cross-correlation vector between the tap inputs and the desired response (unaffected by the regularisation) is

    $\mathbf{z}(n) = \sum_{i=1}^{n} \lambda^{n-i}\,\mathbf{u}(i)\,d^*(i)$

  • Hence, the optimum (in the LS sense) filter coefficients must satisfy

    $\mathbf{\Phi}(n)\,\hat{\mathbf{w}}(n) = \mathbf{z}(n)$

  • The autocorrelation matrix is always non-singular due to the term $\delta\lambda^{n}\mathbf{I}$ ($\mathbf{\Phi}^{-1}$ always exists!).
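
As a reference point for the recursions that follow, the regularised normal equations can also be solved directly (non-recursively) at any single time n. A sketch, assuming real-valued signals; the function name solve_normal_equations and the variable names are my own:

```python
import numpy as np

def solve_normal_equations(d, U, lam, delta):
    """Batch solve of Phi(n) w = z(n) for real-valued data.

    Phi(n) = sum_i lam^(n-i) u(i) u(i)^T + delta * lam^n * I  (always invertible)
    z(n)   = sum_i lam^(n-i) u(i) d(i)   (unaffected by regularisation)
    """
    n, M = U.shape
    beta = lam ** (n - np.arange(1, n + 1))          # lam^(n-i), i = 1..n
    Phi = (U * beta[:, None]).T @ U + delta * lam ** n * np.eye(M)
    z = U.T @ (beta * d)
    return np.linalg.solve(Phi, z)                   # w(n) = Phi^{-1}(n) z(n)
```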

  6. Recursive Computation

  • Isolate the last term of the sum (i = n):

    $\mathbf{\Phi}(n) = \lambda\,\mathbf{\Phi}(n-1) + \mathbf{u}(n)\,\mathbf{u}^H(n)$

  • Similarly,

    $\mathbf{z}(n) = \lambda\,\mathbf{z}(n-1) + \mathbf{u}(n)\,d^*(n)$

  • We need to calculate $\mathbf{\Phi}^{-1}(n)$ to find $\hat{\mathbf{w}}(n)$ → direct calculation (a full matrix inversion at every step) can be costly!
  • Use the Matrix Inversion Lemma (MIL).
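
Written as code, the two recursions are rank-one updates, one line each. A sketch for the real-valued case (illustrative names; the full RLS algorithm further below avoids forming and inverting Φ(n) altogether):

```python
import numpy as np

def update_correlations(Phi, z, u, d_n, lam):
    """One time step of Phi(n) = lam*Phi(n-1) + u(n)u(n)^T
    and z(n) = lam*z(n-1) + u(n)d(n), real-valued case."""
    Phi = lam * Phi + np.outer(u, u)  # rank-one autocorrelation update
    z = lam * z + u * d_n             # cross-correlation update
    return Phi, z
```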

  7. Recursive Least-Squares Algorithm

  • Let

    $\mathbf{A} = \mathbf{\Phi}(n), \quad \mathbf{B}^{-1} = \lambda\,\mathbf{\Phi}(n-1), \quad \mathbf{C} = \mathbf{u}(n), \quad \mathbf{D} = 1$

  • Then, using the MIL,

    $\mathbf{\Phi}^{-1}(n) = \lambda^{-1}\,\mathbf{\Phi}^{-1}(n-1) - \dfrac{\lambda^{-2}\,\mathbf{\Phi}^{-1}(n-1)\,\mathbf{u}(n)\,\mathbf{u}^H(n)\,\mathbf{\Phi}^{-1}(n-1)}{1 + \lambda^{-1}\,\mathbf{u}^H(n)\,\mathbf{\Phi}^{-1}(n-1)\,\mathbf{u}(n)}$

  • Now, letting $\mathbf{P}(n) = \mathbf{\Phi}^{-1}(n)$ (the inverse correlation matrix) and

    $\mathbf{k}(n) = \dfrac{\lambda^{-1}\,\mathbf{P}(n-1)\,\mathbf{u}(n)}{1 + \lambda^{-1}\,\mathbf{u}^H(n)\,\mathbf{P}(n-1)\,\mathbf{u}(n)}$ (the gain vector)

  • We obtain the Riccati equation for the recursive computation of $\mathbf{P}(n)$:

    $\mathbf{P}(n) = \lambda^{-1}\,\mathbf{P}(n-1) - \lambda^{-1}\,\mathbf{k}(n)\,\mathbf{u}^H(n)\,\mathbf{P}(n-1)$
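
One step of the resulting computation looks as follows in Python (a sketch for the real-valued case, with P symmetric; names are illustrative). Note that the denominator of the gain vector is a scalar, so no matrix inversion is ever performed:

```python
import numpy as np

def riccati_step(P, u, lam):
    """Gain vector k(n) and Riccati update of P(n) = Phi^{-1}(n)."""
    pi = P @ u / lam                     # lam^-1 P(n-1) u(n)
    k = pi / (1.0 + u @ pi)              # gain vector; scalar denominator
    P = (P - np.outer(k, u @ P)) / lam   # Riccati equation for P(n)
    return k, P
```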

  8. Recursive Least-Squares Algorithm

  • Rearranging the definition of the gain vector,

    $\mathbf{k}(n) = \left[\lambda^{-1}\,\mathbf{P}(n-1) - \lambda^{-1}\,\mathbf{k}(n)\,\mathbf{u}^H(n)\,\mathbf{P}(n-1)\right]\mathbf{u}(n) = \mathbf{P}(n)\,\mathbf{u}(n)$

  • How can $\hat{\mathbf{w}}$ be calculated recursively? Let

    $\hat{\mathbf{w}}(n) = \mathbf{P}(n)\,\mathbf{z}(n) = \lambda\,\mathbf{P}(n)\,\mathbf{z}(n-1) + \mathbf{P}(n)\,\mathbf{u}(n)\,d^*(n)$

  • After substituting the recursion for P(n) into the first term we obtain

    $\hat{\mathbf{w}}(n) = \hat{\mathbf{w}}(n-1) - \mathbf{k}(n)\,\mathbf{u}^H(n)\,\hat{\mathbf{w}}(n-1) + \mathbf{P}(n)\,\mathbf{u}(n)\,d^*(n)$

  • But $\mathbf{P}(n)\,\mathbf{u}(n) = \mathbf{k}(n)$, hence

    $\hat{\mathbf{w}}(n) = \hat{\mathbf{w}}(n-1) + \mathbf{k}(n)\left[d^*(n) - \mathbf{u}^H(n)\,\hat{\mathbf{w}}(n-1)\right] = \hat{\mathbf{w}}(n-1) + \mathbf{k}(n)\,\xi^*(n)$

  9. Recursive Least-Squares Algorithm

  • The term $\xi(n) = d(n) - \hat{\mathbf{w}}^H(n-1)\,\mathbf{u}(n)$ is called the a priori estimation error, whereas the term $e(n) = d(n) - \hat{\mathbf{w}}^H(n)\,\mathbf{u}(n)$ is called the a posteriori estimation error.
  • Summary; the update equation is

    $\hat{\mathbf{w}}(n) = \hat{\mathbf{w}}(n-1) + \mathbf{k}(n)\,\xi^*(n)$

    with $\mathbf{k}(n)$ the gain vector and $\xi(n)$ the a priori error.
  • $\mathbf{\Phi}^{-1}$ is calculated recursively and with only a scalar division.
  • Initialisation (n = 0): $\mathbf{P}(0) = \delta^{-1}\,\mathbf{I}$, with $\delta$ the regularisation parameter.
  • If no a priori information exists, set $\hat{\mathbf{w}}(0) = \mathbf{0}$.
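
Collecting the pieces, here is a minimal end-to-end RLS loop in Python, assuming real-valued signals; the function name rls and the defaults lam=0.99, delta=0.01 are my own illustrative choices, not values prescribed by the slides:

```python
import numpy as np

def rls(d, U, lam=0.99, delta=0.01):
    """Minimal RLS filter; returns the final weight vector w(n).

    d : desired responses d(1)..d(n), shape (n,)
    U : tap-input vectors u(1)..u(n) as rows, shape (n, M)
    """
    n, M = U.shape
    w = np.zeros(M)          # w(0) = 0  (no a priori information)
    P = np.eye(M) / delta    # P(0) = delta^-1 I
    for i in range(n):
        u = U[i]
        pi = P @ u / lam
        k = pi / (1.0 + u @ pi)              # gain vector k(n)
        xi = d[i] - w @ u                    # a priori error xi(n)
        w = w + k * xi                       # weight update
        P = (P - np.outer(k, u @ P)) / lam   # Riccati equation
    return w
```

On a small test problem the returned weights agree, up to numerical precision, with the direct solve of the regularised normal equations sketched after slide 5, which is a useful sanity check on the recursion.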

  10. Recursive Least-Squares Algorithm

  11. Recursive Least-Squares Algorithm

  12. Ensemble-Average Learning Curve
