
ADVANCED DIGITAL SIGNAL PROCESSING EENG 413


Presentation Transcript


  1. ADVANCED DIGITAL SIGNAL PROCESSING EENG 413

  2. Contents • Unit I Parametric Methods for Power Spectrum Estimation • Unit II Non-Parametric Methods for Power Spectrum Estimation • Unit III Adaptive Signal Processing • Unit IV Multirate Signal Processing • Unit V Discrete Transforms

  3. Reference Books • John G. Proakis, Dimitris G. Manolakis, “Digital Signal Processing: Principles, Algorithms and Applications”, Third edition, PHI, 2001. • Monson H. Hayes, “Statistical Digital Signal Processing and Modeling”, Wiley, 2002. • Roberto Cristi, “Modern Digital Signal Processing”, Thomson Brooks/Cole, 2004. • Raghuveer M. Rao, Ajit S. Bopardikar, “Wavelet Transforms: Introduction to Theory and Applications”, Pearson Education, Asia, 2000. • K.P. Soman, K.I. Ramachandran, N.G. Resmi, “Insights into Wavelets: From Theory to Practice”, 3rd Edition, PHI, 2010.

  4. Unit I Parametric Methods for Power Spectrum Estimation • Relationship between the autocorrelation and the model parameters • Yule-Walker method for the AR model parameters • Burg method for the AR model parameters • Unconstrained least-squares method for the AR model parameters • Sequential estimation methods for the AR model parameters

  5. Introduction • What is Spectral Estimation? From a finite record of a stationary data sequence, estimate how the total power is distributed over frequency, or more practically, over narrow spectral bands (frequency bins).

  6. Spectral Estimation Methods • Non-Parametric (Ex: Periodogram and Welch method) • Parametric • Model-fitting based (AR, ARMA; Ex: Least Squares) • Subspace based, high-resolution (Ex: MUSIC and ESPRIT) • Abbreviations: AR: Autoregressive (all-pole IIR); ARMA: Autoregressive Moving Average (IIR); MUSIC: MUltiple SIgnal Classification; ESPRIT: Estimation of Signal Parameters via Rotational Invariance Techniques

  7. Spectral Estimation Methods • Classical (Nonparametric) Methods Ex. Pass the data through a set of band-pass filters and measure the filter output powers. • Parametric (Modern) Approaches Ex. Model the data as a sum of a few damped sinusoids and estimate their parameters. Trade-Offs: (Robustness vs. Accuracy) • Parametric Methods may offer better estimates if data closely agrees with assumed model • Otherwise, Nonparametric Methods may be better
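To make the trade-off concrete, here is a minimal sketch (mine, not from the slides) that computes a nonparametric periodogram and a parametric Yule-Walker AR(2) spectrum on the same synthetic data; the model order, coefficients and record length are illustrative assumptions.

```python
# Hedged sketch: periodogram (nonparametric) vs. Yule-Walker AR fit (parametric)
# on synthetic AR(2) data. All settings here are illustrative assumptions.
import numpy as np
from scipy.signal import lfilter, periodogram
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(0)
N = 512
# Synthetic AR(2) process: x[n] = 0.75 x[n-1] - 0.5 x[n-2] + a[n]
x = lfilter([1.0], [1.0, -0.75, 0.5], rng.standard_normal(N))

# Nonparametric estimate: periodogram (squared DFT magnitude, scaled)
f, Pxx = periodogram(x, fs=1.0)

# Parametric estimate: fit AR(2) by Yule-Walker, evaluate sigma^2 / |A(e^jw)|^2
r = np.array([x[:N - k] @ x[k:] for k in range(3)]) / N  # biased autocovariances
phi = solve_toeplitz(r[:2], r[1:3])                      # Yule-Walker equations
sigma2 = r[0] - phi @ r[1:3]                             # innovation variance
A = np.fft.rfft(np.r_[1.0, -phi], n=N)                   # A(e^jw) on the same grid
P_ar = sigma2 / np.abs(A) ** 2     # smooth AR spectrum (two-sided scaling)
```

When the AR(2) model matches the data, P_ar is far smoother than the raw periodogram; on data that violate the model, the periodogram is the safer choice (the two estimates also differ by a one-sided vs. two-sided scaling convention).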

  8. Few Applications of Spectral Estimation • Speech • Formant estimation (for speech recognition) • Speech coding or compression • Radar and Sonar • Source localization with sensor arrays • Synthetic aperture radar imaging and feature extraction • Electromagnetics • Resonant frequencies of a cavity • Communications • Code-timing estimation in DS-CDMA systems

  9. Power Spectrum • Deterministic signal x(t) • Assume the Fourier transform X(f) exists • The power spectrum is the squared magnitude |X(f)|² (phase is ignored) • Multiplication in the Fourier domain is convolution in the time domain • Conjugation in the Fourier domain is reversal and conjugation in time, so |X(f)|² = X(f) X*(f) corresponds to the autocorrelation of x(t)

  10. Autocorrelation • [Figure: a rectangular pulse x(t) of height 1 on [0, Ts] and its triangular autocorrelation rx(t) on [-Ts, Ts]] • Autocorrelation of x(t): rx(t) = ∫ x(τ) x*(τ − t) dτ • Slide x(t) against x*(t) instead of flip-and-slide • Maximum value at rx(0) if rx(0) is finite • Even symmetric, i.e. rx(t) = rx(-t) • Discrete-time: rx[k] = Σn x[n] x*[n − k] • Alternate definition: rx(t) = ∫ x*(τ) x(τ + t) dτ
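A small sketch (mine, not the slide's) of the discrete-time definition above, showing the "slide against the conjugate" computation, the peak at lag 0, and the even symmetry; the rectangular-pulse input mirrors the figure.

```python
# Hedged sketch of the discrete-time autocorrelation rx[k] = sum_n x[n] x*[n-k].
import numpy as np

def autocorr(x, maxlag):
    """Deterministic autocorrelation for lags -maxlag..maxlag."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    # Slide x against its conjugate instead of flip-and-slide
    r = np.array([np.vdot(x[:n - k], x[k:]) for k in range(maxlag + 1)])
    return np.concatenate([np.conj(r[:0:-1]), r])  # rx[-k] = rx*[k]

x = np.ones(3)                 # rectangular pulse, as in the figure
print(autocorr(x, 2).real)     # [1. 2. 3. 2. 1.]: triangular, peak at lag 0
```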

  11. Power Spectrum • Estimate the spectrum when the signal is known at all times: compute the autocorrelation, then compute the Fourier transform of the autocorrelation • Autocorrelation of a random signal n(t): rn(τ) = E[n(t) n*(t − τ)] • For a zero-mean white Gaussian random process n(t) with variance σ², rn(τ) = σ² δ(τ), so the power spectrum is flat
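The slide's recipe (autocorrelation, then Fourier transform) can be checked numerically; this sketch uses the circular autocorrelation so the discrete Fourier identity is exact. The test signal is an arbitrary assumption.

```python
# Hedged numerical check of the autocorrelation route on this slide:
# the DFT of the circular autocorrelation equals |X(f)|^2.
import numpy as np

rng = np.random.default_rng(1)
N = 64
x = rng.standard_normal(N)

# Step 1: compute the (circular) autocorrelation
r = np.array([x @ np.roll(x, -k) for k in range(N)])

# Step 2: Fourier transform it, and compare with the direct power spectrum
psd_from_r = np.fft.fft(r).real
psd_direct = np.abs(np.fft.fft(x)) ** 2       # phase is ignored

print(np.allclose(psd_from_r, psd_direct))    # True
```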

  12. ESTIMATION • Method of Moment Estimation (MME) • Ordinary Least Squares (OLS) Estimation • Maximum Likelihood Estimation (MLE) • Least Squares Estimation • Conditional • Unconditional

  13. THE METHOD OF MOMENT ESTIMATION • It is also known as Yule-Walker estimation. It is easy but not an efficient estimation method, and it works well only for AR models with large n. • BASIC IDEA: Equate sample moment(s) to population moment(s) and solve these equation(s) to obtain the estimator(s) of the unknown parameter(s).

  14. THE METHOD OF MOMENT ESTIMATION • Let Γn be the variance/covariance matrix of X = (X1, …, Xn)′ with the given parameter values. • Yule-Walker for AR(p): Regress Xt onto Xt−1, …, Xt−p. • Durbin-Levinson algorithm with the autocovariance γ(·) replaced by the sample autocovariance γ̂(·). • Yule-Walker for ARMA(p,q): Method of moments. Not efficient.

  15. THE YULE-WALKER ESTIMATION • For a stationary (causal) AR(p), Xt = φ1Xt−1 + ⋯ + φpXt−p + at, multiplying by Xt−k and taking expectations gives the Yule-Walker equations: γ(k) = φ1γ(k−1) + ⋯ + φpγ(k−p) for k = 1, …, p, and σa² = γ(0) − φ1γ(1) − ⋯ − φpγ(p).

  16. THE YULE-WALKER ESTIMATION • To find the Yule-Walker estimators, we replace the theoretical autocovariances by their sample counterparts and solve Γ̂pφ̂ = γ̂p, with σ̂a² = γ̂(0) − φ̂′γ̂p. • These are forecasting equations. • We can use the Durbin-Levinson algorithm, as in the sketch below.
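A compact sketch of the Durbin-Levinson recursion mentioned above; the function name and interface are my own. Given autocovariances γ̂(0), …, γ̂(p) it returns the AR coefficients, the innovation variance, and the PACF values (the last coefficient at each order), which slide 17 uses for order selection.

```python
# Hedged sketch of the Durbin-Levinson recursion (ad hoc interface).
import numpy as np

def durbin_levinson(r, p):
    """r: autocovariances r[0..p]; returns (phi, sigma2, pacf)."""
    phi = np.zeros(p)
    pacf = np.zeros(p)
    sigma2 = r[0]                      # order-0 prediction error variance
    for m in range(1, p + 1):
        k = (r[m] - phi[:m - 1] @ r[m - 1:0:-1]) / sigma2   # reflection coeff.
        prev = phi[:m - 1].copy()
        phi[:m - 1] = prev - k * prev[::-1]                  # order update
        phi[m - 1] = k
        pacf[m - 1] = k                 # PACF at lag m is the last coefficient
        sigma2 *= 1.0 - k * k           # error variance shrinks each order
    return phi, sigma2, pacf

# Exact AR(1) autocovariances gamma(k) = phi^k / (1 - phi^2), phi = 0.6:
r = 0.6 ** np.arange(4) / (1 - 0.36)
print(durbin_levinson(r, 3))   # phi ~ [0.6, 0, 0], sigma2 ~ 1, pacf ~ [0.6, 0, 0]
```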

  17. THE YULE-WALKER ESTIMATION • If γ̂(0) > 0, the sample covariance matrix Γ̂p is nonsingular and the Yule-Walker equations have a unique solution. • If {Xt} is an AR(p) process, the sample PACF at lags h > p is approximately N(0, 1/n) for large n. Hence, we can use the sample PACF to test for the AR order, and we can calculate approximate confidence intervals for the parameters.

  18. THE YULE-WALKER ESTIMATION • If Xt is an AR(p) process and n is large, √n(φ̂ − φ) is approximately N(0, σa²Γp⁻¹). • A 100(1 − α)% approximate confidence interval for φj is φ̂j ± z1−α/2 (σ̂a²[Γ̂p⁻¹]jj / n)^(1/2).

  19. THE YULE-WALKER ESTIMATION • AR(1): Xt = φXt−1 + at. Find the MME of φ. It is known that ρ1 = φ.

  20. THE YULE-WALKER ESTIMATION • So, the MME of φ is φ̂ = r1, the lag-1 sample autocorrelation. • Also, σa² is unknown. • Therefore, using the variance of the process, γ0 = σa²/(1 − φ²), we can obtain the MME of σa² as σ̂a² = γ̂0(1 − φ̂²); see the sketch below.
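A minimal numeric sketch of these two AR(1) moment estimators on simulated data; the true values φ = 0.7 and σa² = 1 are assumptions for the demo.

```python
# Hedged sketch: AR(1) MME, phi_hat = r1 and sigma2_hat = gamma0_hat*(1 - phi_hat^2).
import numpy as np

rng = np.random.default_rng(2)
phi_true, n = 0.7, 2000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.standard_normal()

xc = x - x.mean()                       # mean-corrected series
gamma0 = xc @ xc / n                    # sample variance
gamma1 = xc[:-1] @ xc[1:] / n           # sample lag-1 autocovariance
phi_hat = gamma1 / gamma0               # MME: lag-1 sample autocorrelation r1
sigma2_hat = gamma0 * (1 - phi_hat**2)  # from gamma0 = sigma_a^2 / (1 - phi^2)
print(phi_hat, sigma2_hat)              # close to 0.7 and 1.0 for large n
```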

  21. THE YULE-WALKER ESTIMATION

  22. THE YULE-WALKER ESTIMATION • AR(2): Xt = φ1Xt−1 + φ2Xt−2 + at. Find the MME of all unknown parameters. • Using the Yule-Walker equations: ρ1 = φ1 + φ2ρ1 and ρ2 = φ1ρ1 + φ2.

  23. THE YULE-WALKER ESTIMATION • So, equate the population autocorrelations to the sample autocorrelations and solve for φ1 and φ2: φ̂1 = r1(1 − r2)/(1 − r1²) and φ̂2 = (r2 − r1²)/(1 − r1²).

  24. THE YULE-WALKER ESTIMATION • Using these we can obtain the MMEs of φ1 and φ2. • To obtain the MME of σa², use the process variance formula γ0 = σa²/(1 − φ1ρ1 − φ2ρ2), giving σ̂a² = γ̂0(1 − φ̂1r1 − φ̂2r2); a small sketch follows.
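The AR(2) recipe above amounts to one 2x2 linear solve; this sketch wraps it in a hypothetical helper, with illustrative sample values for r1, r2 and γ̂0.

```python
# Hedged sketch of the AR(2) moment estimators (ad hoc helper).
import numpy as np

def ar2_mme(r1, r2, gamma0):
    """Solve rho1 = phi1 + phi2*rho1, rho2 = phi1*rho1 + phi2 for (phi1, phi2),
    then get sigma_a^2 from gamma0 = sigma_a^2 / (1 - phi1*rho1 - phi2*rho2)."""
    R = np.array([[1.0, r1], [r1, 1.0]])
    phi1, phi2 = np.linalg.solve(R, np.array([r1, r2]))
    sigma2 = gamma0 * (1.0 - phi1 * r1 - phi2 * r2)
    return phi1, phi2, sigma2

print(ar2_mme(r1=0.5, r2=0.1, gamma0=2.0))   # (0.6, -0.2, 1.44)
```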

  25. THE YULE-WALKER ESTIMATION • Summary. AR(1): φ̂ = r1, σ̂a² = γ̂0(1 − r1²). • AR(2): φ̂1 = r1(1 − r2)/(1 − r1²), φ̂2 = (r2 − r1²)/(1 − r1²), σ̂a² = γ̂0(1 − φ̂1r1 − φ̂2r2).

  26. THE YULE-WALKER ESTIMATION • MA(1): Xt = at + θat−1, for which ρ1 = θ/(1 + θ²). • Again using the autocorrelation of the series at lag 1, set r1 = θ̂/(1 + θ̂²), i.e. r1θ̂² − θ̂ + r1 = 0, so θ̂ = (1 ± √(1 − 4r1²))/(2r1). • Choose the root satisfying the invertibility condition |θ̂| < 1.

  27. THE YULE-WALKER ESTIMATION • For real roots we need 1 − 4r1² ≥ 0. If |r1| = 0.5, there is a unique real root but it is non-invertible. If |r1| > 0.5, no real root exists and the MME fails. If |r1| < 0.5, there are two real roots and exactly one of them is invertible; a sketch follows.
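These three cases translate directly into code; a hedged sketch, using the Xt = at + θat−1 sign convention from slide 26 (the sign of r1 flips under the at − θat−1 convention).

```python
# Hedged sketch of the MA(1) moment estimator and its three cases.
import math

def ma1_mme(r1):
    """Invertible root of r1*theta^2 - theta + r1 = 0, or None if MME fails."""
    if r1 == 0.0:
        return 0.0
    if abs(r1) > 0.5:
        return None                       # no real root: MME fails
    if abs(r1) == 0.5:
        return 1.0 if r1 > 0 else -1.0    # unique real root, non-invertible
    disc = math.sqrt(1.0 - 4.0 * r1 * r1)
    return (1.0 - disc) / (2.0 * r1)      # the root with |theta| < 1

print(ma1_mme(0.4))   # 0.5; the other root, 2.0, is its non-invertible reciprocal
```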

  28. THE YULE-WALKER ESTIMATION • This example shows that the MMEs for MA and ARMA models are complicated. • More generally, whether the model is AR, MA or ARMA, the MMEs are sensitive to rounding errors. They are usually used to provide the initial estimates needed for a more efficient nonlinear estimation method. • The moment estimators are not recommended for final estimation results and should not be used if the process is close to being nonstationary or noninvertible.

  29. THE MAXIMUM LIKELIHOOD ESTIMATION • Assume that the innovations at are i.i.d. N(0, σa²). • By this assumption we can use the joint pdf of (a1, …, an) instead of the joint pdf of (Y1, …, Yn), which cannot be written as a product of marginal pdfs because of the dependency between time series observations.

  30. MLE METHOD • For the general stationary ARMA(p,q) model, Yt = φ1Yt−1 + ⋯ + φpYt−p + at − θ1at−1 − ⋯ − θqat−q, or equivalently φ(B)Yt = θ(B)at.

  31. MLE • The joint pdf of (a1, a2, …, an) is given by f(a) = (2πσa²)^(−n/2) exp(−Σ at²/(2σa²)). • Let Y = (Y1, …, Yn)′ and assume that the initial conditions Y* = (Y1−p, …, Y0)′ and a* = (a1−q, …, a0)′ are known.

  32. MLE • The conditional log-likelihood function is given by ln L*(φ, θ, σa²) = −(n/2) ln(2πσa²) − S*(φ, θ)/(2σa²), where S*(φ, θ) = Σ at² is the conditional sum of squares. • Initial conditions: typically a* = 0, with the unknown Y* terms replaced by the sample mean (or the sum started at t = p + 1).

  33. MLE • Then, we can find the estimators of φ = (φ1, …, φp), θ = (θ1, …, θq) and σa² such that the conditional likelihood function is maximized. Usually, numerical nonlinear optimization techniques are required, as in the sketch below. After obtaining all the estimators, σ̂a² = S*(φ̂, θ̂)/d.f., where d.f. = number of terms used in SS − number of parameters = (n − p) − (p + q + 1) = n − (2p + q + 1).
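A hedged sketch of this conditional scheme for an ARMA(1,1): compute the innovations recursively with a0 = 0, minimize S*(φ, θ) numerically, then divide by the d.f. above. It assumes the Yt = φYt−1 + at + θat−1 sign convention (flip the sign of θ for the at − θat−1 convention); the data and starting values are illustrative.

```python
# Hedged sketch: conditional MLE / least squares for ARMA(1,1).
import numpy as np
from scipy.optimize import minimize

def css(params, y):
    """Conditional sum of squares S*; innovations a_t computed with a_0 = 0."""
    phi, theta = params
    a = np.zeros(len(y))
    for t in range(1, len(y)):
        a[t] = y[t] - phi * y[t - 1] - theta * a[t - 1]
    return a[1:] @ a[1:]

# Illustrative synthetic ARMA(1,1) data, phi = 0.6, theta = 0.3
rng = np.random.default_rng(3)
n = 500
e = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + e[t] + 0.3 * e[t - 1]

res = minimize(css, x0=np.array([0.1, 0.1]), args=(y,))
p, q = 1, 1
dof = n - (2 * p + q + 1)                 # d.f. = (n - p) - (p + q + 1)
print(res.x, res.fun / dof)               # near (0.6, 0.3) and sigma_a^2 ~ 1
```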

  34. MLE • AR(1): Yt = φYt−1 + at, |φ| < 1, at ~ i.i.d. N(0, σa²), so at = Yt − φYt−1 for t = 2, …, n.

  35. MLE • The transformation from (Y1, a2, …, an) to (Y1, Y2, …, Yn) is lower triangular with unit diagonal, so the Jacobian will be 1.

  36. MLE • Then, the likelihood function can be written as L(φ, σa²) = (2πσa²)^(−n/2) (1 − φ²)^(1/2) exp[−S(φ)/(2σa²)], where S(φ) = (1 − φ²)Y1² + Σt=2..n (Yt − φYt−1)².

  37. MLE • Hence, the log-likelihood function is ln L(φ, σa²) = −(n/2) ln(2πσa²) + (1/2) ln(1 − φ²) − S(φ)/(2σa²).

  38. MLE • Here, S*(φ) = Σt=2..n (Yt − φYt−1)² is the conditional sum of squares and S(φ) is the unconditional sum of squares. • To find the value of φ where the likelihood function is maximized, first set ∂ln L/∂σa² = 0, which gives σ̂a² = S(φ)/n. • Then, substituting back, maximize the concentrated log-likelihood over φ.

  39. MLE • If we neglect ln(1 − φ²), then the MLE reduces to the unconditional LSE (minimize S(φ)). • If we neglect both ln(1 − φ²) and the (1 − φ²)Y1² term, then the MLE reduces to the conditional LSE (minimize S*(φ)).

  40. MLE • MLEs are asymptotically unbiased, efficient, consistent and sufficient for large sample sizes, but the joint pdf is hard to deal with.

  41. CONDITIONAL LEAST SQUARES ESTIMATION • AR(1): minimize S*(φ) = Σt=2..n (Yt − φYt−1)². Setting dS*/dφ = 0 gives φ̂ = Σt=2..n YtYt−1 / Σt=2..n Yt−1².

  42. CONDITIONAL LSE • If the process mean is different from zero, work with the mean-corrected series, i.e. replace Yt by Yt − Ȳ throughout.

  43. CONDITIONAL LSE • MA(1): Yt = at + θat−1 • Non-linear in terms of the parameters • LS problem: S*(θ) cannot be minimized analytically • Numerical nonlinear optimization methods such as Newton-Raphson or Gauss-Newton are required, as in the sketch below • There is a similar problem in the ARMA case.
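A hedged sketch of that nonlinear LS problem for the MA(1), with scipy's trust-region least-squares solver standing in for the Gauss-Newton step; the series and true θ = 0.5 are illustrative, and the at + θat−1 sign convention is assumed.

```python
# Hedged sketch: conditional LSE for MA(1) via numerical nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

def ma1_residuals(theta, y):
    """Innovations a_t for y_t = a_t + theta*a_{t-1}, with a_0 = 0."""
    a = np.zeros(len(y))
    a[0] = y[0]
    for t in range(1, len(y)):
        a[t] = y[t] - theta[0] * a[t - 1]
    return a

rng = np.random.default_rng(4)
e = rng.standard_normal(400)
y = e.copy()
y[1:] += 0.5 * e[:-1]                     # true theta = 0.5

fit = least_squares(ma1_residuals, x0=[0.0], args=(y,))
print(fit.x)                              # close to 0.5
```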

  44. UNCONDITIONAL LSE • The unconditional sum of squares S(φ, θ) is nonlinear in the parameters. • We need nonlinear optimization techniques.

  45. BURG METHOD • An order-recursive least-squares lattice method, based on the minimization of the forward and backward errors in linear predictors, with the constraint that the AR parameters satisfy the Levinson-Durbin recursion.

  46. BURG METHOD • To derive the estimator, let the given data be x(n), n = 0, 1, …, N−1, and let the forward and backward linear prediction estimates of order m be x̂(n) = −Σk=1..m am(k) x(n − k) and x̂(n − m) = −Σk=1..m am*(k) x(n − m + k).

  47. BURG METHOD • Forward error: fm(n) = x(n) − x̂(n). • Backward error: gm(n) = x(n − m) − x̂(n − m). • The least-squares error is Em = Σn=m..N−1 [ |fm(n)|² + |gm(n)|² ].

  48. BURG METHOD • This error is to be minimized by selecting the prediction coefficients, subject to the constraint that they satisfy the Levinson-Durbin recursion am(k) = am−1(k) + Km am−1*(m − k), 1 ≤ k ≤ m − 1, with am(m) = Km, where Km is the mth reflection coefficient in the lattice filter realization.

  49. BURG METHOD • The forward and backward prediction errors in terms of the reflection coefficients are given by fm(n) = fm−1(n) + Km gm−1(n − 1) and gm(n) = gm−1(n − 1) + Km* fm−1(n). • By substituting these into the Levinson-Durbin recursion and performing the minimization w.r.t. the reflection coefficient, we get K̂m = −Σn=m..N−1 fm−1(n) gm−1*(n − 1) / { (1/2) Σn=m..N−1 [ |fm−1(n)|² + |gm−1(n − 1)|² ] }.

  50. BURG METHOD • The numerator of K̂m is an estimate of the cross-correlation between the forward and backward prediction errors. • The denominator is simply the least-squares estimate of the forward and backward error powers, so Êm = (1 − |K̂m|²) Êm−1 is an estimate of the total squared error; a compact sketch of the full recursion follows.
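Putting slides 45 to 50 together, here is a compact reference sketch of the Burg recursion for real-valued data (my own code, not the course's): at each order the reflection coefficient minimizes the summed forward and backward error power, the errors are updated through the lattice, and the a's follow the Levinson-Durbin constraint.

```python
# Hedged sketch of the Burg method for real data (ad hoc interface).
import numpy as np

def burg(x, p):
    """Return the prediction-error filter a (a[0] = 1, AR parameters = -a[1:])
    and the final error power estimate for model order p."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    f = x.copy()                           # forward errors f_m(n)
    b = x.copy()                           # backward errors g_m(n)
    a = np.array([1.0])
    E = x @ x / N                          # zeroth-order error power
    for m in range(p):
        ff = f[m + 1:].copy()              # f_m(n),   n = m+1..N-1
        bb = b[m:-1].copy()                # g_m(n-1), n = m+1..N-1
        k = -2.0 * (ff @ bb) / (ff @ ff + bb @ bb)   # reflection coefficient
        f[m + 1:] = ff + k * bb            # lattice error updates
        b[m + 1:] = bb + k * ff
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]                # Levinson-Durbin constraint
        E *= 1.0 - k * k                   # E_m = (1 - K_m^2) E_{m-1}
    return a, E

# Illustrative use on synthetic AR(2) data, x[n] = 0.75 x[n-1] - 0.5 x[n-2] + a[n]
from scipy.signal import lfilter
rng = np.random.default_rng(5)
x = lfilter([1.0], [1.0, -0.75, 0.5], rng.standard_normal(1024))
a, E = burg(x, 2)
print(-a[1:], E)                           # near [0.75, -0.5] and 1.0
```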
