
Reduced-Rank Parameter Estimation Techniques


Presentation Transcript


  1. Reduced-Rank Parameter Estimation Techniques Dr. Rodrigo C. de Lamare (av Møre), Lecturer in Communications, Communications Research Group, University of York; Visiting Professor at UNIK

  2. Outline • Introduction • Historical overview of reduced-rank methods • System model and rank reduction • Eigen-decomposition techniques • Multistage Wiener Filter • A new adaptive decimation and interpolation scheme • Applications, perspectives and future work • Concluding remarks

  3. Introduction • General parameter estimation with the MMSE or LS criteria: • w = R^(-1)p, where w is a parameter vector with N coefficients, r(i) is the observed data, R = E[r(i)r^H(i)] is the covariance matrix, p = E[b*(i)r(i)] and b(i) is the desired signal • Problems when the number of elements N to be estimated is large (the bulk of current and future applications): • high complexity: inversion of the N x N matrix R – O(N^3) • poor convergence performance • Solution -> reduce the number of elements in the filter • Undermodelling? -> the designer has to select the key features of r(i) -> reduced-rank signal processing
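To make the cost of the full-rank solution concrete, here is a minimal NumPy sketch that forms sample estimates of R and p from synthetic data and then solves w = R^(-1)p; the data model, dimensions and noise level are illustrative assumptions, not values taken from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
N, Q = 64, 500                                   # filter length and number of snapshots (illustrative)

# Synthetic desired signal b(i) and observed data r(i) (columns of R_data)
b = np.sign(rng.standard_normal(Q))              # desired symbols b(i)
h = rng.standard_normal(N) / np.sqrt(N)          # arbitrary signature/channel (assumption)
R_data = np.outer(h, b) + 0.1 * rng.standard_normal((N, Q))

# Sample estimates of R = E[r(i) r^H(i)] and p = E[b*(i) r(i)]
R = R_data @ R_data.conj().T / Q
p = R_data @ b.conj() / Q

# Full-rank MMSE/Wiener solution w = R^(-1) p: an N x N inversion, O(N^3)
w = np.linalg.solve(R, p)
```

With N in the hundreds or thousands, this inversion and the slow convergence of its adaptive counterparts are exactly the problems the reduced-rank methods discussed next address.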

  4. Historical overview of reduced-rank methods • Goals of reduced-rank techniques: • To reduce the number of estimation elements • To improve convergence in short data record (small amount of training) situations • To provide amenable adaptive implementations • Origins of reduced-rank methods: • 1987 – Louis Scharf from the University of Colorado defined the problem as “a transformation in which a data vector can be represented by a reduced number of effective features and yet retain most of the intrinsic information of the input data”. • 1987 – Scharf – investigation and establishment of the bias versus noise trade-off. • Early 1990s: eigen-decomposition techniques, which require a computationally expensive SVD or other algorithms to obtain the eigenvalues and eigenvectors.

  5. Historical overview of reduced-rank methods (cont.) • 1997 – Goldstein and Reed from the University of Southern California: cross-spectral approach for the selection of singular values • 1997 – Pados and Batalama from the State University of New York at Buffalo: auxiliary vector filtering (AVF) algorithm – does not require an SVD • 1998/9 – Partial despreading (PD) of Singh and Milstein from the University of California at San Diego: simple but suboptimal and restricted to CDMA multiuser detection • 1997–2004 – Multistage Wiener filter (MWF) of Goldstein, Reed and Scharf and its variants – state of the art in the field and the benchmark • 2004 – de Lamare and Sampaio-Neto: interpolated FIR filters with time-varying interpolators -> low complexity, good performance, but rank-limited • New approach: 2005 – de Lamare and Sampaio-Neto – adaptive interpolation and decimation scheme: best known scheme, flexible, smallest complexity in the field, being patented

  6. System model and rank reduction • Let us consider a discrete-time signal organised in data vectors, where r(i) is the observed data with N samples at time instant i • A general reduced-rank version of r(i) can be obtained with a projection matrix SD(i) of dimension N x D, where D is the rank • The resulting reduced-rank observed data is given by rD(i) = SD^H(i)r(i), where rD(i) is a D x 1 vector • Challenge: how to efficiently (or optimally) design SD(i)?
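As a dimension check, the sketch below (synthetic real-valued data and an arbitrary projection matrix, both assumptions for illustration) shows how SD maps the N x 1 observation to a D x 1 vector, so that the Wiener solution of slide 3 only requires a D x D inverse.

```python
import numpy as np

rng = np.random.default_rng(1)
N, D, Q = 64, 4, 200                     # full dimension, rank and snapshots (illustrative)
R_data = rng.standard_normal((N, Q))     # observed vectors r(i) as columns (synthetic)
b = np.sign(rng.standard_normal(Q))      # desired signal b(i)

R = R_data @ R_data.T / Q                # full-rank covariance estimate
p = R_data @ b / Q                       # cross-correlation estimate

S_D = rng.standard_normal((N, D))        # some N x D projection matrix S_D(i)

# Reduced-rank observation and statistics: only a D x D system has to be solved
r_D = S_D.T @ R_data[:, 0]               # D x 1 reduced-rank version of one snapshot
R_bar = S_D.T @ R @ S_D                  # D x D reduced covariance
p_bar = S_D.T @ p                        # D x 1 reduced cross-correlation
w_bar = np.linalg.solve(R_bar, p_bar)    # reduced-rank Wiener filter
```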

  7. Eigen-decomposition techniques • Rank reduction is accomplished by a singular value decomposition of the covariance matrix R = VΛV^H, where V = [v1 ... vN] and Λ = diag(λ1, ..., λN) • Early techniques: selection of the eigenvectors vj (j = 1, ..., N) corresponding to the largest eigenvalues λj -> the projection matrix is SD(i) = [v1 ... vD] • Cross-spectral approach of Goldstein and Reed: choose the eigenvectors that minimise the design criterion -> the projection matrix SD(i) collects the D selected eigenvectors • Problem: these schemes require an SVD with complexity O(N^3) • Complexity reduction: adaptive subspace tracking algorithms (popular at the end of the 1990s), but still complex and susceptible to tracking problems
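A sketch of the early eigen-decomposition approach: build SD from the D principal eigenvectors of R (the cross-spectral variant would instead rank the eigenvectors by its own metric, which is not reproduced here). np.linalg.eigh is used since R is Hermitian; this O(N^3) decomposition is exactly the bottleneck noted above.

```python
import numpy as np

def principal_subspace_projection(R, D):
    """Return S_D = [v_1 ... v_D], the eigenvectors of R with the D largest eigenvalues."""
    eigvals, eigvecs = np.linalg.eigh(R)      # Hermitian eigen-decomposition, O(N^3)
    order = np.argsort(eigvals)[::-1]         # indices sorted by decreasing eigenvalue
    return eigvecs[:, order[:D]]              # N x D projection matrix
```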

  8. Multistage Wiener Filter • Rank reduction is accomplished by a successive refinement procedure that generates a set of basis vectors, i.e. the signal subspace, known in numerical analysis as the Krylov subspace • Design: use of nested filters cj (j = 1, ..., N) and blocking matrices Bj for the decomposition -> the projection matrix is SD(i) = [p, Rp, ..., R^(D-1)p] • Advantages: the rank D does not scale with the system size, very fast convergence • Problems: the complexity is only slightly lower than that of RLS algorithms
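The slide gives the MWF projection as the Krylov basis [p, Rp, ..., R^(D-1)p]; the helper below forms it directly from R and p (the QR orthonormalisation is a common numerical safeguard, not something stated on the slide).

```python
import numpy as np

def krylov_projection(R, p, D):
    """Form S_D = [p, Rp, ..., R^(D-1)p] and orthonormalise its columns."""
    S = np.empty((p.shape[0], D), dtype=complex)
    v = p.astype(complex)
    for d in range(D):
        S[:, d] = v
        v = R @ v                             # next Krylov vector
    Q, _ = np.linalg.qr(S)                    # orthonormal basis of the Krylov subspace
    return Q
```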

  9. New method: diversity-combined decimation and interpolation scheme • Rank reduction is accomplished by adaptive interpolation and decimation of the input data r(i) • The projection matrix is SD(i) = D(i)V(i), where V(i) is an N x N convolution matrix constructed with the interpolation filter v(i) with NI taps (NI = 3 or 4 taps) and D(i) is an adaptive decimation matrix of dimension N x D that discards samples • Highlights: the rank D does not scale with the system size, very fast convergence, best known method, very simple

  10. Description of the proposed method and processing stages • Interpolated N x 1 received data: rI(i) = V^H(i)r(i) • Decimated N/L x 1 received data for branch b: • Selection of the decimation branch D(i): minimum Euclidean distance criterion • Estimate of the N/L x 1 reduced-rank filter w(i): • Joint optimisation of the interpolator v(i), the decimator D(i) and the reduced-rank filter w(i) (see the sketch below)
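The sketch below illustrates the first two stages with NumPy: interpolation of r(i) by a short filter v (realised as a convolution rather than an explicit N x N matrix V(i)), followed by B uniform decimation branches that keep every L-th sample at different offsets. The uniform, pre-stored pattern and the assumption B ≤ L are illustrative choices; the slide's own decimation expressions are not reproduced.

```python
import numpy as np

def interpolate_and_decimate(r, v, L, B):
    """Interpolate the N x 1 data r(i) with the short filter v (N_I taps) and
    return B candidate decimated vectors of length N/L (assumes B <= L)."""
    N = r.shape[0]
    r_I = np.convolve(r, np.conj(v), mode="same")        # interpolated data, V^H(i) r(i)
    branches = [r_I[b::L][: N // L] for b in range(B)]   # branch b keeps samples b, b+L, b+2L, ...
    return r_I, branches
```

Each branch thus yields a different N/L x 1 reduced-rank observation; the branch actually used at time i is the one chosen by the minimum-distance rule, as in the adaptive version on the next slide.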

  11. Adaptive implementation of the proposed method: stochastic gradient (or LMS) version • Expression of estimate as a function of v(i), D(i) and w(i) • Design based on the cost function • Decimation schemes: Optimal, Uniform, Random, Pre-Stored • For each data vector i=1,…,Q do: • Initialise all parameter vectors and select decimation technique • Select decimation branch that minimises • Estimate parameters
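A rough stochastic-gradient sketch of the loop described above, under stated assumptions: pre-stored uniform decimation branches, branch selection by the smallest instantaneous error, and an LMS update of the reduced-rank filter w(i) only (the joint update of the interpolator v(i) follows from the same cost function but is omitted here, since the slide's expressions are not reproduced). The function name, the step size mu and the error-based selection rule are all illustrative.

```python
import numpy as np

def lms_reduced_rank(data, b, v, L, B, mu=0.05):
    """data: N x Q matrix of snapshots r(i); b: Q desired symbols; v: N_I-tap interpolator."""
    N, Q = data.shape
    D = N // L
    w = np.zeros(D, dtype=complex)                        # reduced-rank filter w(i)
    for i in range(Q):
        r_I = np.convolve(data[:, i], np.conj(v), mode="same")
        candidates = [r_I[j::L][:D] for j in range(B)]    # B decimation branches (B <= L)
        errors = [b[i] - np.vdot(w, c) for c in candidates]
        best = int(np.argmin(np.abs(errors)))             # branch with the smallest error
        e, r_D = errors[best], candidates[best]
        w = w + mu * np.conj(e) * r_D                     # LMS update of the D x 1 filter
    return w
```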

  12. Complexity of the new adaptive decimation and interpolation scheme

  13. Proposed reduced-rank scheme applied to a typical communications problem: DS-CDMA interference suppression • Linear interference suppression in DS-CDMA systems • DS-CDMA systems: multiple signals are spread by a factor N with a unique code for each, which protects them from channel effects and enables their extraction at the receiver • We consider K users and channels with Lp paths (delayed copies of the signal) • Interference problems: the codes are not orthogonal, leading to multiuser interference, and the multiple delayed copies of the signal create intersymbol interference • Estimate: x(i) = w^H(i)r(i) • Symbol detection for the transmitted BPSK symbols (0s or 1s):
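For the detection step, a minimal sketch of the linear receiver: the soft estimate x(i) = w^H(i)r(i) followed by the usual BPSK slicer (sign of the real part). The slide's own decision expression is not reproduced, so the sign rule is an assumption; in practice w would be the reduced-rank filter obtained above.

```python
import numpy as np

def linear_cdma_detect(w, r):
    """Soft estimate x(i) = w^H r(i) and hard BPSK decision in {-1, +1}."""
    x = np.vdot(w, r)                 # w^H r(i)
    return x, np.sign(np.real(x))     # soft output and detected symbol
```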

  14. Performance in terms of mean squared error (MSE) • M = N + Lp − 1 processing elements instead of N (the spreading factor), N = 64, K = 20 users, SNR = Eb/N0 = 15 dB, typical mobile fading channel with fdT = 0.00025 and data record Q = 800

  15. Performance in terms of bit error rate (BER) • B = 12 branches, D = 4 (4 taps), M = N + Lp − 1

  16. Performance in terms of bit error rate (BER) • Data record Q = 1500 symbols/data vectors

  17. Applications, perspectives and future work • Applications: interference suppression, beamforming, channel estimation, echo cancellation, target tracking, signal compression, speech coding and recognition, control, seismology and bio-inspired systems, etc. • Perspectives: • Work in this field is largely unexplored in Europe; only a few groups in France, Germany and Italy are “using” the MWF. • There is room for new groups to take part in this field. • Future work: • Information-theoretic study of very large observation data: performance limits as N goes to infinity. • Investigation of blind reduced-rank estimation schemes. • Development of vector parameter estimation, as opposed to the scalar parameter estimation of existing methods.

  18. Concluding remarks • Reduced-rank signal processing is a set of powerful techniques that allow the processing of very large data vectors, enabling a substantial reduction in training while keeping the computational requirements low. • A historical overview of this promising area has been given by reviewing some of the most important techniques reported so far. • A survey of eigen-decomposition methods and the MWF was presented, along with some critical comments on their suitability for practical use. • A new reduced-rank scheme that employs joint adaptive interpolation and decimation was briefly described and appears to be the best known method in this field. • Several applications have been envisaged, as well as a number of future investigation topics.

  19. Questions?

  20. Tusen takk!! (Many thanks!!) Contact: Dr R. C. de Lamare, Communications Research Group, University of York. Website: http://www.elec.york.ac.uk/comms/people/rodrigodelamare.html E-mail: rcdl500@ohm.york.ac.uk or (UNIK) rcdelamare@unik.no
