
  1. Channel State Estimation in Rapidly Time-Varying Environments Michael Larsen, A. Lee Swindlehurst Brigham Young University Haran Arasaratnam, Simon Haykin McMaster University MURI Annual Review September 14, 2006

  2. Background
  • Joint work between BYU & McMaster University
  • Addresses a key impediment to the use of spatial diversity or spatial multiplexing in wireless networks: mobility
  • Conventional training & channel estimation schemes are inappropriate for highly mobile channels – one channel estimate per block is not good enough
  • Channel estimates must evolve at the symbol rate
  • Two algorithms developed:
    • Modified Particle Filtering (MPF)
    • Multiple-Pass Conjugate Gradient (MPCG)

  3. Pilot-Assisted Training (PAT)
  Conventional Time-Division Multiplexed Training
  • A known training preamble is used to determine the channel for subsequent data decoding
  • Assumes the channel is constant over the data frame
  [Frame diagram: known training symbols followed by data symbols over a frame of length T]
  Superimposed (Pilot-Embedded) Training
  • Training symbols are “embedded” with the data using orthogonal projections
  • Orthogonal projections allow the channel estimates and data to be easily separated
  • A good choice of projection yields a better channel estimate for time-varying channels
  • However, this still produces only one channel estimate per frame
  [Frame diagram: training and data superimposed over the frame]
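The orthogonal-projection idea above can be sketched in a few lines of Python. This is a toy scalar-channel example, not the authors' scheme: the pilot direction `p`, frame length, power split, and noiseless reception are all illustrative assumptions.

```python
import random

random.seed(0)
N = 8                              # frame length (illustrative)
p = [1 / N**0.5] * N               # unit-norm pilot direction (assumed)
PILOT_AMP = 2.0                    # amplitude allocated to the pilot component

def embed(data):
    """Project the data onto the complement of p, then superimpose the pilot."""
    dp = sum(pi * di for pi, di in zip(p, data))        # <p, d>
    return [di - dp * pi + PILOT_AMP * pi for pi, di in zip(p, data)]

def receive(y):
    """Separate channel estimate and data by projecting onto p."""
    yp = sum(pi * yi for pi, yi in zip(p, y))           # <p, y> = h * PILOT_AMP
    h_est = yp / PILOT_AMP
    d_est = [yi / h_est - (yp / h_est) * pi for yi, pi in zip(y, p)]
    return h_est, d_est

h = 0.8 - 0.3j                                          # flat fading gain (illustrative)
data = [random.choice([-1.0, 1.0]) for _ in range(N)]   # BPSK frame
y = [h * xi for xi in embed(data)]                      # noiseless for clarity
h_est, d_est = receive(y)
```

Because the pilot occupies a fixed one-dimensional subspace, the projection `<p, y>` isolates the channel while `d_est` recovers the data's component orthogonal to `p`; in a practical pilot-embedded design the data is precoded so that no information is lost along the pilot subspace.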

  4. Frame-Based Channel Estimation
  • Channel measurements suggest changes occur within fractions of a wavelength of motion
  • In such cases, “one estimate per frame” is insufficient
  • Increased training frequency dramatically reduces throughput
  • Estimation (tracking) schemes are needed that provide (nearly) symbol-by-symbol channel estimates

  5. Proposed Solutions
  • Goal: update the channel estimate at (nearly) every symbol time
  • Two proposed methods:
    • Time-division multiplexed channel estimation and tracking using modified particle filtering
    • Multiple-pass superimposed training using the conjugate gradient (CG) algorithm
  • Both methods seek to maximize an approximate posterior distribution, given a prior distribution based on a first-order AR channel model:
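The AR model equation on the original slide did not survive transcription; a common form of a first-order AR channel model, with α matched to the Doppler spread, is the following (a standard reconstruction, not necessarily the authors' exact parameterization):

```latex
\mathbf{h}_{k} \;=\; \alpha\,\mathbf{h}_{k-1} \;+\; \sqrt{1-\alpha^{2}}\,\mathbf{w}_{k},
\qquad \mathbf{w}_{k} \sim \mathcal{CN}\!\left(\mathbf{0}, \sigma_h^{2}\mathbf{I}\right)
```

For a Jakes Doppler spectrum (as used in the simulations later), α is often set to the zeroth-order Bessel value $J_0(2\pi f_D T_s)$, where $f_D$ is the Doppler frequency and $T_s$ the symbol period.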

  6. Posterior Channel Distribution
  • Recursive expression for the posterior distribution:
  • For stationary Gaussian measurement and process noise, the standard solution leads to the Kalman filter
  • Alternative, data-dependent solutions are needed under “real” operating conditions
  • Idea: use decoded symbols (as in decision-directed methods) to update both the channel and its distribution
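The recursive expression was an equation image in the original slide; it is presumably the standard Bayesian filtering recursion, whose generic form (with $\mathbf{y}_{1:k}$ the observations up to time k; notation assumed) is:

```latex
p(\mathbf{h}_k \mid \mathbf{y}_{1:k}) \;\propto\;
p(\mathbf{y}_k \mid \mathbf{h}_k)
\int p(\mathbf{h}_k \mid \mathbf{h}_{k-1})\,
     p(\mathbf{h}_{k-1} \mid \mathbf{y}_{1:k-1})\; d\mathbf{h}_{k-1}
```

With a linear-Gaussian likelihood and the AR transition density, this recursion evaluates in closed form and reduces exactly to the Kalman filter, consistent with the slide's remark.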

  7. Particle Filtering
  • Iterative Monte Carlo sampling scheme for approximating the posterior distribution of Markov processes
  • A sequential Monte Carlo method, related to Markov Chain Monte Carlo (MCMC) techniques
  • Particles = samples of a given probability distribution; weights quantify the importance of each particle
  • Particles are propagated through the Markov model to adaptively estimate the distribution over time
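As a concrete illustration of the propagate–weight–resample recursion, here is a toy particle filter tracking a scalar AR(1) channel from known BPSK symbols. This is a generic sketch, not the authors' implementation; the AR coefficient, noise variances, and particle count are assumptions.

```python
import math
import random

random.seed(0)
ALPHA = 0.999            # AR(1) coefficient (assumed)
Q = 1 - ALPHA**2         # process-noise variance keeping unit channel power
R = 0.01                 # measurement-noise variance (assumed)
N_P = 200                # number of particles

def particle_filter(obs, symbols):
    """Track a scalar channel h_k in y_k = h_k * s_k + n_k."""
    particles = [random.gauss(0, 1) for _ in range(N_P)]
    estimates = []
    for y, s in zip(obs, symbols):
        # Propagate particles through the AR(1) prior (the sampling function)
        particles = [ALPHA * h + random.gauss(0, math.sqrt(Q)) for h in particles]
        # Weight each particle by the Gaussian likelihood of the observation
        w = [math.exp(-(y - h * s) ** 2 / (2 * R)) for h in particles]
        tot = sum(w) or 1e-300
        w = [wi / tot for wi in w]
        # MMSE-style channel estimate: weighted mean of the particles
        estimates.append(sum(wi * h for wi, h in zip(w, particles)))
        # Multinomial resampling to avoid weight degeneracy
        particles = random.choices(particles, weights=w, k=N_P)
    return estimates

# Toy run: slowly varying channel, known BPSK symbols
true_h, hs, ys, ss = 1.0, [], [], []
for _ in range(200):
    true_h = ALPHA * true_h + random.gauss(0, math.sqrt(Q))
    s = random.choice([-1.0, 1.0])
    hs.append(true_h)
    ss.append(s)
    ys.append(true_h * s + random.gauss(0, math.sqrt(R)))

est = particle_filter(ys, ss)
rmse = math.sqrt(sum((e - h) ** 2 for e, h in zip(est, hs)) / len(hs))
```

The weighted mean over particles approximates the posterior-mean channel estimate at each symbol time; resampling (`random.choices`) keeps the particle cloud concentrated where the posterior has mass.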

  8. Modified Particle Filtering (MPF)
  • A sampling function (prior) must be chosen to generate new particles
  • Sampling functions work best when they closely approximate the distribution to be estimated
  • For the MPF scheme, the sampling function is chosen to be the conditional prior (based on our channel process model)
  • Performance is accelerated by using the decision-directed ML channel estimate to move the particles towards the posterior distribution
  [Equation labels: posterior, modified sampling function, prior (sampling function), likelihood function]

  9. MPF Algorithm
  1. Obtain a sample according to the sampling function.
  2. When no training data is available, obtain a coarse channel estimate using the prior channel estimate and the AR channel model.
  3. Decode the transmitted symbols using the coarse channel estimate.
  4. Find the ML channel estimate using the decoded symbols.
  5. Modify the particles using the ML estimate.
  6. Update the importance weights and renormalize.
  7. Perform particle filtering to produce a refined channel estimate.
  8. Re-decode the transmitted symbols using the refined channel estimate.
  9. If necessary, repeat steps 4–8 to improve performance.
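One iteration of steps 2–8 above might look like the following toy, decision-directed sketch for a scalar BPSK channel. The blend factor `BETA`, the noise parameters, and the simple "pull toward the ML estimate" particle modification are all illustrative assumptions, not the authors' exact update rule.

```python
import math
import random

random.seed(1)
ALPHA = 0.998                 # AR(1) coefficient (assumed)
Q = 1 - ALPHA**2              # process-noise variance
R = 0.01                      # measurement-noise variance (assumed)
SYMS = [-1.0, 1.0]            # BPSK alphabet (illustrative)
BETA = 0.5                    # blend toward the ML estimate (assumed)

def mpf_step(particles, weights, y):
    # Steps 1-2: coarse channel estimate from the AR prior (no training)
    coarse = ALPHA * sum(w * h for w, h in zip(weights, particles))
    # Step 3: decode the symbol with the coarse estimate (nearest point)
    s_hat = min(SYMS, key=lambda s: abs(y - coarse * s))
    # Step 4: decision-directed ML channel estimate from the decoded symbol
    h_ml = y / s_hat
    # Step 5: propagate particles, then pull them toward the ML estimate
    particles = [ALPHA * h + random.gauss(0, math.sqrt(Q)) for h in particles]
    particles = [(1 - BETA) * h + BETA * h_ml for h in particles]
    # Step 6: update importance weights with the likelihood and renormalize
    w = [wi * math.exp(-(y - h * s_hat) ** 2 / (2 * R))
         for wi, h in zip(weights, particles)]
    tot = sum(w) or 1e-300
    w = [wi / tot for wi in w]
    # Step 7: refined estimate; step 8: re-decode with it
    h_ref = sum(wi * h for wi, h in zip(w, particles))
    s_ref = min(SYMS, key=lambda s: abs(y - h_ref * s))
    return particles, w, h_ref, s_ref

particles = [random.gauss(1, 0.1) for _ in range(100)]
weights = [1 / 100.0] * 100
true_h = 1.0
y = true_h * 1.0 + random.gauss(0, math.sqrt(R))     # transmitted symbol s = +1
particles, weights, h_ref, s_ref = mpf_step(particles, weights, y)
```

Step 9 would simply call `mpf_step`-style refinement again with the re-decoded symbols.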

  10. Multiple-Pass Conjugate Gradient (MPCG) Method
  • Instead of time-division multiplexed training, MPCG uses superimposed training.
  • With a stationary channel, the model for block k is:
  • However, if the channel coherence time is on the order of the symbol period, we must use the following symbol-by-symbol model:
  • In what follows, we drop the block index k for simplicity.
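The two models on this slide were equation images in the original; generic reconstructions (with assumed notation: $\mathbf{x}_n$ the superimposed sum of data symbol and embedded pilot at symbol time n) would read:

```latex
% Stationary (block-fading) model for block k:
\mathbf{Y}_k \;=\; \mathbf{H}\,\mathbf{X}_k + \mathbf{N}_k

% Symbol-by-symbol model when the channel varies within the block:
\mathbf{y}_n \;=\; \mathbf{H}_n\,\mathbf{x}_n + \mathbf{n}_n
```

In the second model the channel matrix $\mathbf{H}_n$ carries its own time index and evolves according to the first-order AR model introduced earlier.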

  11. Joint Posterior Distribution
  • As with MPF, MPCG uses an estimate of the posterior distribution of the channel, except that it is the joint distribution over all symbols in one frame:
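A generic form of such a joint posterior over the frame's channel states and symbols, under the Gaussian-noise and AR assumptions (a reconstruction with assumed notation, not the slide's exact expression):

```latex
p\!\left(\{\mathbf{H}_n\},\{\mathbf{x}_n\} \,\middle|\, \{\mathbf{y}_n\}\right)
\;\propto\;
\prod_{n=1}^{N}
p(\mathbf{y}_n \mid \mathbf{H}_n, \mathbf{x}_n)\;
p(\mathbf{H}_n \mid \mathbf{H}_{n-1})\;
p(\mathbf{x}_n)
```

Maximizing this jointly over the channel trajectory and the symbols is what makes the exact problem intractable, motivating the approximation on the next slide.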

  12. Approximate Solution
  • It is impractical to optimize over the exact posterior distribution
  • Instead, we use the following simplified version:
  • The solution amounts to solving coupled bilinear equations:
  • Standard alternating least-squares (ALS) could be used, but solving for the channel h is prohibitively expensive
  • Our approach is to exploit the AR model in the conjugate gradient algorithm for the channel estimation step
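In each pass, the channel-estimation step of such a scheme reduces to solving a symmetric positive-definite linear system (the normal equations of the simplified posterior). A minimal conjugate-gradient solver in pure Python, with a toy 2×2 system standing in for the channel problem:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive-definite A (lists of lists)."""
    n = len(b)
    max_iter = max_iter or n
    x = [0.0] * n
    r = b[:]                       # residual b - A x (x starts at zero)
    p = r[:]                       # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]      # step along p
        r = [ri - alpha * api for ri, api in zip(r, Ap)]   # update residual
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        # New direction: residual made conjugate to previous directions
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Toy normal equations standing in for the channel-estimation step
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
```

CG converges in at most n iterations for an n×n system in exact arithmetic, which is why limiting the number of CG passes per frame gives a direct knob on complexity (as discussed on the complexity slide below).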

  13. Simulation Example
  • 2 transmit antennas, 3 receive antennas
  • Channel simulated using a modified Jakes’ model with a Doppler rate of 0.009
  • Gaussian and Middleton Class-A noise (Class-A parameters: 100 and 0.1)
  • Data is transmitted at the same bit rate for both methods
  • For MPF, an Alamouti code is used with 16-PSK symbols
  • For MPCG, symbols are uncoded 4-PSK
  • The ratio of training symbols to total symbols is 1:16 (5:64 for MPCG)
  • AWGN SNR is 20 dB

  14. Sample Performance Results

  15. Symbol Error Rate Comparison

  16. Benefit of H-ARQ-Like Temporal Diversity
  • MPCG errors generally result from algorithm convergence to a local maximum.
  • Discarding (re-transmitting) a small number of “bad” frames can greatly improve performance.
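A back-of-the-envelope illustration of why discarding mis-converged frames helps (the numbers below are hypothetical, not taken from the simulations):

```python
# Hypothetical mixture: a small fraction of frames mis-converge to a
# local maximum and decode badly; the rest decode well.
p_bad, ser_bad, ser_good = 0.02, 0.30, 0.001    # assumed values

# Average symbol error rate if all frames are kept
ser_without_harq = p_bad * ser_bad + (1 - p_bad) * ser_good

# If bad frames are detected (e.g., via a CRC) and re-sent, only good
# frames contribute to the delivered error rate
ser_with_harq = ser_good
improvement = ser_without_harq / ser_with_harq
```

Under these assumed numbers the delivered error rate improves by nearly an order of magnitude, at the cost of only about a 2% throughput penalty for the retransmissions.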

  17. Computational Complexity
  • For MPF, complexity is determined by the number of particles
  • For MPCG, complexity is determined by the number of passes over the frame
  • Circles on the plots show the operating points for the previous simulation
  • For a 10 Mb/s data rate, the algorithms require between 10 and 30 Gops

  18. Results and Conclusions
  • Both the MPF and MPCG methods find an approximate MAP channel estimate at every symbol time
  • Performance is dramatically improved compared with static pilot-assisted training
  • MPF typically requires more computation, but can exploit space-time coding and has no high-SNR error floor
  • MPCG provides better performance for a given computational load, but suffers from a high-SNR error floor due to the MAP approximations and the resulting mis-convergence
  • The error floor can be pushed lower through temporal diversity (H-ARQ-like retransmissions)
  • For both algorithms, the required computational load is the primary drawback

  19. Future Work
  • Modifications or new algorithms with lower cost
  • Extend the MPCG approach to exploit the structure present in space-time codes
  • Extensions to non-spatially-white interference (e.g., joint estimation of the channel and interference statistics)
  • Algorithm operation for OFDM systems – exploiting correlation in the frequency domain
  • Algorithm operation for W-CDMA systems – exploiting structure & correlation in the time domain
