Measurements and Bits: Compressed Sensing meets Information Theory
Shriram Sarvotham, Dror Baron, Richard Baraniuk
ECE Department, Rice University (dsp.rice.edu/cs)





Presentation Transcript


  1. Measurements and Bits: Compressed Sensing meets Information Theory Shriram Sarvotham, Dror Baron, Richard Baraniuk ECE Department, Rice University dsp.rice.edu/cs

  2. CS encoding • Replace samples by a more general encoder based on a few linear projections (inner products) • Matrix-vector multiplication [slide figure: measurement matrix applied to a sparse signal with few nonzero entries, yielding the measurements]
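
A minimal sketch of this encoding step, assuming the usual compressed-sensing notation for the slide's figure: an N-sample signal x with K nonzero entries, an M x N projection matrix Phi, and measurements y = Phi x (none of these symbols appear in the transcript itself).

```python
# Minimal sketch of the CS encoder: each measurement is an inner product
# of a row of Phi with the sparse signal x, i.e. y = Phi @ x with M << N.
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 512, 128, 10            # signal length, measurements, sparsity

# K-sparse signal: K randomly placed nonzero entries
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)

# Dense random measurement matrix (i.i.d. Gaussian rows as inner products)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

y = Phi @ x
print(y.shape)                    # (M,) -- far fewer numbers than the N samples
```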

  3. The CS revelation – • Of the infinitely many solutions, seek the one with the smallest L1 norm

  4. The CS revelation – • Of the infinitely many solutions, seek the one with the smallest L1 norm • If the number of measurements is large enough, then perfect reconstruction w/ high probability [Candes et al.; Donoho] • Linear programming
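
A hedged sketch of this L1 recovery, posed as the linear program min ||x||_1 subject to Phi x = y and solved with SciPy's general-purpose LP solver; it reuses Phi, x, y from the encoding sketch above and is illustrative rather than the authors' solver.

```python
# Basis pursuit as a linear program: split x = u - v with u, v >= 0, so that
# ||x||_1 = sum(u) + sum(v) and Phi x = y becomes Phi u - Phi v = y.
import numpy as np
from scipy.optimize import linprog

def l1_reconstruct(Phi, y):
    M, N = Phi.shape
    c = np.ones(2 * N)                      # objective: sum(u) + sum(v)
    A_eq = np.hstack([Phi, -Phi])           # Phi u - Phi v = y
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * N), method="highs")
    u, v = res.x[:N], res.x[N:]
    return u - v

x_hat = l1_reconstruct(Phi, y)
print(np.max(np.abs(x_hat - x)))            # ~0 when M is large enough
```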

  5. Compressible signals • Polynomial decay of signal components • Recovery algorithms • reconstruction performance: within a constant of the squared error of the best K-term approximation • also requires sufficiently many measurements • polynomial complexity (BPDN) [Candes et al.] • Cannot reduce the order of the required number of measurements [Kashin, Gluskin]

  6. Fundamental goal: minimize the number of measurements • Compressed sensing aims to minimize resource consumption due to measurements • Donoho: “Why go to so much effort to acquire all the data when most of what we get will be thrown away?”

  7. Measurement reduction for sparse signals • Ideal CS reconstruction of a K-sparse signal (K = number of nonzero entries) • Of the infinitely many solutions, seek the sparsest one • If M ≤ K, then w/ high probability this can’t be done • If M ≥ K+1, then perfect reconstruction w/ high probability [Bresler et al.; Wakin et al.] • But not robust, and combinatorial complexity
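
The combinatorial cost in the last bullet can be seen directly in a brute-force sketch: try every candidate support and keep the first one that reproduces the measurements exactly. The helper below is illustrative (tiny noiseless toy sizes assumed), not a practical algorithm.

```python
# Ideal (combinatorial) reconstruction: search supports of growing size and
# accept the first one that explains y exactly. With ~C(N, K) candidate
# supports this is intractable for realistic N, which is the slide's point.
import numpy as np
from itertools import combinations

def l0_reconstruct(Phi, y, max_k, tol=1e-9):
    M, N = Phi.shape
    for k in range(1, max_k + 1):
        for support in combinations(range(N), k):   # combinatorial search
            cols = Phi[:, support]
            coeffs, *_ = np.linalg.lstsq(cols, y, rcond=None)
            if np.linalg.norm(cols @ coeffs - y) < tol:
                x_hat = np.zeros(N)
                x_hat[list(support)] = coeffs
                return x_hat
    return None

rng = np.random.default_rng(1)
N, M, K = 20, 6, 2                     # tiny toy sizes; note M >= K + 1
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N))
print(np.allclose(l0_reconstruct(Phi, Phi @ x, K), x))
```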

  8. Why is this a complicated problem?

  9. Rich design space • What performance metric to use? • Wainwright: determine the support set of nonzero entries • this is a support-recovery distortion metric • but why let tiny nonzero entries spoil the fun? • an error metric instead? • What complexity class of reconstruction algorithms? • any algorithms? • polynomial complexity? • near-linear or better? • How to account for imprecisions? • noise in measurements? • compressible signal model?

  10. How many measurements do we need?

  11. Measurement noise • Measurement process is analog • Analog systems add noise, non-linearities, etc. • Assume Gaussian noise for ease of analysis

  12. Setup • Signal is iid • Additive white Gaussian noise • Noisy measurement process

  13. Measurement and reconstruction quality • Measurement signal-to-noise ratio • Reconstruct using a decoder mapping • Reconstruction distortion metric • Goal: minimize the CS measurement rate

  14. Measurement channel • Model process as measurement channel • Capacity of measurement channel • Measurements are bits!
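
One way to make "measurements are bits" concrete, assuming (as the slides appear to) that each noisy measurement is modeled as one use of an additive white Gaussian noise channel at the measurement SNR, so the standard AWGN capacity caps the information per measurement:

```python
# Information conveyed per noisy measurement under an AWGN channel model.
import numpy as np

def bits_per_measurement(snr_db):
    snr = 10 ** (snr_db / 10)
    return 0.5 * np.log2(1 + snr)      # AWGN capacity, bits per channel use

for snr_db in (10, 0, -20):
    print(f"SNR = {snr_db:>4} dB  ->  about "
          f"{bits_per_measurement(snr_db):.3f} bits per measurement")
```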

  15. Main result • Theorem: For a sparse signal with rate-distortion function R(D), the lower bound on the measurement rate, subject to the measurement quality and reconstruction distortion constraints, satisfies the relationship shown on the slide • Direct relationship to rate-distortion content

  16. Main result • Theorem: For a sparse signal with rate-distortion function R(D), the lower bound on the measurement rate, subject to the measurement quality and reconstruction distortion constraints, satisfies the relationship shown on the slide • Proof sketch: • each measurement provides at most C bits, the capacity of the measurement channel • the information content of the source is given by its rate-distortion function in bits • source-channel separation for continuous-amplitude sources • this yields the minimal number of measurements • obtain the measurement rate via normalization by the signal length
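
A sketch of how the proof outline seems to combine, in assumed notation (M measurements, signal length N, per-sample rate-distortion function R(D), per-measurement capacity C); the exact statement and constants on the slide may differ:

```latex
% Counting argument behind the lower bound (notation assumed, not from the
% transcript; the slide's precise statement may include extra terms).
\begin{align*}
  C &= \tfrac{1}{2}\log_2\!\left(1 + \mathrm{SNR}\right)
      && \text{bits conveyed per noisy measurement,} \\
  M \cdot C &\ge N \, R(D)
      && \text{measurements must carry the source's information,} \\
  \frac{M}{N} &\ge \frac{R(D)}{\tfrac{1}{2}\log_2\!\left(1 + \mathrm{SNR}\right)}
      && \text{lower bound on the measurement rate.}
\end{align*}
```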

  17. Example • Spike process: spikes of uniform amplitude • Rate-distortion function • Lower bound • Numbers: • signal of length 10^7 • 10^3 spikes • evaluated at SNR = 10 dB and SNR = -20 dB
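
A rough numeric reading of this example, with the caveat that the slide's rate-distortion formula is not in the transcript; as a stand-in, only the spike-location information (about K log2(N/K) bits) is counted, so the resulting measurement counts understate the slide's bound.

```python
# Rough evaluation of the lower bound for the spike-process example, using
# only the bits needed to place the spikes as a proxy for R(D), and the
# AWGN capacity sketch above for the per-measurement bits.
import numpy as np

N, K = 10**7, 10**3                       # signal length and number of spikes

def capacity(snr_db):
    return 0.5 * np.log2(1 + 10 ** (snr_db / 10))

location_bits = K * np.log2(N / K)        # ~13,300 bits just to place the spikes

for snr_db in (10, -20):
    m_min = location_bits / capacity(snr_db)
    print(f"SNR = {snr_db:>4} dB: at least ~{m_min:,.0f} measurements")
```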

  18. Upper bound (achievable) in progress…

  19. CS reconstruction meets channel coding

  20. Why is reconstruction expensive? Culprit: dense, unstructured measurement matrix [slide figure: dense matrix applied to a sparse signal with few nonzero entries, producing the measurements]

  21. Fast CS reconstruction • LDPC measurement matrix (sparse) • Only 0/1 entries in the measurement matrix • Each row of the matrix contains randomly placed 1’s • Fast matrix multiplication → fast encoding and reconstruction [slide figure: sparse measurement matrix, sparse signal with few nonzero entries, measurements]
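
A small sketch of such a sparse 0/1 measurement matrix using SciPy's sparse format; the row weight L below is an illustrative choice, not a value from the slides.

```python
# LDPC-style measurement matrix: each row holds a few randomly placed 1's,
# so encoding is a sparse matrix-vector product instead of a dense one.
import numpy as np
from scipy.sparse import csr_matrix

def ldpc_like_matrix(M, N, row_weight, seed=0):
    rng = np.random.default_rng(seed)
    cols = np.concatenate([rng.choice(N, size=row_weight, replace=False)
                           for _ in range(M)])
    rows = np.repeat(np.arange(M), row_weight)
    data = np.ones_like(cols, dtype=float)        # only 0/1 entries
    return csr_matrix((data, (rows, cols)), shape=(M, N))

N, M, K, L = 10_000, 2_000, 50, 20
rng = np.random.default_rng(2)
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

Phi = ldpc_like_matrix(M, N, row_weight=L)
y = Phi @ x                                       # fast sparse encoding
print(Phi.nnz, "nonzeros instead of", M * N)      # 40,000 vs 20,000,000
```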

  22. Ongoing work: CS using BP [Sarvotham et al.] • Considering noisy CS signals • Application of belief propagation • BP over the real number field • sparsity is modeled as a prior in the graph • Low complexity • Provable reconstruction with noisy measurements • Success of LDPC+BP in channel coding carried over to CS!

  23. Summary • Determination of measurement rates in CS • measurements are bits: each measurement provides at most C bits • lower bound on measurement rate • direct relationship to rate-distortion content • Compressed sensing meets information theory • Additional research directions • promising results with LDPC measurement matrices • upper bound (achievable) on the number of measurements dsp.rice.edu/cs
