
A Weighted Average of Sparse Representations is Better than the Sparsest One Alone


Presentation Transcript


  1. A Weighted Average of Sparse Representations is Better than the Sparsest One Alone SIAM Conference on Imaging Science ’08 Michael Elad and Irad Yavneh Presented by Dehong Liu, ECE, Duke University, July 24, 2009

  2. Outline • Motivation • A mixture of sparse representations • Experiments and results • Analysis • Conclusion

  3. Motivation • Noise removal problem: y = x + v, in which y is the measured signal, x is the clean signal, and v is zero-mean i.i.d. Gaussian noise. • Sparse representation: x = Dα, in which D ∈ R^(n×m) with n < m, and α is a sparse vector. • Compressive sensing problem • Orthogonal Matching Pursuit (OMP) yields the sparsest representation. • Question: “Does this mean that other competitive and slightly inferior sparse representations are meaningless?”
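A standard way to write the sparse-coding problem that OMP greedily approximates, using the residual (noise) threshold T that reappears in the experiments; this formulation is a sketch, not copied from the slides:

```latex
% Sparsest representation consistent with the noise level:
% minimize the number of non-zeros subject to a residual bound T.
\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_0
  \quad \text{subject to} \quad \|y - D\alpha\|_2^2 \le T
```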

  4. A mixture of sparse representations • How to generate a set of sparse representations? • Randomized OMP • How to fuse these sparse representations? • A plain averaging

  5. OMP algorithm
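The slide body is an image in the transcript. As a substitute, here is a minimal NumPy sketch of the standard OMP loop; the stopping rule and variable names are mine, not the authors' exact pseudocode:

```python
import numpy as np

def omp(D, y, noise_threshold):
    """Orthogonal Matching Pursuit (minimal sketch).

    Greedily adds the atom most correlated with the residual,
    re-fits the coefficients by least squares on the current support,
    and stops once the residual energy falls below noise_threshold.
    """
    n, m = D.shape
    residual = y.copy()
    support = []
    coeffs = np.zeros(m)
    while residual @ residual > noise_threshold and len(support) < n:
        correlations = np.abs(D.T @ residual)
        correlations[support] = 0.0          # never re-pick a chosen atom
        support.append(int(np.argmax(correlations)))
        # Least-squares re-fit on the current support.
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
        coeffs[support] = sol
    return coeffs
```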

  6. Randomized OMP
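Again the slide body is an image. The sketch below shows the key change relative to OMP: the next atom is drawn at random with probability growing with its squared correlation to the residual, so independent runs produce different competitive sparse representations. The exact form of the weighting constant c is an assumption based on the paper's Gaussian model, not a verbatim copy:

```python
import numpy as np

def rand_omp(D, y, noise_threshold, sigma=1.0, sigma_x=1.0, rng=None):
    """Randomized OMP (minimal sketch).

    Identical to OMP except for the selection step: an atom is sampled
    with probability proportional to exp(c * |d_i^T r|^2) instead of
    deterministically taking the best-correlated one.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, m = D.shape
    # Weighting constant from the Gaussian signal/noise model
    # (assumed form; see the paper for the exact expression).
    c = sigma_x**2 / (2.0 * sigma**2 * (sigma**2 + sigma_x**2))
    residual = y.copy()
    support, coeffs = [], np.zeros(m)
    while residual @ residual > noise_threshold and len(support) < n:
        scores = c * (D.T @ residual) ** 2
        scores[support] = -np.inf                # exclude chosen atoms
        weights = np.exp(scores - scores.max())  # numerically stable softmax
        weights /= weights.sum()
        support.append(int(rng.choice(m, p=weights)))
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
        coeffs[support] = sol
    return coeffs
```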

  7. Experiments and results • Model: y = x + v = Dα + v • D: a 100×200 random dictionary with entries drawn from N(0,1), and then with columns normalized; • α: a random representation with k = 10 non-zeros chosen at random and with values drawn from N(0,1); • v: white Gaussian noise with entries drawn from N(0,1); • Noise threshold in the OMP algorithm T = 100 (presumably matching the expected noise energy E‖v‖² = nσ² = 100); • Run OMP once, and RandOMP 1000 times.
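A reproduction sketch of this setup in NumPy (the seed and variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 200, 10

# 100x200 dictionary, N(0,1) entries, columns normalized to unit length.
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)

# Sparse representation: k = 10 non-zeros at random locations, N(0,1) values.
alpha = np.zeros(m)
alpha[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)

# Noisy measurement; T = 100 matches the expected noise energy n * sigma^2.
x = D @ alpha
y = x + rng.standard_normal(n)
T = 100.0
```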

  8. Observations

  9. Sparse vector reconstruction The average representation over 1000 RandOMP representations is not sparse at all.

  10. Denoising factor = ‖x̂ − x‖² / ‖y − x‖² (relative mean-squared error; below 1 means the estimate is closer to the clean signal than the measurement is). Denoising factor based on 1000 experiments. Run RandOMP 100 times for each experiment.
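Continuing the setup sketch from slide 7 and reusing the omp / rand_omp sketches above, the denoising factor could be computed as:

```python
import numpy as np

def denoising_factor(x_hat, x, y):
    # Reconstruction error relative to the raw noise energy.
    return np.sum((x_hat - x) ** 2) / np.sum((y - x) ** 2)

# Single OMP estimate vs. the average of 100 RandOMP representations.
x_omp = D @ omp(D, y, T)
alpha_avg = np.mean([rand_omp(D, y, T) for _ in range(100)], axis=0)
x_avg = D @ alpha_avg

print(denoising_factor(x_omp, x, y), denoising_factor(x_avg, x, y))
```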

  11. Performance with different parameters

  12. Analysis RandOMP is an approximation of the Minimum-Mean-Squared-Error (MMSE) estimate.
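In equation form, as a sketch of the paper's analysis: the MMSE estimate is a posterior-weighted average of per-support estimates, and RandOMP's empirical average of individually sparse representations mimics this weighting:

```latex
% MMSE estimate as a weighted average over candidate supports S;
% each E[x | y, S] is sparse, but the weighted average is not.
\hat{x}_{\mathrm{MMSE}} = \mathbb{E}[x \mid y]
  = \sum_{S} P(S \mid y)\, \mathbb{E}[x \mid y, S]
```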

  13. Comparison (figure): relative mean-squared error versus σ for eight curves: empirical and theoretical Oracle, empirical and theoretical MMSE, empirical and theoretical MAP, OMP, and RandOMP. The above results correspond to a 20×30 dictionary. Parameters: true support = 3, σ_x = 1, averaged over 1000 experiments.

  14. Conclusion • The paper shows that averaging several sparse representations of a signal leads to better denoising, because the average approximates the MMSE estimator.
