Image Decomposition and Inpainting By Sparse & Redundant Representations
Michael Elad (The Technion), David L. Donoho (Stanford), Jean-Luc Starck (CEA-Saclay)

Presentation Transcript


  1. Image Decomposition and Inpainting By Sparse & Redundant Representations
Michael Elad, The Computer Science Department, The Technion – Israel Institute of Technology, Haifa 32000, Israel
David L. Donoho, Statistics Department, Stanford, USA
Jean-Luc Starck, CEA – Service d'Astrophysique, CEA-Saclay, France

  2. General
• Sparsity and over-completeness have important roles in analyzing and representing signals.
• Today we discuss the image separation task (image = cartoon + texture) and inpainting based on it.
• Our approach: a global treatment of these problems.
• Why decompose an image? Because treating each part separately leads to better results, e.g. in edge detection, inpainting, and compression.

  3. Separation

  4. Atom (De-)Composition
• Given a signal s, we are often interested in its representation (transform) as a linear combination of 'atoms' from a given dictionary: $s = \Phi\alpha$, where $s \in \mathbb{R}^N$ and $\Phi$ is an $N \times L$ matrix whose columns are the atoms.
• If the dictionary is over-complete (L > N), there are numerous ways to obtain the 'atom-decomposition'.
• Among those possibilities, we consider the sparsest.
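
To make the atom-decomposition idea concrete, here is a minimal NumPy sketch that builds a random over-complete dictionary and greedily seeks a sparse representation with matching pursuit. The dictionary, the synthetic signal, and the pursuit routine are illustrative stand-ins, not the dictionaries or algorithms used later in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 64, 256                          # signal length N, number of atoms L > N (over-complete)
Phi = rng.standard_normal((N, L))
Phi /= np.linalg.norm(Phi, axis=0)      # unit-norm atoms stored as columns

alpha_true = np.zeros(L)
alpha_true[rng.choice(L, 5, replace=False)] = rng.standard_normal(5)
s = Phi @ alpha_true                    # synthesize s as a sparse combination of atoms

def matching_pursuit(Phi, s, n_iter):
    """Greedy atom decomposition: repeatedly pick the atom most correlated with the residual."""
    residual = s.copy()
    alpha = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        c = Phi[:, j] @ residual        # projection coefficient (atoms are unit-norm)
        alpha[j] += c
        residual -= c * Phi[:, j]
    return alpha

alpha_hat = matching_pursuit(Phi, s, n_iter=25)
print("residual norm:", np.linalg.norm(s - Phi @ alpha_hat))
```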

  5. Decomposition – Definition
• Our assumption: the signal s is a mixture of a member of the family of cartoon images and a member of the family of texture images.
• Our inverse problem: given s, find its building parts and the mixture weights.

  6. L x = x = N x is chosen such that the representation of are non-sparse: x is chosen such that the representation of are sparse: Use of Sparsity We similarly construct y to sparsify Y’s while being inefficient in representing the X’s. Sparse representations for Image Decomposition

  7. Choice of Dictionaries
• Educated guess: texture could be represented by a local overlapped DCT, and cartoon could be built by curvelets/ridgelets/wavelets (depending on the content).
• An alternative: training the dictionaries, e.g. from example images.
• Note that if we desire to enable partial support and/or different scales, the dictionaries must have multiscale and locality properties in them.
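
To make the 'educated guess' for the texture part concrete, the sketch below constructs an orthonormal 2-D DCT dictionary for small image patches, the building block behind a local DCT. The patch size is an illustrative assumption (the talk uses 32×32 blocks), and the cartoon dictionaries (curvelets, etc.) are not shown; overlapping the blocks across the image is what turns this complete basis into a redundant, local dictionary.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis for length-n signals (rows are the basis vectors)."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    D = np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    D[0] *= 1.0 / np.sqrt(n)
    D[1:] *= np.sqrt(2.0 / n)
    return D

b = 8                                   # illustrative patch size; the talk uses 32x32 blocks
D1 = dct_matrix(b)
Phi_dct = np.kron(D1, D1).T             # columns = vectorized 2-D DCT atoms for b-by-b patches
print(Phi_dct.shape)                    # (64, 64): a complete basis per block; redundancy
                                        # comes from applying it on overlapping blocks
```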

  8. y x = + Decomposition via Sparsity Why should this work? Sparse representations for Image Decomposition

  9. y x = + • If , it is necessarily the sparsest one possible, and it will be found. Uniqueness Rule - Implications • For dictionaries effective in describing the ‘cartoon’ and ‘texture’ contents, we could say that the decomposition that leads to separation is the sparsest one possible. Sparse representations for Image Decomposition

  10. Noise Considerations
• Forcing an exact representation is sensitive to additive noise and model mismatch.
• Recent results [Tropp '04, Donoho et al. '04] show that the noisy case generally meets similar rules of uniqueness and equivalence.

  11. Artifacts Removal
• We want to add external forces to help the separation succeed, even if the dictionaries are not perfect.

  12. Complexity
• Instead of 2N unknowns (the two separated images), we have 2L ≫ 2N of them (the two representations).
• Define two image unknowns to be $s_x = \Phi_x \alpha_x$ and $s_y = \Phi_y \alpha_y$ and obtain …
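
A hedged sketch of what the ellipsis above leads to, written over the image unknowns. The norms and weights here are my assumptions: $\mathbf{T}_x, \mathbf{T}_y$ denote the forward (analysis) transforms of the two dictionaries, $\lambda$ is a fidelity weight, and $\gamma$ weights the TV 'external force' of the previous slide.

$$\{\hat{s}_x,\hat{s}_y\} = \arg\min_{s_x,s_y}\; \|\mathbf{T}_x s_x\|_1 + \|\mathbf{T}_y s_y\|_1 + \lambda\,\|s - s_x - s_y\|_2^2 + \gamma\,\mathrm{TV}(s_x).$$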

  13. Simplification – Justifications
• Heuristic: (1) a bounding function; (2) the relation to BCR (Block-Coordinate Relaxation); (3) the relation to MAP estimation.
• Theoretic: see recent results by D. L. Donoho.

  14. Algorithm
An algorithm was developed to solve the above problem:
• It alternates between an update of sx and an update of sy.
• Every update (of either sx or sy) is done by a forward and a backward fast transform – this is the dominant computational part of the algorithm.
• The update is performed using diminishing soft-thresholding (similar to BCR, but sub-optimal due to the non-unitary dictionaries).
• The TV part is taken care of by simple gradient descent.
• Convergence is obtained after 10–15 iterations.
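
A schematic NumPy sketch of the alternating update just described. The transform handles Tx/Ty are placeholders (curvelets and a local DCT in the talk), the TV gradient is a rough smoothed approximation, and the threshold schedule and weights are illustrative assumptions rather than the authors' actual settings.

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft-thresholding (shrinkage) of transform coefficients."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def tv_gradient(img, eps=1e-8):
    """Rough gradient of a smoothed Total-Variation penalty, via forward differences."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(gx**2 + gy**2 + eps)
    return -(np.diff(gx / mag, axis=1, prepend=0.0) + np.diff(gy / mag, axis=0, prepend=0.0))

def separate(s, Tx, Tx_inv, Ty, Ty_inv, n_iter=15, gamma=0.1):
    """Alternate updates of the cartoon part (sx) and the texture part (sy) of the image s.

    Tx/Tx_inv and Ty/Ty_inv are forward/backward fast transforms of the two dictionaries
    (placeholders here); each update analyzes the current residual, shrinks the
    coefficients with a diminishing threshold, and synthesizes back.
    """
    sx = np.zeros_like(s)
    sy = np.zeros_like(s)
    for t in np.linspace(np.abs(s).max(), 0.0, n_iter):    # diminishing threshold
        sx = Tx_inv(soft_threshold(Tx(s - sy), t))          # update the cartoon part
        sx = sx - gamma * tv_gradient(sx)                   # TV handled by gradient descent
        sy = Ty_inv(soft_threshold(Ty(s - sx), t))          # update the texture part
    return sx, sy
```

For a quick experiment one could plug in, e.g., `lambda u: scipy.fft.dctn(u, norm='ortho')` and `lambda c: scipy.fft.idctn(c, norm='ortho')` as one of the transform pairs.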

  15. Results 1 – Synthetic Case
Figure panels:
• Original image, composed as a combination of texture and cartoon.
• The separated texture (spanned by global DCT functions).
• The separated cartoon (spanned by 5-layer curvelet functions + LPF).
• The very low-frequency content, removed prior to the separation.

  16. Results 2 – Synthetic + Noise
Figure panels:
• Original image, composed as a combination of texture, cartoon, and additive Gaussian noise.
• The separated texture (spanned by global DCT functions).
• The separated cartoon (spanned by 5-layer curvelet functions + LPF).
• The residual, being the identified noise.

  17. Results 3 – Edge Detection
Figure panels:
• Edge detection on the original image.
• Edge detection on the cartoon part of the image.

  18. Results 4 – Good Old 'Barbara'
Figure panels:
• Original 'Barbara' image.
• Separated cartoon using curvelets (5 resolution layers).
• Separated texture using a local overlapped DCT (32×32 blocks).

  19. Results 4 – Zoom In
Figure panels:
• Zoom in on the result shown in the previous slide (the texture part), alongside the same part taken from Vese et al.
• Zoom in on the result shown in the previous slide (the cartoon part), alongside the same part taken from Vese et al.

  20. Results 5 – Gemini
Figure panels:
• The original image – galaxy SBS 0335-052 as photographed by Gemini.
• The cartoon part, spanned by wavelets.
• The texture part, spanned by the global DCT.
• The residual, being additive noise.

  21. Inpainting

  22. Inpainting – Core Idea
• Assume: the signal s has been created by $s = \Phi\alpha_0$ with a very sparse $\alpha_0$.
• Missing values in s imply missing rows in this linear system. This can be represented as multiplying $s = \Phi\alpha_0$ by a mask matrix $\mathbf{M}$.
• By removing these rows, we get the reduced system $\mathbf{M}s = \mathbf{M}\Phi\alpha_0$.
• Now solve: find the sparsest $\alpha$ satisfying $\mathbf{M}\Phi\alpha = \mathbf{M}s$.
• If $\alpha_0$ was sparse enough, it will be the solution of the above problem! Thus, computing $\Phi\alpha_0$ recovers s perfectly.
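
A self-contained NumPy toy example of this core idea, with a random dictionary, a synthetic sparse signal, and a simple greedy pursuit standing in for the talk's dictionaries and algorithms: rows of $\Phi$ that correspond to missing samples are deleted, the sparse representation is estimated from what remains, and synthesizing with the full $\Phi$ fills the holes.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, k = 64, 256, 5
Phi = rng.standard_normal((N, L))
Phi /= np.linalg.norm(Phi, axis=0)
alpha0 = np.zeros(L)
alpha0[rng.choice(L, k, replace=False)] = rng.standard_normal(k)
s = Phi @ alpha0                         # the full signal, created with a very sparse alpha0

keep = rng.random(N) > 0.3               # mask M: roughly 30% of the samples go missing
Phi_m, s_m = Phi[keep], s[keep]          # M*Phi and M*s -- the missing rows are removed

# Greedy pursuit of a sparse alpha explaining only the observed samples.
residual, alpha_hat = s_m.copy(), np.zeros(L)
for _ in range(25):
    j = int(np.argmax(np.abs(Phi_m.T @ residual)))
    c = (Phi_m[:, j] @ residual) / (Phi_m[:, j] @ Phi_m[:, j])   # masked atoms are not unit-norm
    alpha_hat[j] += c
    residual -= c * Phi_m[:, j]

s_inpainted = Phi @ alpha_hat            # synthesize with the FULL dictionary -> holes are filled
print("reconstruction error:", np.linalg.norm(s - s_inpainted))
```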

  23. Application – Inpainting
• For separation: what if some values in s are unknown (with known locations!!!)?
• The resulting image will be the inpainted outcome.
• Interesting comparison to [Bertalmio et al. '02] – see next.
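
Combining this with the separation objective sketched earlier gives a plausible form of the inpainting-by-separation problem (again my reconstruction, not the slide's formula; the mask $\mathbf{M}$ simply removes the missing pixels from the fidelity term so that they exert no force):

$$\{\hat{s}_x,\hat{s}_y\} = \arg\min_{s_x,s_y}\; \|\mathbf{T}_x s_x\|_1 + \|\mathbf{T}_y s_y\|_1 + \lambda\,\|\mathbf{M}(s - s_x - s_y)\|_2^2 + \gamma\,\mathrm{TV}(s_x),$$

with the inpainted image then taken as $\hat{s}_x + \hat{s}_y$, including at the originally missing locations.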

  24. Application – Inpainting

  25. Results 6 – Inpainting
Figure panels: source, outcome, cartoon part, texture part.

  26. Results 7 – Inpainting
Figure panels: source, outcome, cartoon part, texture part.

  27. Results 8 – Inpainting
Figure panels: source, outcome.

  28. Results 9 – Inpainting

  29. Summary
• Over-completeness and sparsity are powerful in representing signals.
• Application? Decompose an image into cartoon + texture.
• Theoretical justification? We show theoretical results explaining how this could lead to successful separation; we also show that pursuit algorithms are expected to succeed.
• Practical issues? We present ways to robustify the process, and apply it to image inpainting.
