
Single Image Super-Resolution Using Sparse Representation



Presentation Transcript


  1. Single Image Super-Resolution Using Sparse Representation. Michael Elad, The Computer Science Department, The Technion – Israel Institute of Technology, Haifa 32000, Israel. Joint work with Roman Zeyde and Matan Protter.

  2. The Super-Resolution Problem. The low-resolution image is produced from the high-resolution one by blurring (H), decimation (S), and the addition of white Gaussian noise v. Our task: reverse this process – recover the high-resolution image from the low-resolution one.
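The diagram on this slide amounts to a linear degradation model. A compact way to write it (the symbols yh for the high-res. image and z for the measured low-res. one are my shorthand, not taken from the slide):

$$ z = S\,H\,y_h + v, \qquad v \sim \mathcal{N}(0, \sigma^2 I), $$

where H blurs, S decimates, and v is the white Gaussian noise.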

  3. Single Image Super-Resolution: the Recovery Algorithm. The reconstruction process should rely on: • the given low-resolution image, • the knowledge of S, H, and the statistical properties of v, and • the image behavior (prior). In our work: • we use a patch-based sparse and redundant representation prior, and • we follow the work by Yang, Wright, Huang, and Ma [CVPR 2008, IEEE-TIP – to appear], proposing an improved algorithm.

  4. Core Idea (1) – Work on Patches. We interpolate the low-res. image (operator Q) in order to align the coordinate systems. Every patch from the interpolated image should go through a process of resolution enhancement, and the improved patches are then merged together into the final image (by averaging).
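A minimal sketch of this extract-and-merge-by-averaging step (the function names, the unit step, and the use of NumPy are illustrative choices, not taken from the slides):

import numpy as np

def extract_patches(img, size=9, step=1):
    # Collect all overlapping size-by-size patches of a 2-D image, row by row.
    H, W = img.shape
    return np.array([img[i:i+size, j:j+size]
                     for i in range(0, H - size + 1, step)
                     for j in range(0, W - size + 1, step)])

def merge_patches(patches, shape, size=9, step=1):
    # Put the (enhanced) patches back in place and average the overlaps.
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    H, W = shape
    idx = 0
    for i in range(0, H - size + 1, step):
        for j in range(0, W - size + 1, step):
            acc[i:i+size, j:j+size] += patches[idx]
            cnt[i:i+size, j:j+size] += 1
            idx += 1
    return acc / np.maximum(cnt, 1)

Enhancing each patch between extract_patches and merge_patches is exactly the per-patch resolution-enhancement step the slide refers to.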

  5. Core Idea (2) – Learning [Yang et al. ’08]. We shall perform a sparse decomposition of the low-res. patch w.r.t. a learned dictionary, and we shall construct the high-res. patch using the same sparse representation, imposed on a second (high-res.) dictionary. And now, let's go into the details …

  6. The Sparse-Land Prior [Aharon & Elad, ’06]. Extraction of a patch from yh in location k is performed by a patch-extraction operator. Model assumption: every such patch can be represented sparsely over the dictionary Ah (n rows – the patch dimension, m columns – the atoms).
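Written out (the extraction operator Rk and the sparsity bound L are my notation for what the slide shows graphically):

$$ p_k^h = R_k\, y_h, \qquad p_k^h \approx A_h\, q_k \ \ \text{with} \ \ \|q_k\|_0 \le L \ll m. $$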

  7. Low Versus High-Res. Patches. • L = blur + decimation + interpolation. • This expression relates the low-res. patch (taken from the interpolated image) to the high-res. one.
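The relation the slide refers to can be written as (the patch symbols are my shorthand):

$$ p_k^\ell \approx L\, p_k^h, \qquad L = Q\,S\,H, $$

i.e., a low-res. patch is (approximately) the blurred, decimated and re-interpolated version of its high-res. counterpart.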

  8. Low Versus High-Res. Patches. Since we interpolate the low-res. image in order to align the coordinate systems, qk is ALSO the sparse representation of the low-resolution patch, with respect to the dictionary LAh.
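Chaining the two previous relations gives the key identity behind the method (again in my notation):

$$ p_k^\ell \approx L\, A_h\, q_k = A_\ell\, q_k, \qquad A_\ell := L\, A_h, $$

so the same sparse code qk serves both the low-res. dictionary Aℓ and the high-res. dictionary Ah.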

  9. Training the Dictionaries – General. We start from one or more training pairs of low- and high-resolution images; the low-res. image is interpolated, both are pre-processed (low-res. and high-res. separately), and we obtain a set of matching patch-pairs that will be used for the dictionary training.

  10. Alternative: Bootstrapping. Take the given image to be scaled-up and simulate the degradation process on it (blur H, decimation S, and additive white Gaussian noise v); use this pair of images to generate the patches for the training.
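A minimal sketch of this bootstrapping step, assuming a Gaussian blur, integer decimation, and bicubic re-interpolation (these specific choices, and the helper name, are illustrative, not specified on the slide):

import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def bootstrap_pair(image, factor=2, blur_sigma=1.0, noise_std=0.0):
    # Simulate the degradation on the given image to obtain a matching
    # (interpolated low-res, high-res) pair for on-line training.
    # Assumes the image dimensions are divisible by the decimation factor.
    blurred = gaussian_filter(image, blur_sigma)          # blur H (Gaussian here - an assumption)
    low = blurred[::factor, ::factor]                     # decimation S
    low = low + noise_std * np.random.randn(*low.shape)   # additive WGN v
    low_interp = zoom(low, factor, order=3)               # interpolation Q (bicubic)
    return low_interp, image

The matching patch-pairs are then extracted from low_interp and image, exactly as in the off-line setting.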

  11. Pre-Processing High-Res. Patches. The high-res. patches (size 9×9) are extracted from the difference between the high-res. image and the interpolated low-res. one.

  12. Pre-Processing Low-Res. Patches. The interpolated low-res. image is filtered with four filters – [0 1 -1], [0 1 -1]T, [-1 2 -1], and [-1 2 -1]T. We extract patches of size 9×9 from each of these filtered images and concatenate them to form one vector of length 324, whose dimension is then reduced by PCA to ~30.
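A sketch of this pre-processing, assuming separable 1-D filtering and a plain SVD-based PCA (the function names and the use of SciPy/NumPy are illustrative):

import numpy as np
from scipy.ndimage import correlate1d

def lowres_features(interp_img):
    # Four feature maps of the interpolated image, one per filter on the slide.
    g = np.array([0.0, 1.0, -1.0])    # [0 1 -1]
    l = np.array([-1.0, 2.0, -1.0])   # [-1 2 -1]
    return [correlate1d(interp_img, g, axis=1),   # horizontal first derivative
            correlate1d(interp_img, g, axis=0),   # vertical first derivative
            correlate1d(interp_img, l, axis=1),   # horizontal second derivative
            correlate1d(interp_img, l, axis=0)]   # vertical second derivative

def pca_basis(X, dim=30):
    # X holds one 324-long feature vector per training location (one per row);
    # returns the 324-by-dim projection matrix B used to reduce the dimension.
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:dim].T

Each training location contributes four 9×9 patches (one per feature map), concatenated into the 324-long vector that pca_basis then projects to ~30 dimensions.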

  13. Training the Dictionaries: K-SVD. The low-res. dictionary Aℓ is trained with K-SVD: 40 iterations, sparsity L=3, m=1000 atoms, reduced patch dimension N=30. For an image of size 1000×1000 pixels, there are ~12,000 examples to train on. Remember: given a low-res. patch to be scaled-up, we start the resolution enhancement by sparse coding it, to find qk.
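The K-SVD training referred to here solves (in my notation, with the pre-processed low-res. training vectors as examples):

$$ \min_{A_\ell,\ \{q_k\}} \sum_k \left\| p_k^\ell - A_\ell\, q_k \right\|_2^2 \quad \text{s.t.} \quad \|q_k\|_0 \le L, $$

alternating between sparse coding of the examples (OMP) and an update of the dictionary atoms.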

  14. Training the Dictionaries: Ah. Remember: given a low-res. patch to be scaled-up, we start the resolution enhancement by sparse coding it to find qk, and then the high-res. patch is obtained by multiplying qk by Ah. Thus, Ah should be designed such that Ah·qk approximates the corresponding high-res. training patch.
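A direct way to carry out this design is a least-squares fit (this closed form is my reading of the slide; Ph stacks the high-res. training patches and Q the corresponding sparse codes as columns):

$$ \hat{A}_h = \arg\min_{A_h} \sum_k \left\| p_k^h - A_h\, q_k \right\|_2^2 = P_h\, Q^T \left( Q\, Q^T \right)^{-1} = P_h\, Q^{+}. $$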

  15. Training the Dictionaries: • However, this approach disregards the fact that the resulting high-res. patches are not used directly as the final result, but are averaged due to the overlaps between them. • A better method (leading to a better final scaled-up image) would be to find Ah such that the error between the true high-res. image and the constructed (patch-averaged) image is minimized.
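In symbols (this is my rendering of the minimized error; Rk extracts the k-th patch, so its transpose puts a patch back in place and the inverse term performs the averaging):

$$ \hat{A}_h = \arg\min_{A_h} \left\| y_h - \Big( \sum_k R_k^T R_k \Big)^{-1} \sum_k R_k^T A_h\, q_k \right\|_2^2, $$

which is still a linear least-squares problem in the entries of Ah.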

  16. Overall Block-Diagram. Training the low-res. dictionary and training the high-res. dictionary (done on-line or off-line) feed the recovery algorithm (next slide).

  17. The Super-Resolution Algorithm. Interpolate the given image; filter it with [0 1 -1], [0 1 -1]T, [-1 2 -1], and [-1 2 -1]T; extract patches and multiply by B to reduce dimension; perform sparse coding using Aℓ to compute qk; multiply by Ah and combine the resulting patches by averaging the overlaps.
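A sketch of this recovery flow. It reuses the helper functions sketched earlier (lowres_features, extract_patches, merge_patches, pca_basis) and sklearn's OMP; these, and the final addition of the interpolated image (consistent with training on a difference image), are illustrative choices rather than the exact implementation:

import numpy as np
from scipy.ndimage import zoom
from sklearn.linear_model import orthogonal_mp

def scale_up(low_img, A_l, A_h, B, factor=2, size=9, sparsity=3):
    # A_l: reduced-dimension low-res. dictionary, A_h: high-res. dictionary,
    # B: the PCA projection matrix from the pre-processing stage.
    interp = zoom(low_img, factor, order=3)                     # interpolation
    feats = lowres_features(interp)                             # the four filters
    X = np.hstack([extract_patches(f, size).reshape(-1, size * size)
                   for f in feats])                             # one 324-vector per location
    X_red = X @ B                                               # multiply by B to reduce dimension
    Q = orthogonal_mp(A_l, X_red.T, n_nonzero_coefs=sparsity)   # sparse coding with A_l
    high = (A_h @ Q).T.reshape(-1, size, size)                  # multiply by A_h
    detail = merge_patches(high, interp.shape, size)            # combine by averaging the overlaps
    return interp + detail   # add back the interpolated image (training used a difference image)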

  18. Relation to the Work by Yang et al. – the differences, marked on the training and recovery block-diagram of the previous slides: • We interpolate to avoid coordinate ambiguity. • We train on a difference-image. • We reduce dimension prior to training. • We train using the K-SVD algorithm. • We use OMP for sparse-coding. • We use a direct approach to the training of Ah. • We can train on the given image or a training set. • We avoid post-processing. Bottom line: the proposed algorithm is much simpler, much faster, and, as we show next, it also leads to better results.

  19. Results (1) – Off-Line Training. The training image: 717×717 pixels, providing a set of 54,289 training patch-pairs.

  20. Results (1) – Off-Line Training. Panels: the given image, the ideal image, bicubic interpolation (PSNR=14.68dB), and the SR result (PSNR=16.95dB).

  21. Results (2) – On-Line Training. The given image, scaled-up (factor 2:1) using the proposed algorithm: PSNR=29.32dB (a 3.32dB improvement over bicubic).

  22. Results (2) – On-Line Training. Panels: the original, bicubic interpolation, and the SR result.

  23. Results (2) – On-Line Training. Panels: the original, bicubic interpolation, and the SR result.

  24. Comparative Results – Off-Line.

  25. Comparative Results – Example. Panels: ours vs. Yang et al.

  26. Summary. • Single-image scale-up – an important problem, with many attempts to solve it in the past decade. • The rules of the game: use the given image, the known degradation, and a sophisticated image prior. • Yang et al. [2008] – a very elegant way to incorporate a sparse & redundant representation prior. • We introduced modifications to Yang’s work, leading to a simpler, more efficient algorithm with better results. • More work is required to further improve the results obtained.
