
Fast Direct Super-Resolution by Simple Functions


Presentation Transcript


  1. Fast Direct Super-Resolution by Simple Functions Chih-Yuan Yang and Ming-Hsuan Yang 12/14/13

  2. Outline • Introduction • Related work • Proposed method • Experimental results • Conclusions

  3. Introduction • Super-Resolution = Predicting pixel intensities • From single or multiple image(s) • For the spatial or temporal domain

  4. Challenges • Effectiveness • Numerous pixel intensities • Stability • Various image content • Computational load

  5. Related Work - Bicubic Interpolation • Predicting intensities by interpolating nearby pixels • Simple and fast • But overly smooth
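For reference, a minimal bicubic upscaling sketch (assuming Pillow is available; file names and the magnification factor are placeholders, not from the slides):

```python
from PIL import Image

# Bicubic interpolation: each HR pixel is interpolated from a 4x4 neighborhood
# of LR pixels. Simple and fast, but edges and textures come out over-smooth.
scale = 4                                   # assumed magnification factor
lr = Image.open("input_lr.png")             # placeholder file name
hr = lr.resize((lr.width * scale, lr.height * scale), Image.BICUBIC)
hr.save("output_bicubic.png")
```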

  6. Self-Similar Exemplars [Glasner 11] • Predict HR patches through example patches found in a self-generated image pyramid (levels I-3 through I4 in the slide figure) • Sharp and clear edges • Blurred textures • Slow

  7. Gradient Profile Prior [Sun08] • Predict intensities by an edge model • Sharp edges, but jagged • Only effective for edges

  8. Sparse Representation [Yang08] • Predicting HR patch features by learned sparse dictionaries • Rich details • Noise artifacts along edges

  9. Proposed Method • Split the LR feature space for effectiveness • Use simple features and simple functions for speed • Exploit a large image set to collect training samples • Asymmetric computational load • slow for training • fast for testing

  10. Training Phase (1) Generate LR images • Generate LR images from HR training images • Extract patch pairs in LR and HR • Since the HR images undergo a convolution, we crop the LR patches affected by the predicted HR pixels (blue region)
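A rough sketch of this step (patch sizes, stride, and bicubic downsampling are assumptions; the exact cropping of the blue region shown on the slide is not reproduced):

```python
import numpy as np
from PIL import Image

scale, lr_psize, hr_psize = 4, 7, 12        # assumed settings, not from the slides

# Generate an LR training image from an HR one, then crop aligned patch pairs.
hr_img = Image.open("train_hr.png").convert("L")     # placeholder file name
lr_img = hr_img.resize((hr_img.width // scale, hr_img.height // scale),
                       Image.BICUBIC)
hr_arr = np.asarray(hr_img, dtype=np.float64)
lr_arr = np.asarray(lr_img, dtype=np.float64)

pairs = []                                  # list of (LR patch, HR patch)
for y in range(0, lr_arr.shape[0] - lr_psize, 2):
    for x in range(0, lr_arr.shape[1] - lr_psize, 2):
        lr_patch = lr_arr[y:y + lr_psize, x:x + lr_psize]
        hr_patch = hr_arr[y * scale:y * scale + hr_psize,
                          x * scale:x * scale + hr_psize]
        pairs.append((lr_patch, hr_patch))
```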

  11. Training Phase (2) Extract LR/HR features • Features: • LR: LR intensity minus the mean of the LR patch • HR: HR intensity minus the mean of the corresponding LR patch • High-frequency information • Cluster the LR training features by K-means • split the LR feature space into K subspaces
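A minimal sketch of the feature extraction and clustering, assuming scikit-learn's KMeans and the `pairs` list from the previous sketch (the value of K is an assumption; the slides only say it must be large):

```python
import numpy as np
from sklearn.cluster import KMeans

K = 4096                                    # assumed value

lr_feats, hr_feats = [], []
for lr_patch, hr_patch in pairs:
    m = lr_patch.mean()                     # mean of the LR patch
    lr_feats.append((lr_patch - m).ravel()) # LR feature: mean-removed intensities
    hr_feats.append((hr_patch - m).ravel()) # HR feature: HR minus the LR patch mean
lr_feats = np.array(lr_feats)
hr_feats = np.array(hr_feats)

# Cluster the LR features: this splits the LR feature space into K subspaces.
kmeans = KMeans(n_clusters=K, n_init=1, random_state=0).fit(lr_feats)
labels = kmeans.labels_
```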

  12. Cluster Centers and Patch Numbers

  13. Training Phase (3) Collect training instances • Randomly select sufficient LR/HR feature pairs for each cluster • For cluster centers without sufficient instances, fall back to bicubic interpolation
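One way to collect the per-cluster training instances (a sketch assuming 1000 instances per cluster, as on the next slide):

```python
import numpy as np

N = 1000                                    # instances per cluster (next slide)
rng = np.random.default_rng(0)

cluster_data = {}
for k in range(K):
    idx = np.flatnonzero(labels == k)
    if len(idx) < N:
        cluster_data[k] = None              # not enough samples: bicubic fallback
    else:
        keep = rng.choice(idx, size=N, replace=False)
        cluster_data[k] = (lr_feats[keep], hr_feats[keep])
```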

  14. Training Phase (4) Learn regression coefficients • C: regression coefficients mapping V to W • W: HR features (1000 instances) • V: LR features (1000 instances)
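The slide's symbols suggest a per-cluster linear regression from V to W; a plausible least-squares sketch (the bias term and exact formulation are assumptions, not taken from the slides):

```python
import numpy as np

# Per-cluster linear regression: C_k = argmin_C || W - [V 1] C ||^2
regressors = {}
for k, data in cluster_data.items():
    if data is None:
        continue                            # this cluster falls back to bicubic
    V, W = data                             # V: (1000, d_lr), W: (1000, d_hr)
    V1 = np.hstack([V, np.ones((len(V), 1))])   # bias column (assumption)
    C, *_ = np.linalg.lstsq(V1, W, rcond=None)
    regressors[k] = C                       # maps one LR feature to one HR feature
```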

  15. Test Phase • Crop LR patches • Compute the patch mean and extract LR features • Find the closest cluster center • Compute HR features • HR patch intensity = HR feature + LR patch mean • Average overlapping HR patches to form the output
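Putting the test-phase steps together as a sketch (assumed patch sizes and a stride of 1; the bicubic fallback for clusters without regressors is omitted):

```python
import numpy as np

def upscale(lr_arr, kmeans, regressors, scale=4, lr_psize=7, hr_psize=12):
    """Sketch of the test phase; sizes are assumptions, not the paper's values."""
    out = np.zeros((lr_arr.shape[0] * scale, lr_arr.shape[1] * scale))
    weight = np.zeros_like(out)
    for y in range(lr_arr.shape[0] - lr_psize):
        for x in range(lr_arr.shape[1] - lr_psize):
            patch = lr_arr[y:y + lr_psize, x:x + lr_psize]
            m = patch.mean()
            feat = (patch - m).ravel()
            k = kmeans.predict(feat[None, :])[0]        # closest cluster center
            C = regressors.get(k)
            if C is None:
                continue                                # bicubic fallback omitted
            hr_feat = np.append(feat, 1.0) @ C          # bias term as in training
            hr_patch = hr_feat.reshape(hr_psize, hr_psize) + m
            ys, xs = y * scale, x * scale
            out[ys:ys + hr_psize, xs:xs + hr_psize] += hr_patch
            weight[ys:ys + hr_psize, xs:xs + hr_psize] += 1
    return out / np.maximum(weight, 1)                  # average overlapping patches
```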

  16. Experimental Results - Child

  17. Experimental Results - Lena

  18. Experimental Results - IC • No ground-truth image

  19. Experimental Results - Mansion

  20. Experimental Results - Mermaid

  21. Experimental Results - Shore

  22. Experiments on BSD200 dataset

  23. Performance (averaged) and Execution Time for BSD200 Dataset

  24. Conclusions • Clear and sharp edges • Rich texture details • Easy to implement • Fast to generate SR images

  25. Insight • Split of the LR feature space • K has to be large enough • Exploit a large image set to collect sufficient examples for high-frequency patches • Regression methods make little difference

  26. Code and Dataset Available • https://eng.ucmerced.edu/people/cyang35 • Including code for training phase
