Super-Resolution Through Neighbor Embedding - PowerPoint PPT Presentation

Presentation Transcript

  1. Super-Resolution Through Neighbor Embedding Hong Chang, Dit-Yan Yeung and Yimin Xiong Presented By: Ashish Parulekar, Ritendra Datta, Shiva Kasiviswanathan and Siddharth Pal

  2. Contents • Introduction • What is super-resolution? • Multi-frame super-resolution • Single-frame super-resolution • Problem statement • Review of manifold learning • Experimental setup • Results • Comments

  3. Definition of Resolution (figure: same resolution vs. higher resolution) • Resolution → the ability to resolve/distinguish two objects • For image capture devices, resolution is determined by the number of sensing elements in the two dimensions • Higher resolution → larger image size, but larger image size does not imply higher resolution

  4. Superresolution (figure: LR frames → superresolution algorithm → high-resolution image) • Any algorithm/method capable of producing an image with a resolution greater than that of the input • Typically, the input is a sequence of low-resolution (LR) images (also referred to as frames) • LR frames are displaced from each other • They share a common region of interest

  5. Super Resolution Algorithms • Multi-frame superresolution • Single-frame superresolution

  6. Multi-frame superresolution • Introduced by Tsai and Huang • Most common approach to superresolution • Exploits information in several LR frames/images to generate a high-resolution (HR) image • Kim, Bose and Valenzuela • Recursive wavenumber-domain approach • Basic model employed

  7. Multi-frame superresolution • Ur and Gross • Ensemble of spatially shifted observations • Papoulis and Brown generalized sampling theorem • Irani and Peleg – Iterative Back Projection • Initial guess of HR image • Compute LR image estimates using imaging process • Back-project the error to improve HR image • S. Lertrattanapanich • Delaunay triangulation of registered images • Surface approximation
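The Irani–Peleg iterative back-projection loop above can be sketched in a few lines of NumPy. The block-average downsampler and replication upsampler below are stand-in assumptions for the true imaging process, not the operators from the original paper:

```python
import numpy as np

def downsample(hr, factor):
    """Simulate the imaging process by block averaging (a stand-in PSF)."""
    h, w = hr.shape
    return hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(lr, factor):
    """Back-project by pixel replication (a simple choice of kernel)."""
    return np.kron(lr, np.ones((factor, factor)))

def iterative_back_projection(lr, factor=2, n_iter=20, step=1.0):
    """Refine an HR guess until its simulated LR image matches the input."""
    hr = upsample(lr, factor)                 # initial guess of the HR image
    for _ in range(n_iter):
        lr_est = downsample(hr, factor)       # simulate the imaging process
        error = lr - lr_est                   # LR-domain residual
        hr += step * upsample(error, factor)  # back-project the error
    return hr
```

With these particular operators the simulated LR image matches the input after a single iteration; richer imaging models need more.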

  8. Multi-frame superresolution • Nguyen and Milanfar • Wavelet superresolution • Approximate given points by a sufficiently dense dyadic set • Initially, ignore detail coefficients and obtain regularized least squares solution for approximation coefficients • Use difference between data and approximation to solve for detail coefficients • Reconstruct on HR grid using coefficients

  9. SGWSR 1-D Method • M. Chappalli and N. K. Bose (2004) • HR reconstruction restricted to LR frames on semi-regular grids • Semi-regular grid → tensor product of 1-D arbitrarily irregular grids • ⇒ LR frames displaced only by translations • Approximates far-field imaging, e.g. aerial photography, satellite imaging • Based on 1-D lifting – splitting, prediction and updating • Applied on rows and subsequently on columns • Simultaneous noise reduction using hard and soft thresholding of wavelet coefficients

  10. Lifting to construct SGW (diagram: the forward transform splits s_j+1 into even/odd streams and applies predict P_j and update U_j to produce s_j and d_j; the inverse transform reverses U_j and P_j and multiplexes the streams back into s_j+1)

  11. Result of SGWSR 1-D Method (figure: sample LR frame and original shown for comparison) • DT bilinear interpolation: PSNR 26.4996 dB • DT bicubic interpolation: PSNR 25.6810 dB • SGWSR with soft thresholding: PSNR 28.5462 dB

  12. Single-frame superresolution • Spatial boundedness and bandlimitedness are not simultaneously possible • This is exploited to reconstruct higher frequencies • LSI interpolation cannot generate new information • ⇒ nonlinear and shift-varying interpolation is used • Examples • Directional filters, adaptive interpolation • Edge-preserving interpolation, perceptual edge enhancement • Learning-based methods • Lack of information – the major limitation

  13. Problem Formulation (figure: given training pairs of LR images Xsi and HR images Ysi, recover the unknown HR image Yt for a test LR image Xt)

  14. Manifold Definition Revisited • A manifold is a topological space that is locally Euclidean • In general, any object that is nearly "flat" on small scales is a manifold • Learning such structure from data is a very useful and challenging unsupervised learning problem

  15. Manifold Learning • Discover low-dimensional structure (a smooth manifold) in high-dimensional data • Linear approaches • Principal component analysis • Multidimensional scaling • Nonlinear approaches • Locally Linear Embedding (LLE) • ISOMAP • Laplacian eigenmaps

  16. Nonlinear Approaches – ISOMAP • Construct a neighbourhood graph G • For each pair of points in G, compute the shortest-path distances – the geodesic distances • Apply classical MDS to the geodesic distances to construct k-dimensional coordinate vectors • Geodesic: the shortest curve along the manifold connecting two points
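A toy NumPy sketch of these three ISOMAP steps. It uses Floyd–Warshall and a dense eigendecomposition, so it only scales to small datasets:

```python
import numpy as np

def isomap(X, n_neighbors=4, n_components=2):
    """Toy ISOMAP: kNN graph -> geodesic distances -> classical MDS."""
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Step 1: neighbourhood graph, keeping edges to each point's kNN
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        for j in np.argsort(D[i])[1:n_neighbors + 1]:
            G[i, j] = G[j, i] = D[i, j]
    # Step 2: geodesic distances via Floyd-Warshall shortest paths
    for k in range(n):
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])
    # Step 3: classical MDS on the geodesic distance matrix
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (G ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

For points sampled along a 1-D curve in 3-D, a one-component embedding recovers arc-length positions up to sign.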

  17. ISOMAP algorithm Pros/Cons Advantages: • Nonlinear • Globally optimal • Guaranteed asymptotically to recover the true dimensionality Drawback: • May not be stable; depends on the topology of the data • As N increases, pairwise distances provide better approximations to geodesics, but at greater computational cost

  18. Locally Linear Embedding (a.k.a. LLE) • LLE is based on simple geometric intuitions • Suppose the data consist of N real-valued vectors Xi, each of dimensionality D • Each data point and its neighbors are expected to lie on or close to a locally linear patch of the manifold

  19. Steps of locally linear embedding • Reconstruction errors are measured by the cost function ε(W) = Σi || Xi − Σj Wij Xj ||²

  20. Steps in LLE algorithm • Assign neighbors to each data point • Compute the weights Wij that best linearly reconstruct the data point from its neighbors, solving the constrained least-squares problem. • Compute the low-dimensional embedding vectors best reconstructed by Wij.
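The three steps above can be sketched in NumPy as follows. The small regularization term is an added assumption for numerical stability when the local Gram matrix is near-singular:

```python
import numpy as np

def lle_weights(X, k):
    """Steps 1-2 of LLE: sum-to-one reconstruction weights for each point."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]        # step 1: assign k neighbors
        Z = X[nbrs] - X[i]                   # center neighbors on the point
        G = Z @ Z.T                          # local Gram matrix
        G += 1e-6 * np.trace(G) * np.eye(k)  # regularize (added assumption)
        w = np.linalg.solve(G, np.ones(k))   # step 2: constrained least squares
        W[i, nbrs] = w / w.sum()             # enforce the sum-to-one constraint
    return W

def lle_embed(W, n_components):
    """Step 3: embedding from bottom eigenvectors of (I - W)^T (I - W)."""
    n = W.shape[0]
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]       # skip the constant eigenvector
```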

  21. The final step • Each high-dimensional observation is mapped to a low-dimensional vector representing global internal coordinates.

  22. Fit Locally, Think Globally • From "Nonlinear Dimensionality Reduction by Locally Linear Embedding" by Sam T. Roweis and Lawrence K. Saul

  23. Neighbor embedding method 1) For each patch xqt in image Xt: a) Find the set Nq of K nearest neighbors in Xs. b) Compute the reconstruction weights of the neighbors that minimize the error of reconstructing xqt.

  24. Neighbor embedding method (contd.) c) Compute the high-resolution embedding yqt using the appropriate high-resolution features of the K nearest neighbors and the reconstruction weights. 2) Construct the target high-resolution image Yt by enforcing local compatibility and smoothness constraints between adjacent patches obtained in step 1(c).

  25. Equations to fit • Minimize the local reconstruction error Eq = || xqt − Σp wqp xp ||² over the neighbors xp in Nq, subject to Σp wqp = 1 • The Gram matrix Gq has entries (xqt − xp)ᵀ(xqt − xp') for neighbor pairs p, p'

  26. Solution to the Constrained Least-Squares Problem Efficient way: 1) Rather than inverting the matrix Gq, solve the linear system Gq wq = 1 2) Normalize the weights so that they sum to one
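A sketch of this efficient solve in NumPy; the `reg` term is an added assumption to guard against a singular Gram matrix:

```python
import numpy as np

def reconstruction_weights(xq, neighbors, reg=1e-8):
    """Solve Gq w = 1 and normalize, instead of inverting Gq explicitly."""
    Z = neighbors - xq                           # shift so xq is the origin
    G = Z @ Z.T                                  # local Gram matrix Gq
    G += reg * np.trace(G) * np.eye(G.shape[0])  # regularize (added assumption)
    w = np.linalg.solve(G, np.ones(G.shape[0]))  # Gq w = 1
    return w / w.sum()                           # weights now sum to one
```

When xq lies inside the convex hull of its neighbors, the weighted sum of neighbors reproduces xq.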

  27. Final step • The final equation reconstructs each high-resolution patch as yqt = Σp wqp yp, summing over the K nearest neighbors' high-resolution patches • Step 2 of the algorithm is achieved by averaging the feature values in overlapped regions between adjacent patches
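The overlap-averaging in step 2 can be sketched as follows; the patch-position list and output shape are hypothetical parameters for illustration:

```python
import numpy as np

def assemble_patches(patches, positions, out_shape):
    """Average feature values in overlapped regions between adjacent patches."""
    acc = np.zeros(out_shape)
    cnt = np.zeros(out_shape)
    for patch, (r, c) in zip(patches, positions):
        h, w = patch.shape
        acc[r:r + h, c:c + w] += patch   # accumulate patch contributions
        cnt[r:r + h, c:c + w] += 1       # count how many patches cover each pixel
    return acc / np.maximum(cnt, 1)      # avoid divide-by-zero where uncovered
```

Averaging enforces the smoothness constraint cheaply: pixels covered by several patches take the mean of their predictions.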

  28. Problem Formulation (figure, repeated: training pairs Xsi / Ysi, test input Xt, unknown output Yt)

  29. Intuition • Patches of the image lie on a manifold (figure: LR training patches Xsi lie on a low-dimensional manifold; the corresponding HR patches Ysi lie on a high-dimensional manifold)

  30. Algorithm • Get feature vectors for each low resolution training patch. • For each test patch feature vector find K nearest neighboring feature vectors of training patches. • Find optimum weights to express each test patch vector as a weighted sum of its K nearest neighbor vectors. • Use these weights for reconstruction of that test patch in high resolution.

  31. Experimental Setup • Target magnification: 3× • Transform to YIQ color space • Feature selection • First derivative of luminance • Second derivative of luminance • Reconstruct Y; transfer I and Q from the low-resolution image
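A sketch of extracting these luminance-derivative features with NumPy's finite-difference gradients (the exact derivative filters used in the paper are not specified here, so `np.gradient` is an assumption):

```python
import numpy as np

def luminance_features(Y):
    """First- and second-order derivatives of luminance, stacked per pixel."""
    gy, gx = np.gradient(Y)       # first derivatives (rows, then columns)
    gyy, _ = np.gradient(gy)      # second derivative along rows
    _, gxx = np.gradient(gx)      # second derivative along columns
    return np.stack([gx, gy, gxx, gyy], axis=-1)  # H x W x 4 feature map
```

Each patch's feature vector is then the concatenation of these values over the patch's pixels.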

  32. Results • Experiments with images from the paper • Intuition: the authors' test set may be biased!

  33. Results (figure: training pair Xsi / Ysi; test input Xt and result Yt)

  34. Results (figure: training pair Xsi / Ysi; test input Xt and result Yt)

  35. Results (figure: training pair Xsi / Ysi; test input Xt and result Yt)

  36. Results (figure: training pair Xsi / Ysi; test input Xt and result Yt)

  37. Results (figure: outputs for K = 1 through K = 6)

  38. Results • As expected, good super-resolution images are generated • Now for the real test – our images!

  39. Results – their method, our data (figure: our test image at 1/3× obtained by scaling down, and the ground-truth image)

  40. Results – their method, our data (figure: high-resolution training images, (A) similar and (B) dissimilar, with the corresponding low-resolution training images)

  41. Results – their method, our data • We evaluate the results using the RMS error: Error = √( (1/n) Σ || Pground-truth − Pgenerated ||² ) • This essentially indicates the average luminance deviation between corresponding pixels in the ground-truth and generated images. • Remember, the chrominance components I and Q were copied into the generated image without change.
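A sketch of this RMS computation; note the squared differences inside the mean before the square root:

```python
import numpy as np

def rms_error(ground_truth, generated):
    """RMS luminance deviation between corresponding pixels of two images."""
    diff = ground_truth.astype(float) - generated.astype(float)
    return np.sqrt(np.mean(diff ** 2))
```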

  42. Results – similar training image (figure: outputs for K = 1 through K = 6)

  43. Results – different training image (figure: outputs for K = 1 through K = 6)

  44. Results – the graph! (figure: RMS error as a function of K)

  45. Comments on results • Best results obtained when using • Different images: K = 4 or 5 (conforming to what is stated in the paper) • Same images: K = 1 (why?)

  46. Comments • The method worked well with our test data • Care should be taken with image patches that do NOT form a manifold • Limitations on magnification • Size of / overlap between the patches • Many extensions are possible: • Diagonal gradients as LR features (increased dimensionality) • Using luminance itself as a feature in LR • Extension to video-shot super-resolution • Multiple training images

  47. Thank You