
Image Super-resolution Using Statistical Learning


Presentation Transcript


  1. Image Super-resolution Using Statistical Learning Preliminary Exam Presentation Karl Ni Professor Truong Nguyen Professor Nuno Vasconcelos Professor William Hodgkiss

  2. Outline • Problem Description: Image super-resolution • Background Information • Non-statistical Techniques • Statistical Techniques • Classification-based Approaches • Contributions • Regression-based Approach and Rationale • Spatial Domain SVM Superresolution • Frequency Domain SVM Superresolution • Results • Conclusion

  3. Problem Description • Convert a low-resolution image to a high-resolution one • Requires the addition of pixels • Single-frame image super-resolution fills in the missing information for the larger image, specifically the values that these new pixels take on.

  4. Non-statistics-based Interpolation Techniques • B-spline methods • Bilinear • Bicubic • Cosine-domain upscaling: zero padding [figure, frequency domain: the low-frequency coefficients of the small image are kept and the remaining coefficients are all zeros]
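As a point of reference for these non-statistical baselines, the sketch below upscales a block by a factor of two with bicubic interpolation and with DCT zero padding. It is a minimal illustration using SciPy and synthetic data; the ×2 amplitude correction after the inverse transform compensates for the orthonormal DCT normalization, and none of the function names come from the presentation itself.

```python
# Minimal sketch of the two non-statistical baselines on slide 4:
# bicubic interpolation and DCT-domain zero padding (2x upscaling).
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import zoom

rng = np.random.default_rng(0)
low_res = rng.random((32, 32))          # stand-in for a low-resolution block

# --- B-spline (bicubic) interpolation ---
bicubic = zoom(low_res, 2, order=3)     # cubic spline, 64 x 64 output

# --- Cosine-domain upscaling: zero padding ---
coeffs = dctn(low_res, type=2, norm='ortho')        # DCT-II of the small block
padded = np.zeros((64, 64))
padded[:32, :32] = coeffs                            # keep low-frequency coefficients
zero_pad = idctn(padded, type=2, norm='ortho') * 2   # factor 2 restores amplitude
                                                     # (sqrt(2) per dimension)
print(bicubic.shape, zero_pad.shape)
```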

  5. Current Statistical Learning Techniques • Instead of blindly guessing or filling in information, we can use the prior knowledge contained in a training set. • Nearest Neighbor: Freeman, Jones, Pasztor • Expectation Maximization: Atkins and Bouman • Estimating the image from training data lowers the MSE

  6. On the correlation between known and unknown information • There is some relationship between the even and odd components of a signal, just as there is some relationship between its low and high frequencies.

  7. Using past observations for future decisions [diagram: Observations and a Knowledge Base feed Decision Operations, which produce an Informed Decision] • Prior knowledge of the values and locations of the missing information • Exploit this knowledge as a relationship between the known and the unknown

  8. Application to Machine Perception • We would like a pattern-recognition framework for applying the knowledge base to observation data • Call the input random variables X, the data • Call the labels associated with the input random variables Y • We then have pairs: { (x1, y1), (x2, y2), …, (xN, yN), … }

  9. Statistical Learning: (Classification) • Two types of variables: • X : vector of observations (features) in the world • Y : state (class) of the world • X and Y are related by an unknown function f: X → Y

  10. Statistical Learning Goal [diagram: Observations xobs and the Knowledge Base { (x1, y1), (x2, y2), …, (xN, yN) } feed Decision Operations based on a cost function h(xobs), producing the informed decision ydec] • Goal: make h(x) = f(x) given the training data as the knowledge base

  11. Application to Super-resolution [figure: pixel locations] • What is the feature set x? • Can be a single pixel • Can be a vector of all the pixels • Can be a linear transformation of the pixels • Can be a kernelized transformation of the pixel values • Can be anything (reasonable)! • What is the h(x) that we are trying to learn? • Can be filter coefficients with input x = original pixels • Can be the actual pixel values • Can be anything (reasonable)! One such feature construction is sketched below.
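To make one choice of feature set concrete: in the sketch below, x is a vector of low-resolution pixels (a 3×3 patch) and the label y is a co-located missing high-resolution pixel. The patch size, the even-sample decimation, and the array layout are illustrative assumptions, not details taken from the presentation.

```python
# Build (x, y) training pairs: x = 3x3 low-resolution patch, y = the
# high-resolution pixel at the corresponding location (assumed setup).
import numpy as np

def make_training_pairs(high_res: np.ndarray):
    low_res = high_res[::2, ::2]            # LR image = even samples of HR image
    H, W = low_res.shape
    features, labels = [], []
    for r in range(1, H - 1):
        for c in range(1, W - 1):
            patch = low_res[r - 1:r + 2, c - 1:c + 2]      # 3x3 neighborhood
            features.append(patch.ravel())                 # feature vector x
            labels.append(high_res[2 * r + 1, 2 * c + 1])  # one missing HR pixel y
    return np.array(features), np.array(labels)

rng = np.random.default_rng(0)
hr = rng.random((64, 64))
X, y = make_training_pairs(hr)
print(X.shape, y.shape)   # (n_samples, 9), (n_samples,)
```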

  12. Expectation Maximization for Filter Design C. B. Atkins, 1998, "Classification Based Methods in Optimal Image Interpolation," Ph.D. dissertation, Purdue University, West Lafayette, IN, USA

  13. Minimizing the Risk Function (1/2) • We wish to learn the relationship f(x) = y and model it as best we can with h(x, α) • "0-1" loss function: L[y, h(x, α)] = 0 if y = h(x, α), and 1 if y ≠ h(x, α) • Minimize the risk, defined as the expected loss: R(α) = EX,Y{ L[y, h(x, α)] } = ∫ PX,Y(x, y) L[y, h(x, α)] dx dy = 0 · PX,Y[y = h(x, α)] + 1 · PX,Y[y ≠ h(x, α)] = PX,Y[y ≠ h(x, α)]

  14. Minimizing the Risk Function (2/2) • What function h(x, α) minimizes the risk? h* = argminh R(α) = argminh EX,Y{ L[y, h(x, α)] } = argminh PX,Y[y ≠ h(x)], so that pointwise h*(x) = argminh PY|X[y ≠ h(x) | x] = argminh 1 – PY|X[y = h(x) | x] = argmaxh PY|X[h(x) | x] = argmaxi PY|X[i | x] • In other words, the optimal value of h(x) = i, given an observation x, is the value i which maximizes the posterior PY|X(i | x)

  15. Learning Algorithms • Determine the value i that maximizes PY|X(i | x) • All methods must assume a model • Two different philosophies: • Generative methods • Model p(x, y) and use Bayes' rule to calculate p(y | x) • Possibly biased because of the model assumptions • Discriminant methods • Model p(y | x) directly and map accordingly • Possibly highly variable with few data points • A relatively new field in the past decade, now universally applied

  16. Linear Discriminants • The underlying concept is to use discriminant functions to estimate a boundary or regression in feature space • Use a hyperplane to create the class boundary: wTx + b = 0 • A decision is correct when y·g(x) = y·(wTx + b) > 0

  17. Pictorial Reference • The hyperplane wTx + b = 0 divides the feature space into two subspaces • Its distance to the origin is b/||w||, where ||w|| is the norm of w • The distance to the closest point is γ = mini (wTxi + b) / ||w||

  18. Support Vector Machines: Classification • Recall that the minimum distance to the nearest point is γ = mini (wTxi + b) / ||w|| • This is called the margin • It is natural to wish to maximize the margin (maximize this minimum distance) • Under the normalization mini yi(wTxi + b) = 1, the SVM classifier that maximizes the margin is found by minimizing ||w||2 subject to yi(wTxi + b) ≥ 1 for all i
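As a concrete illustration of the maximum-margin classifier described above, the sketch below fits a linear SVM on a toy two-class problem and reads the geometric margin off as 1/||w|| (the canonical normalization makes the functional margin equal to 1). The toy data, the choice of scikit-learn, and the large C value are assumptions made only for this illustration.

```python
# Toy illustration of a maximum-margin linear SVM classifier.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two linearly separable point clouds in a 2-D feature space.
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(+2, 0.5, (50, 2))])
y = np.array([-1] * 50 + [+1] * 50)

clf = SVC(kernel='linear', C=1e6)   # large C approximates the hard margin
clf.fit(X, y)

w, b = clf.coef_.ravel(), clf.intercept_[0]
margin = 1.0 / np.linalg.norm(w)    # geometric margin under yi(w^T xi + b) >= 1
print("decision boundary: w =", w, " b =", b)
print("margin =", margin)
print("number of support vectors:", clf.support_vectors_.shape[0])
```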

  19. The Soft Margin • Perhaps the data are not well behaved, and not all points can be separated • Introduce an extra "slack variable" ξi for each point

  20. Support Vector Machine Regression • Classification has a tendency to "discretize" the output • Regression can be thought of as a continuous version of classification, which would otherwise require an infinite number of classes • We will approximate the function implied by the known image information • That function is the relationship between the known and unknown elements

  21. Support Vector Regression • Soft-margin SVM for classification: minimize { ||w||2 + C Σ ξi } subject to yi(wTxi + b) ≥ 1 – ξi and ξi ≥ 0, for all i • Soft-margin SVM for regression: minimize { ||w||2 + C Σ (ξi+ + ξi-) } subject to yi – (wTxi + b) ≤ ε + ξi+, (wTxi + b) – yi ≤ ε + ξi-, and ξi+, ξi- ≥ 0 • Introducing multipliers αi+, αi-, ri+, ri- ≥ 0, the Lagrangian can be written: L(w, b, ξ+, ξ-) = wTw + C Σ (ξi+ + ξi-) + Σ αi-((wTxi + b) – yi – ε – ξi-) + Σ αi+(yi – (wTxi + b) – ε – ξi+) – Σ ri-ξi- – Σ ri+ξi+

  22. SVR Cost Function and Dual (2/2) • The optimization is, for an ε and C chosen a priori: maximize W(α+, α-) = -ε Σi (αi+ + αi-) + Σi (αi+ - αi-) yi - ½ Σi Σk (αi+ - αi-)(αk+ - αk-) k(xi, xk) subject to 0 ≤ αi+, αi- ≤ C and Σi (αi+ - αi-) = 0 • The regression estimate then has the form: f(x) = Σi (αi+ - αi-) k(x, xi) + b
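The expansion f(x) = Σi (αi+ - αi-) k(x, xi) + b can be checked numerically with an off-the-shelf ε-SVR: scikit-learn exposes the differences (αi+ - αi-) for the support vectors as dual_coef_. The toy data, the RBF kernel, and the fixed gamma are assumptions for this sketch only.

```python
# Verify that an epsilon-SVR prediction equals sum_i (a_i+ - a_i-) k(x, x_i) + b.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X).ravel() + 0.05 * rng.normal(size=200)

gamma = 0.5
svr = SVR(kernel='rbf', C=10.0, epsilon=0.05, gamma=gamma)
svr.fit(X, y)

X_test = rng.uniform(-3, 3, (5, 1))
# Manual evaluation of the dual expansion over the support vectors.
K = rbf_kernel(X_test, svr.support_vectors_, gamma=gamma)
manual = K @ svr.dual_coef_.ravel() + svr.intercept_[0]

print(np.allclose(manual, svr.predict(X_test)))   # True
```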

  23. Contributions • Unsupervised learning using SVMs • Application of support vector regression to the super-resolution problem • Direct application (i.e., pixel prediction) • Indirect application (i.e., filter coefficients) • Use of additional equations to add structure and improve results

  24. SVR Spatial Filter Selection [figure: regression f maps the input to a 3×3 set of filter coefficients c11 c12 c13 / c21 c22 c23 / c31 c32 c33] • Regression is used to find spatial-domain filters • Direct regression (predicting the pixel values themselves) actually works better, as in the sketch below
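The sketch below illustrates the direct application: an ε-SVR is trained to predict a missing high-resolution pixel from its 3×3 low-resolution neighborhood, reusing the (x, y) pair construction shown earlier. The patch geometry, the synthetic training image, and the SVR hyperparameters are assumptions, not the presentation's actual experimental setup.

```python
# Direct SVR super-resolution: predict missing HR pixels from LR patches.
import numpy as np
from sklearn.svm import SVR

def pairs(hr):
    """x = flattened 3x3 LR patch, y = one co-located missing HR pixel."""
    lr = hr[::2, ::2]
    X, y = [], []
    for r in range(1, lr.shape[0] - 1):
        for c in range(1, lr.shape[1] - 1):
            X.append(lr[r - 1:r + 2, c - 1:c + 2].ravel())
            y.append(hr[2 * r + 1, 2 * c + 1])
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
train_hr = rng.random((64, 64))          # stand-in for training image(s)
test_hr = rng.random((64, 64))           # stand-in for a test image

X_train, y_train = pairs(train_hr)
X_test, y_test = pairs(test_hr)

svr = SVR(kernel='rbf', C=10.0, epsilon=0.01).fit(X_train, y_train)
mse = np.mean((svr.predict(X_test) - y_test) ** 2)
print("MSE on held-out pixels:", mse)
```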

  25. Support Vector Regression: Frequency Domain • Work in the DCT domain for statistical (regression) purposes • A downsampled version of an image consists of all of its even samples • From the DCT-II of the downsampled image, can we reconstruct the DCT-II of the original image?

  26. Decimation in Time K. R. Rao and P. Yip, 1988, "Discrete Cosine Transform: Algorithms, Advantages, Applications," San Diego, CA, USA: Academic Press

  27. Decimation in Space

  28. Applying Learning to DIT & DIS • We can rewrite decimation in time for the DCT as a linear combination of the time/spatial-domain terms corresponding to the even and odd samples • Decimation in time: DCT-II(x) = k(m) (DCT-I(xeven) + DCT-I(xodd) + DCT-II(xeven) + DCT-II(xodd)) = k(m) { X1 + X2 + X3 + X4 } • Overall idea: DCT-II(x) = Input Signal + f(Input Signal) + g(Remaining Terms), where f is exactly known and g is to be estimated • DCT-II(X2N) = DCT-II(XN) + Known + Estimated

  29. Support Vector Regression • DCT-II(X2N) = DCT-II(XN) + Known + Estimated • Given DCT-II(XN), can we determine the estimated terms that will give us DCT-II(X2N)? • Our regression is thus Estimated = Σi (αi+ – αi-) K(xi, DCT-II[XN]) + b • This is done for all of the lower coefficients • Implemented with the LibSVM and LS-SVM regression packages (a simplified sketch follows)
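A much-simplified sketch of the frequency-domain idea: the feature vector is the DCT-II of the low-resolution block, and one SVR per high-resolution DCT coefficient regresses the terms that cannot be derived directly. Training a separate SVR for every coefficient, the synthetic blocks, and the scikit-learn backend (standing in for LibSVM / LS-SVM) are simplifying assumptions for illustration.

```python
# Simplified frequency-domain SVR: predict the HR DCT-II coefficients of an
# 8x8 block from the DCT-II of its 4x4 even-sample decimation.
import numpy as np
from scipy.fft import dctn
from sklearn.svm import SVR

rng = np.random.default_rng(0)
hr_blocks = rng.random((500, 8, 8))                 # stand-in training blocks
lr_blocks = hr_blocks[:, ::2, ::2]                  # decimation in space

X = np.array([dctn(b, type=2, norm='ortho').ravel() for b in lr_blocks])   # 16 features
Y = np.array([dctn(b, type=2, norm='ortho').ravel() for b in hr_blocks])   # 64 targets

# One regressor per high-resolution DCT coefficient.
models = [SVR(kernel='rbf', C=10.0, epsilon=0.01).fit(X, Y[:, j])
          for j in range(Y.shape[1])]

# Predict the HR spectrum of a new LR block.
new_lr = rng.random((4, 4))
x_new = dctn(new_lr, type=2, norm='ortho').ravel()[None, :]
hr_spectrum = np.array([m.predict(x_new)[0] for m in models]).reshape(8, 8)
print(hr_spectrum.shape)
```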

  30. Spatial Domain (3x3 Filter) vs Bilinear Filtering [result images: bilinear filtering vs. SVR spatial filtering]

  31. Spatial Domain (3x3 Filter) vs Bilinear Filtering

  32. Close-up of Spatial Filtering (filter size 3x3) [close-up images: SVR filtering vs. bilinear]

  33. Spatial Domain (5x5 Filter) vs Bilinear Filtering

  34. SVR Frequency Reconstruction vs Bilinear Interpolation, small training set (10 frames) [result images: bilinear interpolation vs. SVR frequency regression]

  35. SVM Frequency Reconstruction, small training set (10 frames)

  36. Comparison of SVR Algorithms: PSNR values (dB)
      Method      PSNR
      Bilinear    23.301
      Bicubic     22.209
      Spatial     25.995
      Frequency   26.843
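For reference, PSNR in the table above is the usual 10·log10(MAX² / MSE) in dB; the helper below computes it under the 8-bit convention. The function name, the test data, and the 8-bit peak value are assumptions of this sketch.

```python
# PSNR between a reconstructed image and its ground truth (8-bit convention).
import numpy as np

def psnr(reference: np.ndarray, reconstruction: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((reference.astype(np.float64) - reconstruction.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.random.default_rng(0).integers(0, 256, (64, 64))
noisy = np.clip(ref + np.random.default_rng(1).normal(0, 5, ref.shape), 0, 255)
print(round(psnr(ref, noisy), 3))
```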

  37. SVM Frequency Reconstruction vs Bilinear Interpolation, small training set (10 frames) [result images: bilinear interpolation vs. SVR frequency regression]

  38. Effect of Dimensionality [result images: 4 x 4 features vs. 8 x 8 features]

  39. Structured versus Direct SVR Regression [result images: structured frequency regression vs. direct frequency regression]

  40. Future Work • Apply super-resolution to the error residual in video • Denoising algorithms using support vector regression • Markov random fields, or some other model of the interrelationship between predicted values • Motion prediction values

  41. Questions?
