
Lecture 25: More on Lightness Recovery




  1. Lecture 25: More on Lightness Recovery CAP 5415

  2. Announcements • Project write-ups due December 2

  3. Goal: Find the albedo • Take this image (the observation) • Split it into two images that represent the shading and albedo of the scene (Figure: observation, shading information, albedo information)

  4. An image is formed as a product: Observed Image = Shading Image × Albedo (Reflectance) Image
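A minimal NumPy sketch of this multiplicative image-formation model (the array shapes and values below are illustrative, not from the lecture):

```python
import numpy as np

# Illustrative shading (smooth in practice) and piecewise-constant albedo.
shading = np.clip(np.random.rand(64, 64), 0.1, 1.0)
albedo = np.random.choice([0.2, 0.5, 0.9], size=(64, 64))

# The observed image is the element-wise product of the two.
observed = shading * albedo

# In the log domain the product becomes a sum, which is why working with
# (log-)image derivatives is convenient for this decomposition.
assert np.allclose(np.log(observed), np.log(shading) + np.log(albedo))
```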

  5. Our goal is to invert the process • Using images to represent characteristics that are intrinsic to the scene’s appearance • Known as the intrinsic image representation [Barrow and Tenenbaum 1978] (Figure: Observed Image, Shading Image, Albedo Image)

  6. Basic Strategy for Estimating Images • Pipeline: Observation → classify every derivative as a shading or albedo change → estimates of the horizontal and vertical derivatives → pseudo-inverse → estimate of the albedo image • Similar to Horn’s implementation
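A minimal sketch of the pseudo-inverse step, assuming the classified derivative estimates are available as full-size arrays gx and gy; this is an illustrative least-squares recovery, not necessarily the exact solver used in the original work:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def diff_operator(h, w, axis):
    """Sparse forward-difference operator on an h-by-w image,
    acting on the flattened image vector."""
    idx = np.arange(h * w).reshape(h, w)
    if axis == 1:                      # horizontal: I[i, j+1] - I[i, j]
        a, b = idx[:, :-1].ravel(), idx[:, 1:].ravel()
    else:                              # vertical:   I[i+1, j] - I[i, j]
        a, b = idx[:-1, :].ravel(), idx[1:, :].ravel()
    m = a.size
    rows = np.r_[np.arange(m), np.arange(m)]
    cols = np.r_[a, b]
    vals = np.r_[-np.ones(m), np.ones(m)]
    return sp.coo_matrix((vals, (rows, cols)), shape=(m, h * w)).tocsr()

def reconstruct_from_derivatives(gx, gy):
    """Least-squares ('pseudo-inverse') recovery of an image whose
    derivatives best match the estimated derivative images gx and gy.
    The result is only defined up to an additive constant."""
    h, w = gx.shape
    A = sp.vstack([diff_operator(h, w, axis=1), diff_operator(h, w, axis=0)])
    b = np.r_[gx[:, :-1].ravel(), gy[:-1, :].ravel()]
    x = spla.lsqr(A, b)[0]
    return x.reshape(h, w)
```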

  7. Local Color Evidence • For a Lambertian surface and simple illumination conditions, shading only affects the intensity of the color of a surface • Notice that the chromaticity of each face is the same • Any change in chromaticity must be a reflectance change
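A small sketch of this color cue: shading scales all three channels equally, so it leaves chromaticity unchanged, while a chromaticity change signals a reflectance change. The threshold value is an illustrative assumption:

```python
import numpy as np

def chromaticity(rgb, eps=1e-8):
    """Normalize an RGB vector so that a pure intensity scale (shading)
    cancels out: c = rgb / (r + g + b)."""
    return rgb / (rgb.sum() + eps)

def color_evidence(pixel_a, pixel_b, chroma_thresh=0.02):
    """Classify the change between two neighboring pixels using color alone.
    The threshold is illustrative, not the value used in the paper."""
    if np.linalg.norm(chromaticity(pixel_a) - chromaticity(pixel_b)) > chroma_thresh:
        return "reflectance"   # chromaticity changed -> must be a paint change
    return "ambiguous"         # pure intensity change: shading OR reflectance

# Same chromaticity, different intensity -> 'ambiguous'.
print(color_evidence(np.array([0.8, 0.4, 0.2]), np.array([0.4, 0.2, 0.1])))
```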

  8. Result using only color information

  9. Results Using Only Color • Some changes are ambiguous • Intensity changes could be caused by shading or reflectance • So we label them as “ambiguous” • Need more information (Figure: input, reflectance, shading)

  10. Utilizing local intensity patterns • The painted eye and the ripples of the fabric have very different appearances • Can learn classifiers which take advantage of these differences

  11. Using Local Image Patterns • Create a set of weak classifiers that use a small image patch to classify each derivative • The classification of a derivative: |F * I_p| > T, where F is a filter applied to the image patch I_p and T is a threshold
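A minimal sketch of one such weak classifier; the filter and threshold passed in are assumed to come from the candidate pool described on the later slides:

```python
import numpy as np
from scipy.ndimage import correlate

def weak_classify(patch, filt, threshold):
    """One weak classifier h_i: vote 'reflectance' (+1) if the rectified
    filter response at the patch center exceeds the threshold,
    otherwise vote 'shading' (-1)."""
    response = correlate(patch, filt, mode="nearest")
    center = tuple(s // 2 for s in patch.shape)
    return 1 if abs(response[center]) > threshold else -1
```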

  12. From Weak to Strong Classifiers: Boosting • Individually these weak classifiers aren’t very good • Can be combined into a single strong classifier • Call the classification from a weak classifier hi(x) • Each hi(x) votes for the classification of x (-1 or 1) • Those votes are weighted and combined to produce a final classification
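A minimal sketch of that weighted vote; the weak classifiers and their weights alphas are assumed to come from training (next slide):

```python
import numpy as np

def strong_classify(x, weak_classifiers, alphas):
    """Combine the weak votes h_i(x) in {-1, +1} into one strong
    classification: the sign of the alpha-weighted sum of the votes."""
    votes = np.array([h(x) for h in weak_classifiers])
    return 1 if np.dot(alphas, votes) > 0 else -1   # +1 reflectance, -1 shading
```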

  13. Learning the Classifiers • The weak classifiers, hi(x), and the weights α are chosen using the AdaBoost algorithm (sketched below) • Train on synthetic images • Assume the light direction is from the right • Filters for the candidate weak classifiers are built by cascading two filters drawn from these 4 categories: • Multiple orientations of 1st-derivative-of-Gaussian filters • Multiple orientations of 2nd-derivative-of-Gaussian filters • Several widths of Gaussian filters • An impulse
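A compact sketch of the AdaBoost loop described here; the candidate pool, training patches X, and labels y (+1 reflectance change, −1 shading) are assumed to come from the synthetic training images, and each candidate is a callable mapping a patch to ±1:

```python
import numpy as np

def adaboost(candidates, X, y, n_rounds=10):
    """Greedy AdaBoost: each round picks the candidate weak classifier with
    the lowest weighted error, assigns it a vote weight alpha, and
    re-weights the training examples it got wrong."""
    w = np.full(len(X), 1.0 / len(X))          # per-example weights
    chosen, alphas = [], []
    for _ in range(n_rounds):
        preds = [np.array([h(x) for x in X]) for h in candidates]
        errors = [np.sum(w * (p != y)) for p in preds]
        best = int(np.argmin(errors))
        err = max(errors[best], 1e-10)
        alpha = 0.5 * np.log((1.0 - err) / err)
        # Examples the chosen classifier got wrong gain weight.
        w *= np.exp(-alpha * y * preds[best])
        w /= w.sum()
        chosen.append(candidates[best])
        alphas.append(alpha)
    return chosen, np.array(alphas)
```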

  14. Training Set (Figure: shading training set; reflectance-change training set)

  15. Classifiers Chosen • These are the filters chosen for classifying vertical derivatives when the illumination comes from the top of the image. • Each filter corresponds to one hi(x)

  16. Examining These Classifiers • The learned rules for all classifiers (except classifier 9) are: if the rectified filter response is above a threshold, vote for reflectance. • Classifier 1 (the best-performing single filter to apply) is an empirical justification for the Retinex algorithm: treat small derivative values as shading. • The other classifiers look for image structure oriented perpendicular to the lighting direction as evidence of a reflectance change.
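A one-line sketch of that Retinex-style rule, with an illustrative threshold rather than the learned value:

```python
def retinex_rule(derivative, threshold=0.05):
    """Retinex-style heuristic (classifier 1): small derivative magnitudes
    are attributed to shading, large ones to a reflectance change."""
    return "reflectance" if abs(derivative) > threshold else "shading"
```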

  17. Results Using Only Gray-scale Information (Figure: input image, reflectance image, shading image)

  18. Using Both Color and Gray-Scale Information (Figure: compared with the results without considering gray-scale)

  19. Some Areas of the Image Are Locally Ambiguous • Is the change here better explained as reflectance or as shading? (Figure: input, reflectance, shading)

  20. Propagating Information • Can disambiguate areas by propagating information from reliable areas of the image into ambiguous areas of the image

  21. Why Propagate? • One of these patches is an albedo change and the other is shading

  22. Can solve the problem with more information

  23. Propagating Labels Even using only a patch of the image, I can see that these corners are albedo changes

  24. Propagating Labels

  25. Propagating Information • Extend the probability model to consider the relationship between neighboring derivatives • β controls how necessary it is for two nodes to have the same label • Use Generalized Belief Propagation to infer labels [Yedidia et al. 2000]

  26. Brief Intro to MRFs • Imagine four random variables • Assume A, given B and C, is conditionally independent of D • If I know B and C, I know everything there is to know about A • D doesn’t matter (Figure: nodes A, B, C, D)

  27. Brief Intro to MRFs • I can draw edges between nodes to denote dependency relationships • Given the nodes that A is connected to, it is conditionally independent of everything else • Only the nodes that A is connected to matter for inferring its value (Figure: nodes A, B, C, D)

  28. Brief Intro to MRFs • Each edge is associated with a compatibility function (clique potential) • For discrete variables, this can be thought of as a matrix relating the probability of each combination of the possible values of each variable (Figure: nodes A, B, C, D)
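A small sketch of what such a discrete compatibility looks like for one edge between two binary-labeled nodes; the entries below are illustrative values, not learned ones:

```python
import numpy as np

# Compatibility (clique potential) for one edge between two binary nodes,
# with labels 0 = shading and 1 = reflectance. Entry [a, b] scores the
# joint assignment (label_A = a, label_B = b).
psi = np.array([[0.9, 0.1],
                [0.1, 0.9]])

def edge_score(label_a, label_b):
    """Unnormalized compatibility of a joint assignment on this edge."""
    return psi[label_a, label_b]

print(edge_score(0, 0), edge_score(0, 1))  # matching labels score higher here
```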

  29. Propagating Information • Extend the probability model to consider the relationship between neighboring derivatives • β controls how necessary it is for two nodes to have the same label • Use Generalized Belief Propagation to infer labels [Yedidia et al. 2000] (Figure: classification of a derivative)

  30. Setting Compatibilities • All compatibilities have the form [[β, 1−β], [1−β, β]], with β between 0.5 and 1.0 • Assume derivatives along image contours should have the same label • Set β close to 1 when the derivatives are along a contour • Set β to 0.5 if no contour is present • β is computed from a linear function of the image gradient’s magnitude and orientation
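A sketch of building that compatibility from β; the mapping from the gradient to β below is only a stand-in, since the slide specifies a linear function of gradient magnitude and orientation without giving its coefficients:

```python
import numpy as np

def compatibility(beta):
    """2x2 compatibility [[beta, 1-beta], [1-beta, beta]]: beta = 0.5 expresses
    no preference, while beta near 1 strongly favors the two neighboring
    derivatives taking the same label."""
    return np.array([[beta, 1.0 - beta],
                     [1.0 - beta, beta]])

def beta_from_gradient(grad_mag, along_contour):
    """Illustrative stand-in: push beta toward 1 along a contour,
    leave it at 0.5 (no preference) otherwise."""
    if not along_contour:
        return 0.5
    return 0.5 + 0.5 * min(float(grad_mag), 1.0)

psi = compatibility(beta_from_gradient(0.8, along_contour=True))  # strong agreement
```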

  31. Improvements Using Propagation (Figure: input image; reflectance image without propagation; reflectance image with propagation)

  32. Clothing catalog image (Figure: original, from the L.L. Bean catalog; reflectance; shading)

  33. Sign at train crossing

  34. Separated images (Figure: original, shading, reflectance) • Note: the color cue was omitted for this processing

  35. Finally, returning to our first example… (Figure: input, ideal shading image, ideal paint image) • Algorithm output • Note: occluding edges are labeled as reflectance

  36. Fundamental Limitation • Hand-Wired • The propagation step is designed around the heuristic that information should be propagated along edges in the image • We chose this heuristic and hand-wired it. • Ideally, the model should be as “data-driven” as possible. • The machine should learn how to propagate

  37. But this is hard! • Maximum likelihood methods for learning to set the clique potential functions of a discrete-valued MRF are basically intractable • There are approximate techniques [Yedidia et al. 2000; Wainwright et al. 2003] • Convergence and numerical instability issues • There are also fundamental problems with a label-based system

  38. Problem #1 with Classification • Not appropriate when shading and albedo changes coincide

  39. Problem #2 with Classification • Not natural for estimating other types of intrinsic characteristics • Example: Observation = Scene Image + Noise Image

  40. For the rest of this talk • I will present a system that overcomes these limitations by: • Treating the problem as a continuous estimation problem • Learning how to propagate information from the training data

  41. Basic Strategy for Estimating Images • Previous pipeline: Observation → classify every derivative as shading or albedo → estimates of the horizontal and vertical derivatives → pseudo-inverse → estimate of the albedo image

  42. Basic Strategy for Estimating Images • New pipeline: Observation → make continuous estimates of the derivatives → estimates of the horizontal and vertical derivatives → pseudo-inverse → estimate of the shading image

  43. Basic Idea • Train an estimator to minimize the squared-error in the predictions of the derivatives • Why is this hard? • Every pixel = 1 Training Example • One 110 x 110 image = ~10,000 Training Examples • Have to produce estimates at every pixel

  44. Estimating Instead of Classifying • Possible “off-the-shelf” estimators: • Kernel Density Regression • Can use a kd-tree to make it fast • Benefits diminish as the dimensionality of the observations increases • Nearest-Neighbor Regression • Fast using Locality-Sensitive Hashing • Would like an estimator that summarizes the data • Support Vector Regression • We use a Mixture of Experts estimator instead

  45. A simple 1D Example • Want to predict y from the observation o • Each ‘x’ in the plot is a training example (Plot: y versus o)

  46. Mixture of Experts [Jacobs et al., 1991] • “Around this point, use this linear function to predict y from o” (Plot: y versus o)
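A minimal sketch of the mixture-of-experts prediction for this 1D example; the Gaussian-style gating and its bandwidth are illustrative assumptions rather than the exact gating network of Jacobs et al.:

```python
import numpy as np

def mixture_of_experts_predict(o, centers, slopes, intercepts, bandwidth=1.0):
    """Predict y from observation o: each expert i contributes a linear
    prediction slopes[i] * o + intercepts[i], weighted by a soft gate that
    favors experts whose center is close to o."""
    logits = -((o - centers) ** 2) / (2.0 * bandwidth ** 2)
    gates = np.exp(logits - logits.max())
    gates /= gates.sum()
    return np.sum(gates * (slopes * o + intercepts))

# Toy usage: one expert for small o, one for large o.
centers = np.array([0.0, 5.0])
slopes = np.array([1.0, -0.5])
intercepts = np.array([0.0, 7.5])
print(mixture_of_experts_predict(1.0, centers, slopes, intercepts))
```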

  47. Same idea for estimating shading Choose image patches to serve as experts for predicting the shading derivative from similar observations

  48. How do we choose the experts? • Choosing the experts (and their number) is the hard part • We don’t know how many experts we need • Large training set → try a greedy, stagewise algorithm for adding experts (sketched below) • Inspired by the AdaBoost algorithm (Freund and Schapire) for learning a classifier • Performs surprisingly well
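A sketch of that greedy, stagewise selection: repeatedly add the candidate expert that most reduces the training squared error of the current mixture. Here predict_with is an assumed helper that runs the mixture-of-experts predictor for a given expert set, and the fitting details are simplified relative to the actual system:

```python
import numpy as np

def greedy_add_experts(candidates, predict_with, X, y, n_experts=5):
    """Stagewise selection: at each step keep the candidate that, added to
    the current expert set, yields the lowest mean squared error on (X, y)."""
    chosen_idx = []
    for _ in range(n_experts):
        best_idx, best_err = None, np.inf
        for i, cand in enumerate(candidates):
            if i in chosen_idx:
                continue
            trial = [candidates[j] for j in chosen_idx] + [cand]
            pred = np.array([predict_with(trial, x) for x in X])
            err = np.mean((pred - y) ** 2)
            if err < best_err:
                best_idx, best_err = i, err
        if best_idx is None:
            break
        chosen_idx.append(best_idx)
    return [candidates[i] for i in chosen_idx]
```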
