
High Dynamic Range Imaging: Spatially Varying Pixel Exposures Shree K. Nayar, Tomoo Mitsunaga



Presentation Transcript


  1. High Dynamic Range Imaging: Spatially Varying Pixel Exposures. Shree K. Nayar, Tomoo Mitsunaga. CPSC 643 Presentation #2. Brien Flewelling, March 4th, 2009

  2. Overview • HDR Imaging • Problem • Motivation • Methods • Related Work • Where it Started • Sequential Images • Multiple Detectors, Adaptive Pixel Elements

  3. Overview • HDR Imaging using Spatially Varying Pixel Exposure • The method • Image Acquisition • Image Reconstruction • Experimental Results • Conclusions and Future Work

  4. High Dynamic Range Imaging: The Idea • Perceptible intensity values span a range far greater than a single image can sample. • Various techniques estimate the camera response function in order to accurately allocate grayscale bits to energy levels in the scene.

  5. Combining Information from Overexposure and Underexposure • Consider the projection of scene illumination as a function of energy rates. • Brighter regions have a higher probability of being overexposed, and darker regions of being underexposed, in an arbitrary snapshot. It is the combination of various sampling techniques that allows us to display these regions together.
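The combination idea above can be sketched as a weighted average of exposure-normalized pixel values. This is a minimal sketch, not the paper's method: the hat-shaped weight, the linear-response assumption, and the sample values are all illustrative assumptions.

```python
def weight(z, z_min=0, z_max=255):
    # Hat weight: trust mid-range values, distrust values near the
    # dark (underexposed) and saturated (overexposed) ends.
    return min(z - z_min, z_max - z)

def merge_radiance(pixels, exposures):
    # Weighted average of (pixel / exposure) over an exposure stack,
    # assuming a linear camera response for simplicity.
    num = sum(weight(z) * (z / e) for z, e in zip(pixels, exposures))
    den = sum(weight(z) for z in pixels)
    return num / den if den > 0 else 0.0

# Example: the same scene point seen at two (hypothetical) exposures.
radiance = merge_radiance([200, 40], [1 / 30, 1 / 250])
```

A well-exposed sample dominates the estimate, while samples near 0 or 255 contribute almost nothing, which is exactly why bright and dark regions can be recovered together.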

  6. Motivation: Why Do We Care? • High dynamic range images produce scene representations much closer to what the human eye sees. • Artistic purposes • Vision methods need good “landmarks”; if these fall in over- or underexposed regions, this can be problematic. • In tracking, a region could be overexposed or underexposed from frame to frame.

  7. Methods: How to Extract HDRI info • Sequential Exposures: • Multiple Images at Various Shutter speeds or Iris Settings • Solve a subset of pixel correspondences as an array of linear systems • Solve for the camera response function • Map the results to the image
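As a toy version of the sequential-exposure steps above, the sketch below recovers a camera response from pixel correspondences across two images with a known exposure ratio. The actual methods fit general (polynomial or tabulated) responses; here the response is restricted to a one-parameter gamma model z = 255·x^g, and the fit is a brute-force search, both purely for illustration.

```python
def fit_gamma(pairs, ratio, candidates=None):
    # Assume a gamma response z = 255 * x**g, so the inverse response is
    # x = (z / 255) ** (1 / g).  For corresponding pixels (z1, z2) whose
    # true irradiances differ by a known exposure ratio, the recovered g
    # should make the inverse-mapped values differ by that same ratio.
    if candidates is None:
        candidates = [0.5 + 0.01 * i for i in range(300)]  # g in [0.5, 3.49]

    def cost(g):
        err = 0.0
        for z1, z2 in pairs:  # z1: longer exposure, z2: shorter (both > 0)
            x1 = (z1 / 255.0) ** (1.0 / g)
            x2 = (z2 / 255.0) ** (1.0 / g)
            err += (x1 / x2 - ratio) ** 2
        return err

    return min(candidates, key=cost)
```

With synthetic correspondences generated from g = 2 and exposure ratio 2, the search recovers g close to 2; mapping each pixel through the recovered inverse response and dividing by its exposure then yields the radiance map of the last step.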

  8. Methods: How to Extract HDRI Info • Multiple Image Detectors • Use optical elements to generate multiple images sampled by different imagers • The images may have varying sensitivities, resolutions, or exposure times • More expensive, but handles moving objects better.

  9. Multiple Sensor Elements in Each Pixel • Reduces resolution by a factor of 2 • Simple combination of neighboring elements with different potential well depths • Overall a disregarded approach, since the sensor cost is greater and the performance gain is not very high.

  10. Adaptive Pixel Exposure • Vary each pixel's sensitivity as a function of the time its potential well takes to fill • Feedback system • An interesting and promising approach, but: • Expensive for large-scale chip designs • Very sensitive to motion or blur effects in low-light scenes

  11. Related Work: Where It Started • [Blackwell, 1946] H. R. Blackwell. Contrast thresholds of the human eye. Journal of the Optical Society of America, 36:624–643, 1946. • Blackwell studies the variations in perceptible illumination that the human eye detects in a scene. • Many patents on HDR CCD sensors in the 1980s • Sequential methods for HDR image generation • Early 1990s

  12. Related Work: Sequential Exposures • [Azuma and Morimura, 1996], [Saito, 1995], [Konishi et al., 1995], [Morimura, 1993], [Ikeda, 1998], [Takahashi et al., 1997], [Burt and Kolczynski, 1993], [Madden, 1993], [Tsai, 1994], [Mann and Picard, 1995], [Debevec and Malik, 1997] and [Mitsunaga and Nayar, 1999] • The final paper extends the estimation to include the radiometric response function of the camera

  13. Related Work: Hardware Solutions • Multiple Imagers • [Doi et al., 1986], [Saito, 1995], [Saito, 1996], [Kimura, 1998], [Ikeda, 1998] • Adaptive Pixel Elements • [Street, 1998], [Handy, 1986], [Wen, 1989], [Hamazaki, 1996], [Murakoshi, 1994] and [Konishi et al., 1995] • [Brajovic and Kanade, 1996]

  14. Spatially Varying Pixel Exposure • The SVE (Spatially Varying Exposure) image • Let a 2x2 array of pixels be subject to exposures e0, e1, e2, e3 • Let this array be repeated in a mask for the entire image
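The repeating 2x2 mask can be sketched directly; the concrete exposure values below (ratios of 2) are an assumption for illustration, not the values used in the paper.

```python
def sve_exposure(i, j, e=(1.0, 2.0, 4.0, 8.0)):
    # Exposure assigned to pixel (i, j) by a repeating 2x2 mask:
    #   e0 e1
    #   e2 e3
    return e[2 * (i % 2) + (j % 2)]

def sve_mask(rows, cols, e=(1.0, 2.0, 4.0, 8.0)):
    # Full per-pixel exposure map for an image of the given size.
    return [[sve_exposure(i, j, e) for j in range(cols)] for i in range(rows)]
```

Every pixel thus has, within its immediate neighborhood, samples of the same scene region at all four exposure levels.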

  15. How Does this Increase the DR?

  16. How Many Grays? (846) • K = number of exposure levels: 4 • q = number of quantization levels per pixel: 256 • R = round-off function • ek = exposure level k
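The count can be reproduced by enumerating, for each exposure, the q quantized levels normalized by that exposure and taking the union: distinct normalized levels are distinct measurable grays. The exposure ratios below are an assumption for illustration; the slide's figure of 846 depends on the exposures actually used in the paper.

```python
def unique_grays(exposures, q=256):
    # For exposure e, a pixel reports one of q levels m = 0..q-1; the
    # scene quantity it encodes is m / e.  Distinct values of m / e over
    # all exposures are the distinct gray levels the SVE can represent.
    levels = set()
    for e in exposures:
        for m in range(q):
            levels.add(round(m / e, 9))  # round to suppress float noise
    return len(levels)
```

With power-of-two ratios, for example, K = 4 and q = 256 give 640 distinct levels rather than K*q = 1024, because many levels coincide across exposures.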

  17. Spatial Resolution Reduction • Not a reduction by a factor of 2! • Low-exposure pixels could be noise-dominated in dim regions • High-exposure pixels could be saturated in bright regions • In general, the spatial resolution is not significantly reduced.

  18. Image Reconstruction by Aggregation • Simple averaging • Convolution with a 2x2 box filter • Results in a piecewise-linear response that resembles a gamma function with gamma > 1 • Overall produces good HDR results except at sharp edges
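The aggregation step amounts to a sliding 2x2 box filter over the exposure-normalized image. A minimal sketch (valid output region only, so the result is one row and one column smaller):

```python
def box2x2(img):
    # Average each pixel with its right, down, and diagonal neighbors;
    # img is a list of equal-length rows of numbers.
    rows, cols = len(img), len(img[0])
    return [[(img[i][j] + img[i][j + 1] + img[i + 1][j] + img[i + 1][j + 1]) / 4.0
             for j in range(cols - 1)] for i in range(rows - 1)]
```

Since each 2x2 window spans all four exposure levels of the mask, every output value blends information from the full exposure range, which is what produces the high-dynamic-range result (and the blurring at sharp edges).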

  19. Image Reconstruction by Interpolation • If sharp features are important, the image brightness values M(i,j) are scaled by their exposures to produce M’(i,j). • Remove all underexposed and saturated pixels. • Determine the ‘off-grid’ points from the remaining ‘on-grid’ points by interpolation. A cubic interpolation kernel is used in the least-squares estimation of the off-grid points.

  20. Solving for Off-Grid Values by the Interpolation Kernel • M: 16x1 on-grid brightness values • F: 16x49 cubic interpolation elements • Mo: 16x1 off-grid brightness values
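Since the kernel equation itself did not survive the transcript, here is a 1-D sketch of the ingredient: a cubic convolution kernel (the standard Keys form with a = -0.5, which is an assumption) used to estimate one off-grid value from its four nearest on-grid neighbors. The paper stacks many such relations into the least-squares system relating M, F, and Mo.

```python
def cubic_kernel(s, a=-0.5):
    # Keys cubic convolution kernel; a = -0.5 is the common choice.
    s = abs(s)
    if s < 1:
        return (a + 2) * s**3 - (a + 3) * s**2 + 1
    if s < 2:
        return a * (s**3 - 5 * s**2 + 8 * s - 4)
    return 0.0

def interp_offgrid(on_grid, x):
    # Estimate the value at fractional position x (1 <= x <= len - 2)
    # from the four nearest on-grid samples.
    i = int(x)
    return sum(on_grid[i + k] * cubic_kernel(x - (i + k)) for k in (-1, 0, 1, 2))
```

In the 2-D case each row of F holds these kernel weights, so solving the system in a least-squares sense fills in the off-grid points discarded for under- or overexposure.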

  21. Experimental Results - Simulation

  22. Results

  23. Future Work • Prototype was still being developed • Simulation proved useful in the estimation of the nonlinear response function, can it be used to estimate properties of scene objects? • Can this be used to estimate/handle motion blur for moving objects? • What is an optimal pattern for variation of pixel exposures?
