
Motion-corrected image denoising for digital photography


Presentation Transcript


  1. Brendan Duncan Motion-corrected image denoising for digital photography

  2. Noise in digital cameras • P – number of photons • Qe – quantum efficiency of the sensor • t – exposure time • D – dark current noise • Nr – read noise
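The equation shown on this slide does not survive in the transcript. The standard CCD signal-to-noise model built from the variables listed above, which is presumably what the slide displayed, is:

```latex
% Standard CCD signal-to-noise model (a reconstruction; the slide's original
% equation image is not preserved in the transcript). Signal is the number of
% collected photoelectrons; noise combines shot noise, dark current, and read noise.
\[
  \mathrm{SNR} \;=\; \frac{P\,Q_e\,t}{\sqrt{P\,Q_e\,t \;+\; D\,t \;+\; N_r^{\,2}}}
\]
```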

  3. Averaging multiple exposures to reduce noise • Averaging multiple frames effectively increases the total exposure time, which increases the SNR • When the camera is handheld, averaging several short exposures is better than a single long exposure, which would result in motion blur • This requires aligning the images prior to averaging
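A minimal sketch (not from the talk) of the statistical point: averaging N aligned frames with independent noise shrinks the noise standard deviation by roughly √N. The array shapes and noise level are made up for the example.

```python
# Minimal sketch (not from the talk): averaging N aligned noisy frames
# shrinks the noise standard deviation by roughly sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
clean = np.full((100, 100), 0.5)                      # hypothetical noise-free scene
frames = [clean + rng.normal(0.0, 0.1, clean.shape) for _ in range(8)]

single_noise = np.std(frames[0] - clean)              # ~0.10
averaged = np.mean(frames, axis=0)
average_noise = np.std(averaged - clean)              # ~0.10 / sqrt(8) ≈ 0.035

print(f"one frame: {single_noise:.3f}   eight-frame average: {average_noise:.3f}")
```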

  4. Single exposure

  5. Average of multiple exposures

  6. Combining images without motion blur or ghosting: Previous solutions • Removing moving objects • O. Gallo, et al. Artifact-free high dynamic range imaging. ICCP, 2009. • This is a different use case: we want to preserve any moving object present in the base image

  7. Combining images without motion blur or ghosting: Previous solutions • Optical flow-based methods • B. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. Image Understanding Workshop, 1981. • A. Ogale, et al. Motion segmentation using occlusions. Pattern Analysis and Machine Intelligence, 2005. • Wong and Spetsakis. Motion segmentation and tracking. International Conference on Vision Interface, 2002. • These methods require a high frame rate

  8. Combining images without motion blur or ghosting: Previous solutions • Video denoising algorithms that extend existing image denoising algorithms • Seo and Milanfar. Video denoising using higher order optimal space-time adaptation. International Conference on Acoustics, Speech and Signal Processing, 2008. • Protter and Elad. Image denoising via learned dictionaries and sparse representation. Transactions on Image Processing, 2009. • These methods treat video as a single 3D volume • They cannot extend regions of similar pixels along the time dimension where there is motion, because of the low frame rate

  9. Combining images without motion blur or ghosting: Previous solutions • Adaptive Spatio-Temporal Accumulation (ASTA) Filter • Bennett and McMillan. Video enhancement using per-pixel virtual exposures. SIGGRAPH, 2005. • Uses spatial filtering, which causes a loss of texture and detail

  10. Proposed solution • Align both foreground and background objects using SIFT features and RANSAC instead of optical flow techniques • Use a weighted average that will reduce the contribution from noisy pixels and from unmatched pixels

  11. Algorithm overview • SIFT feature detection • Calculate perspective fit using RANSAC • Apply perspective warp • Calculate RANSAC fit on outliers • Apply new perspective warp • Repeat from step 4 as needed • Calculate denoised estimate • Perform weighted average

  12. SIFT feature detection • Detect SIFT features in each of the images • D. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 2004. • The features of the base image are inserted into a k-d tree • J. Friedman, et al. An algorithm for finding best matches in logarithmic expected time. ACM Transactions on Mathematical Software, 1977. • Use a best-bin-first search to find shared features across the images, as in the sketch below
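A hedged sketch of this step using OpenCV (not the author's implementation): SIFT keypoints are detected in both frames, the base frame's descriptors are indexed with FLANN's k-d trees, and matches are pruned with Lowe's ratio test. FLANN's randomized k-d tree search approximates the best-bin-first strategy mentioned above; the ratio threshold of 0.75 is an assumed parameter.

```python
# Hedged sketch (OpenCV, not the author's code): SIFT detection plus k-d tree
# matching against the base image's descriptors.
import cv2
import numpy as np

def match_features(base_gray, other_gray, ratio=0.75):
    sift = cv2.SIFT_create()
    kp_base, des_base = sift.detectAndCompute(base_gray, None)
    kp_other, des_other = sift.detectAndCompute(other_gray, None)

    # FLANN k-d tree index over the base descriptors (approximate best-bin-first).
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=4), dict(checks=50))
    knn = flann.knnMatch(des_other, des_base, k=2)

    # Lowe's ratio test keeps only distinctive matches.
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    pts_other = np.float32([kp_other[m.queryIdx].pt for m in good])
    pts_base = np.float32([kp_base[m.trainIdx].pt for m in good])
    return pts_other, pts_base
```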

  13. Calculate perspective fit using RANSAC • RANSAC randomly chooses a small subset of shared features to calculate a perspective warp that matches the features • Fischler and Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 1981. • It chooses the perspective warp with the lowest error across all features after a predetermined number of iterations
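A hedged sketch of the fit using OpenCV's RANSAC homography estimator; the 3-pixel reprojection threshold is an illustrative choice, not a value from the talk.

```python
# Hedged sketch: estimate a perspective warp with RANSAC from the matched
# points. The reprojection threshold is an assumed parameter.
import cv2
import numpy as np

def ransac_perspective_fit(pts_other, pts_base, reproj_thresh=3.0):
    H, mask = cv2.findHomography(pts_other, pts_base, cv2.RANSAC, reproj_thresh)
    inliers = mask.ravel().astype(bool)   # True where a match agrees with the warp
    return H, inliers
```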

  14. Perspective warp • Equation for perspective warp • RANSAC finds the least-squares solution for each random sample
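The warp equation itself is not preserved in the transcript; the standard perspective warp (a 3×3 homography in homogeneous coordinates), which is the model RANSAC fits here, reads:

```latex
% Standard perspective warp (homography) in homogeneous coordinates;
% reconstructed, since the slide's equation image is not in the transcript.
\[
  \begin{pmatrix} x' \\ y' \\ w' \end{pmatrix}
  =
  \begin{pmatrix}
    h_{11} & h_{12} & h_{13} \\
    h_{21} & h_{22} & h_{23} \\
    h_{31} & h_{32} & h_{33}
  \end{pmatrix}
  \begin{pmatrix} x \\ y \\ 1 \end{pmatrix},
  \qquad
  (x, y) \;\mapsto\; \left( \tfrac{x'}{w'},\; \tfrac{y'}{w'} \right)
\]
```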

  15. Apply perspective warp (background)

  16. Calculate RANSAC fit on outliers • Previous inliers are removed from the feature set, and alignment is performed on the previously unmatched features • If background objects were aligned in the first RANSAC fit, now foreground objects will be aligned
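A hedged sketch of this second pass (not the author's code): the inliers of the first homography are discarded and RANSAC is re-run on whatever matches remain, so a differently moving region, typically the foreground, gets its own warp. The minimum-match count and reprojection threshold are assumed parameters.

```python
# Hedged sketch: refit RANSAC on the matches the first warp did not explain.
import cv2
import numpy as np

def fit_remaining_motion(pts_other, pts_base, first_inliers, min_matches=10):
    keep = ~first_inliers                              # outliers of the first fit
    if np.count_nonzero(keep) < min_matches:
        return None, None                              # nothing left to align
    H2, mask = cv2.findHomography(pts_other[keep], pts_base[keep],
                                  cv2.RANSAC, 3.0)
    return H2, mask.ravel().astype(bool)
```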

  17. Apply perspective warp (foreground)

  18. Repeat if needed • Continue calculating RANSAC fits while unmatched features remain • Perform a perspective warp after each RANSAC stage • This accounts for multiple independently moving objects in the scene

  19. Goals for weighted average • To improve upon the simple average, we have two goals: • Assign negligible weight to unmatched areas • Reduce the contribution from noisy pixels

  20. Use bilateral filter to get estimate • The bilateral filter is a nonlinear filter that • Does not blur across edges • Can be used to denoise images • It therefore roughly meets our goals, except that it performs spatial averaging, which reduces texture and detail • Here it is used only as an estimate to help calculate the weighted average
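A hedged usage example (not the author's code): OpenCV's bilateral filter can produce this estimate from the base frame; the file name and parameter values below are illustrative only.

```python
# Hedged example: bilateral-filtered estimate of the base frame with OpenCV.
# The file name and parameter values are illustrative, not from the talk.
import cv2

base = cv2.imread("base.jpg")
# d = neighborhood diameter, sigmaColor ~ range sigma, sigmaSpace ~ spatial sigma
estimate = cv2.bilateralFilter(base, d=9, sigmaColor=30, sigmaSpace=7)
```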

  21. Bilateral filter • p, q – (x, y) pixel coordinates • I – three-channel image • Sp – set of pixels surrounding Ip • σs – spatial standard deviation (user defined) • σr – range standard deviation (user defined) • Wp – total weight over all pixels in Sp • Tomasi and Manduchi. Bilateral filtering for gray and color images. International Conference on Computer Vision, 1998.
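The filter equation is not preserved in the transcript; the standard Tomasi–Manduchi bilateral filter, written with the symbols defined above (G_σ denotes a Gaussian with standard deviation σ), is:

```latex
% Standard bilateral filter (Tomasi and Manduchi, 1998), written with the
% slide's symbols; reconstructed, since the equation image is not preserved.
\[
  BF[I]_p \;=\; \frac{1}{W_p} \sum_{q \in S_p}
    G_{\sigma_s}\!\bigl(\lVert p - q \rVert\bigr)\,
    G_{\sigma_r}\!\bigl(\lvert I_p - I_q \rvert\bigr)\, I_q,
  \qquad
  W_p \;=\; \sum_{q \in S_p}
    G_{\sigma_s}\!\bigl(\lVert p - q \rVert\bigr)\,
    G_{\sigma_r}\!\bigl(\lvert I_p - I_q \rvert\bigr)
\]
```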

  22. Proposed weighted average • I0 – base image • I1 … In – warped images • R – result image • The bilateral-filtered estimate is used to determine the expected mean of a normal distribution and does not itself contribute to the average
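A hedged sketch consistent with the description above: each pixel of the base and warped frames is weighted by how well it agrees with the bilateral-filtered estimate, here with a Gaussian falloff, so unmatched regions and noisy outliers contribute little. The Gaussian form and the value of sigma are assumptions; the exact weighting used in the talk is not given in the transcript.

```python
# Hedged sketch of the weighted average: the bilateral estimate supplies the
# expected mean, and pixels far from it get small weights. The Gaussian weight
# and sigma value are assumptions, not the author's exact formulation.
import numpy as np

def weighted_average(base, warped, estimate, sigma=10.0):
    """base: I0, warped: [I1 ... In] aligned frames, estimate: bilateral filter of I0."""
    stack = np.stack([base] + list(warped)).astype(np.float32)     # (n+1, H, W, C)
    diff = stack - estimate.astype(np.float32)                     # deviation from expected mean
    weights = np.exp(-(diff ** 2) / (2.0 * sigma ** 2))            # per-pixel Gaussian weights
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)     # result image R
```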

  23. Example of weights for an image • black – low weight • white – high weight

  24. Result of algorithm

  25. Close-up comparison • Above – original base image • Right – processed image

  26. Parallax • Parallax across images makes a simple average impossible

  27. Parallax • Above – original base image • Right – processed image

  28. Contribution from noisy pixels is reduced • Simple average of 3 images • Weighted average

  29. Texture is preserved • Above – bilateral filter • Right – result of algorithm (same σr)

  30. Questions?
