
Image/Video Deblurring using a Hybrid Camera


Presentation Transcript


  1. Image/Video Deblurring using a Hybrid Camera. Yu-Wing Tai, Hao Du, Michael S. Brown, Stephen Lin. CVPR'08 (longer version in revision at IEEE Trans. PAMI). Google search: Video Deblurring Spatially Varying Deblur. Project page: http://www.comp.nus.edu.sg/~yuwing

  2. Image Deblurring: The Problem • Given a motion-blurred image, we want to recover a sharp image. [Figure: the input image is the desired output convolved with a motion blur kernel, the point spread function (PSF).]
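
To make the formation model concrete, here is a minimal Python sketch (the editor's illustration, not code from the presentation) that synthesizes a motion-blurred observation from a sharp image; `motion_blur_kernel` and its parameters are hypothetical helpers.

```python
# A minimal sketch (not the presentation's code) of the blur formation model:
# the observed image B is the sharp image I convolved with a PSF K, plus noise.
import numpy as np
from scipy.ndimage import convolve, rotate

def motion_blur_kernel(length=9, angle_deg=0.0):
    """Hypothetical helper: a normalized linear motion-blur PSF."""
    K = np.zeros((length, length))
    K[length // 2, :] = 1.0                        # horizontal streak
    K = np.clip(rotate(K, angle_deg, reshape=False), 0.0, None)  # rotate to motion direction
    return K / K.sum()

def blur(I, K, noise_sigma=0.0):
    """Simulate B = I convolved with K, plus noise, for a grayscale float image I."""
    B = convolve(I, K, mode='reflect')
    if noise_sigma > 0:
        B = B + np.random.normal(0.0, noise_sigma, I.shape)
    return B
```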

  3. Why is this a difficult problem? Even when the blur kernel is known, deconvolution is an ill-posed, under-constrained problem: different inputs can produce the same (or very similar) output after convolution.
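
A small constructed example of this ambiguity (not from the slides): an alternating ±1 pattern lies in the null space of a 2-tap averaging kernel, so two different signals produce identical blurred observations.

```python
# Two different signals, one blurred observation: deconvolution with a known
# kernel cannot tell them apart.
import numpy as np

K = np.array([0.5, 0.5])                                   # 2-tap box blur
I1 = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0])
I2 = I1 + np.array([1, -1, 1, -1, 1, -1], dtype=float)     # add a +1/-1 pattern

B1 = np.convolve(I1, K, mode='valid')
B2 = np.convolve(I2, K, mode='valid')
print(np.allclose(B1, B2))                                 # True: identical observations
```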

  4. Blind deconvolution problem • The blur kernel is also unknown, so both the sharp image and the kernel must be estimated from the blurred input.

  5. Two causes of motion blur • Hand shaking (camera “ego motion”): the blur is the same across the image. • Object motion: the blur differs across the image.

  6. Properties of motion blur • Hand shaking: the PSF is globally the same for the whole image; the observation is the whole image; deconvolution is a global process. Relatively easy: well studied, and some current works produce very good results. • Object motion: the PSF varies across the image; observations are valid only within local regions; deconvolution is a local process; colors mix at object boundaries; occlusions and disocclusions may occur. Very difficult: there is still no good solution.

  7. Related works (Hand shaking). Traditional approaches solve arg min_{I,K} f(I ◦ K − B): • Wiener filter [Wiener 1949] • Richardson–Lucy [Richardson 1972; Lucy 1974]. Recent regularization-based approaches solve arg min_{I,K} f(I ◦ K − B) + regularization terms: • Total variation regularization [Dey et al. 2004] • Natural image statistics [Fergus et al., SIGGRAPH 2006] • Alpha matte [Jia, CVPR 2007] • Multiscale regularization [Yuan et al., SIGGRAPH 2008] • High-order derivatives-of-Gaussian model [Shan et al., SIGGRAPH 2008]. Recent approaches using auxiliary information solve arg min_{I,K} f(I ◦ K′ − B′) + regularization terms: • Different exposures [Ben-Ezra and Nayar, CVPR 2003] • Flutter shutter [Raskar et al., SIGGRAPH 2006] • Coded aperture and sparsity prior [Levin et al., SIGGRAPH 2007] • Blurred and noisy pairs [Yuan et al., SIGGRAPH 2007] • Two blurred images [Rav-Acha and Peleg 2005; Chen and Tang, CVPR 2008].
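
Typeset in one place (the editor's restatement of the three objective forms on this slide, keeping the slide's ◦ for convolution; f is a data-fitting cost, R a regularizer, and K′, B′ come from the auxiliary observations):

```latex
% Unregularized, regularized, and auxiliary-information deblurring objectives.
\begin{align}
  (\hat{I},\hat{K}) &= \arg\min_{I,K}\; f(I \circ K - B) \\
  (\hat{I},\hat{K}) &= \arg\min_{I,K}\; f(I \circ K - B) + R(I,K) \\
  (\hat{I},\hat{K}) &= \arg\min_{I,K}\; f(I \circ K' - B') + R(I,K)
\end{align}
```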

  8. Related works (Object motion) • Translational motion: • Natural image statistics [Levin, NIPS 2006] • Two blurred images [Cho et al., ICCV 2007] • Motion-invariant photography [Levin et al., SIGGRAPH 2008] • In-plane rotational motion: • Shan et al., ICCV 2007 • Our approach [CVPR 2008]: • Handles motion blur from both hand shaking and object motion • Handles translational, in-plane/out-of-plane rotational, and zoom-in motion blur in a unified framework

  9. Basic idea [Ben-Ezra and Nayar, CVPR'03]. [Figure: exposure timeline of the high-resolution, low-frame-rate camera alongside the low-resolution, high-frame-rate camera.] Motion blur exists in the high-resolution images; our goal is to deblur the high-resolution images with assistance from the low-resolution, high-frame-rate video. Observation: there is a tradeoff between resolution and exposure time.

  10. Our Hybrid Camera • Hi-res: 1024 × 768 resolution at 25 fps • Low-res: 128 × 96 resolution at 100 fps • A beam-splitter is used to align their optical axes • Dual-video capture is synchronized by hardware. [Figure: the high-res camera and the low-res camera sharing a beam-splitter.]

  11. Observation 1 • Spatially varying motion blur kernels can be approximated from the motion vectors of the low-resolution video. [Figure: motion blur kernels K estimated from the low-resolution, high-frame-rate frames and associated with the high-resolution, low-frame-rate image.]
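
A hedged sketch of this observation in code (the editor's construction, not the paper's implementation; `kernel_from_flow`, the sampling density, and the kernel size are illustrative): chain the low-resolution flow vectors observed during one high-resolution exposure into a motion path and rasterize it into a PSF.

```python
# Approximate a blur kernel from low-res motion vectors by rasterizing the motion path.
import numpy as np

def kernel_from_flow(flow_vectors, scale=8.0, ksize=33):
    """flow_vectors: list of (dx, dy) low-res displacements, one per low-res frame
    captured during the high-res exposure.
    scale: resolution ratio between the cameras (e.g. 1024 / 128 = 8)."""
    K = np.zeros((ksize, ksize))
    pos = np.array([ksize // 2, ksize // 2], dtype=float)    # start at the kernel center
    samples_per_vec = 20                                     # densely sample the path
    for dx, dy in flow_vectors:
        step = scale * np.array([dy, dx]) / samples_per_vec  # (row, col) increment
        for _ in range(samples_per_vec):
            r, c = np.round(pos).astype(int)
            if 0 <= r < ksize and 0 <= c < ksize:
                K[r, c] += 1.0                               # deposit along the path
            pos += step
    return K / max(K.sum(), 1e-8)                            # normalize to unit sum
```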

  12. Observation 2 • The deblurred image, after down-sampling, should look similar to the low-resolution image. [Figure: the deblurred high-resolution image compared with the low-resolution image.]
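
This observation can be written as a consistency penalty; a minimal sketch follows (the editor's version, assuming plain bilinear downsampling rather than the camera's true optics).

```python
# Penalize disagreement between the downsampled deblurred estimate and the
# observed low-resolution frame.
import numpy as np
from scipy.ndimage import zoom

def lowres_consistency(I_deblurred, I_low):
    """Sum of squared differences between the downsampled estimate and the low-res frame."""
    factors = (I_low.shape[0] / I_deblurred.shape[0],
               I_low.shape[1] / I_deblurred.shape[1])
    I_down = zoom(I_deblurred, factors, order=1)     # bilinear downsample to low-res size
    h = min(I_down.shape[0], I_low.shape[0])         # guard against off-by-one sizes
    w = min(I_down.shape[1], I_low.shape[1])
    return float(np.sum((I_down[:h, :w] - I_low[:h, :w]) ** 2))
```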

  13. Our Formulation (Main Algorithm) • Bayesian ML/MAP model, with: I: deblurred image; K: estimated blur kernel; Ib: observed high-resolution blurred image; Il: observed low-resolution sharp image sequence; Ko: observed blur kernel from the optical flow computation.
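
The equation itself did not survive the transcript; one plausible MAP factorization over the variables listed above (the editor's reconstruction, not necessarily the paper's exact model) is:

```latex
% Likelihoods of the blurred high-res image, the low-res frames, and the
% flow-derived kernel, times a prior on the sharp image.
\begin{align}
  (\hat{I}, \hat{K}) &= \arg\max_{I,K}\; P(I, K \mid I_b, I_l, K_o), \\
  P(I, K \mid I_b, I_l, K_o) &\propto P(I_b \mid I, K)\, P(I_l \mid I)\, P(K_o \mid K)\, P(I).
\end{align}
```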

  14. Optimization Procedure • The objective combines a deconvolution term, a kernel regularization term, and a low-resolution regularization term. • Handled for both a globally invariant kernel (hand shaking) and spatially varying kernels (object motion).
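
To make the procedure concrete, here is a minimal sketch of the kind of iteration this slide and the summary ("combined deconvolution and back-projection") describe: a Richardson-Lucy-style deconvolution update followed by back-projection of the low-resolution residual. This is the editor's sketch under those assumptions, not the authors' implementation; `lam`, the iteration count, and the bilinear resampling are illustrative, and the resolutions are assumed to be exact integer multiples.

```python
# Iterative deblurring: multiplicative deconvolution update plus back-projection
# of the low-resolution residual (Observation 2).
import numpy as np
from scipy.ndimage import convolve, zoom

def deblur(I_b, K, I_low, iters=30, lam=0.1):
    """I_b: blurred high-res image (float), K: blur kernel, I_low: sharp low-res frame."""
    I = I_b.astype(float).copy()                   # initialize with the blurred image
    K_flip = K[::-1, ::-1]                         # adjoint of convolution with K
    down = (I_low.shape[0] / I.shape[0], I_low.shape[1] / I.shape[1])
    up = (I.shape[0] / I_low.shape[0], I.shape[1] / I_low.shape[1])
    for _ in range(iters):
        # Richardson-Lucy-style deconvolution step
        ratio = I_b / (convolve(I, K, mode='reflect') + 1e-8)
        I = I * convolve(ratio, K_flip, mode='reflect')
        # Back-project the low-resolution residual onto the estimate
        residual = I_low - zoom(I, down, order=1)
        I = I + lam * zoom(residual, up, order=1)
        I = np.clip(I, 0.0, None)                  # keep intensities non-negative
    return I
```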

  15. Moving Object Extraction • The moving object appears sharp in the high-frame-rate, low-resolution video • Perform binary moving-object segmentation in the low-resolution images • Compose the binary masks, with smoothing, to approximate the alpha matte in the high-resolution image. [Figure: the color-mixing problem at blurred object boundaries.]
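
A hedged sketch of this step (the editor's version, with illustrative thresholds; the authors' segmentation may differ): frame differencing gives a binary motion mask in the low-resolution frames, which is then upsampled and smoothed into an approximate alpha matte.

```python
# Binary motion segmentation in the low-res video, upsampled and smoothed into a soft matte.
import numpy as np
from scipy.ndimage import zoom, gaussian_filter, binary_opening

def approximate_alpha_matte(lowres_frames, hires_shape, diff_thresh=0.05, sigma=3.0):
    frames = np.stack(lowres_frames).astype(float)
    # Pixels that change between consecutive low-res frames belong to the moving object
    motion = np.abs(np.diff(frames, axis=0)).max(axis=0) > diff_thresh
    motion = binary_opening(motion)                        # remove isolated noisy pixels
    factors = (hires_shape[0] / motion.shape[0], hires_shape[1] / motion.shape[1])
    alpha = zoom(motion.astype(float), factors, order=1)   # upsample the binary mask
    return np.clip(gaussian_filter(alpha, sigma), 0.0, 1.0)  # smooth into a soft matte
```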

  16. Results • Image Deblurring: • Hand-shaking Motion Blur (Global Motion) • In-plane Rotational Motion Blur • Translational Motion • Zoom-in Motion • Video Deblurring: • Moving box: arbitrary in-plane motion • Car moving towards the camera: translational + zoom-in motion

  17. Results • Hand shaking (motion blur = global). [Figure panels: Input, Fergus et al. SIGGRAPH'06, Ben-Ezra et al. CVPR'03, Back Projection, Our Result, Ground Truth.]

  18. Results • Rotational motion (motion blur = spatially varying). [Figure panels: Input, Shan et al. ICCV'07, Ben-Ezra et al. CVPR'03, Back Projection, Our Result, Ground Truth.]

  19. Results • Translational motion (motion blur = global for the object). [Figure panels: Input, Fergus et al. SIGGRAPH'06, Ben-Ezra et al. CVPR'03, Back Projection, Our Result, Ground Truth.]

  20. Results • Zoom-in motion (motion blur = spatially varying). [Figure panels: Input, Fergus et al. SIGGRAPH'06, Ben-Ezra et al. CVPR'03, Back Projection, Our Result, Ground Truth.]

  21. Results (moving object) • In-plane rotation. [Show video]

  22. Results (moving object) • Out-of-plane motion (zoom + translate). [Show video]

  23. Limitations and Discussion • High frequencies lost during the convolution process cannot be recovered • Small ringing artifacts cannot be removed • Basic assumptions: • Constant illumination during exposure • Rigid objects • Moving objects do not overlap • Problems remain in separating moving objects from a moving background

  24. Summary of Image/Video Deblurring • Hybrid camera framework • Extended to spatially varying motion blur • Extended to video • Combined deconvolution and back-projection • Effective in reducing ringing artifacts • Effective in recovering motion-blurred details • Formulated as a Bayesian ML/MAP solution

  25. Thank you! (Questions and Answers) Personal homepage: http://www.comp.nus.edu.sg/~yuwing/
