
Course 3: Computational Photography






Presentation Transcript


  1. Course 3: Computational Photography 3: Improvements to Film-like Photography Jack Tumblin Northwestern University

  2. Improving FILM-LIKE Camera Performance What would make it ‘perfect’? • Dynamic Range • Vary Focus Point-by-Point • Field of view vs. Resolution • Exposure time and Frame rate

  3. The Photographic Signal Path Computing can improve every component of the path: light sources → optics → rays → “scene” → rays → optics → sensors → processing, storage → display.

  4. Computational Photography Novel illumination (light sources + generalized optics) → scene: 8D ray modulator → novel cameras (generalized optics + generalized sensor + processing) → display (recreate the 4D lightfield).

  5. Computable Improvements (just a partial list) • Sensors: parallel sensor arrays; high dynamic range; frame rate; resolution; 2-, 3-, 4-D sensing; super-resolution • Sensor optics: image stabilization; ‘steerable’ subpixels (Nayar); reconstructive sensing; ‘pushbroom’, omnidirectional, and multilinear imaging • Lights: dynamic range (‘The best tone map is no tone map’ –P. Debevec); 2-, 3-, 4-D lightfield creation • Light optics: steered beams; ‘structured light’; ad-hoc/handheld illumination; 2-, 3-, 4-D control • Processing: selective merging; graph-cut/gradient methods; boundaries; synthetic aperture; virtual viewpoint; virtual camera; quantization; optic-flaw removal • Display: ambient light control; light-sensitive displays; true 4-D static white-light holography (Zebra Imaging)

  6. Common Thread: Existing Film-like Camera quality is VERY HIGH, despite low cost. Want even better results? • Make MANY MEASUREMENTS • Merge the best of them computationally *Exceptions* to this may prove most valuable? Or will we learn to make arrays practical?

  7. ‘Virtual Optics’: MIT Dynamic Light Field Camera • Yang et al. 2002 • 64 tightly packed commodity CMOS webcams • 30 Hz, scalable, real-time • Multiple dynamic virtual viewpoints • Efficient bandwidth usage: ‘send only what you see’

  8. Camera, Display Array: MERL 3D-TV System • 16-camera horizontal array • Auto-stereoscopic display • Efficient coding, transmission. Anthony Vetro, Wojciech Matusik, Hanspeter Pfister, and Jun Xin

  9. Stanford Camera Array (SIGG 2005)

  10. ‘Assorted Pixels’ Sensor [figure: Bayer grid of interleaved R, G, B color filters] The Bayer grid interleaves color filters; let’s interleave OTHER assorted measures too. ‘De-mosaicking’ helps preserve resolution…
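
A minimal sketch of what ‘de-mosaicking’ means for the green channel of an RGGB Bayer grid: each missing green sample is bilinearly interpolated from its recorded neighbors. The function name and the SciPy-based convolution are illustrative choices, not code from the course.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_green(raw):
    """Bilinear green-channel interpolation for an RGGB Bayer mosaic.
    raw: 2-D array of raw sensor values; in an RGGB layout, green
    samples sit where (row + col) is odd."""
    h, w = raw.shape
    rows, cols = np.mgrid[0:h, 0:w]
    green_site = (rows + cols) % 2 == 1
    green = np.where(green_site, raw, 0.0)
    # Average the 4-connected green neighbours at each non-green site.
    k = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
    nbr_sum = convolve(green, k, mode='mirror')
    nbr_cnt = convolve(green_site.astype(float), k, mode='mirror')
    return np.where(green_site, raw, nbr_sum / np.maximum(nbr_cnt, 1.0))
```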

  11. Improving FILM-LIKE Camera Performance What would make it ‘perfect’ ? • Dynamic Range

  12. Film-Style Camera: Dynamic Range Limits • Under-exposure: highlight details captured, shadow details lost • Over-exposure: highlight details lost, shadow details captured

  13. Problem: Map Scene to Display. Domain of human vision: from ~10⁻⁶ to ~10⁺⁸ cd/m² (starlight, moonlight, office light, daylight, flashbulb). Range of typical displays: from ~1 to ~100 cd/m², driven by 8-bit codes 0–255. How should one map onto the other?
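
To make the mismatch concrete, here is a minimal global tone-map sketch: log luminance across the ~14 decades of human vision is placed linearly onto 8-bit display codes. The range constants follow the slide; the linear-in-log mapping itself is an illustrative assumption, not a recommended operator.

```python
import numpy as np

def log_tonemap(L, L_min=1e-6, L_max=1e8):
    """Map scene luminance L (cd/m^2) to 0..255 display codes by
    placing log10-luminance linearly on the display's code range."""
    logL = np.log10(np.clip(L, L_min, L_max))
    t = (logL - np.log10(L_min)) / (np.log10(L_max) - np.log10(L_min))
    return np.round(255.0 * t).astype(np.uint8)
```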

  14. High dynamic range capture (HDR) • overcomes one of photography’s key limitations • negative film = 250:1 (8 stops) • paper prints = 50:1 • [Debevec97] = 250,000:1 (18 stops) • hot topic at recent SIGGRAPHs 

  15. Debevec’97 (see www.HDRshop.com) [figure: pixel value Z vs. log L, with sample locations j = 0…6 on the unknown response curve f(log L)] STEP 1: number the images i, and pick fixed spots (xj, yj) that sample the scene’s radiance values log L well.

  16. Debevec’97 (see www.HDRshop.com) [figure: the same plot of pixel value Z vs. log L] STEP 2: collect pixel values Zij (from image i, location j); all of them sample the response curve f(log L).

  17. Debevec’97 (see www.HDRshop.com) [figure: reconstructed response curve, pixel value Z vs. log L] • Use the multiple samples to reconstruct the response curve • Then use the inverse response curve to reconstruct the intensities log L that caused the responses
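
After steps 1 and 2 recover the response curve, step 3 of Debevec & Malik ’97 averages the exposure-corrected samples, weighting mid-range pixel values most. A sketch, assuming the inverse response g (with g[Z] = log exposure for pixel value Z) has already been fit, e.g. with HDRShop, and that the 8-bit frames are registered:

```python
import numpy as np

def merge_hdr(images, shutter_times, g):
    """Recover log radiance per pixel from registered 8-bit exposures.
    g: length-256 array, g[Z] = log exposure for pixel value Z
       (the inverse response curve recovered in steps 1-2)."""
    Z = np.arange(256)
    w = np.minimum(Z, 255 - Z).astype(float)   # hat weights: trust mid-range
    num = np.zeros(images[0].shape, float)
    den = np.zeros_like(num)
    for img, t in zip(images, shutter_times):
        # log E = g(Z) - log(shutter time), per Debevec & Malik '97.
        num += w[img] * (g[img] - np.log(t))
        den += w[img]
    return num / np.maximum(den, 1e-6)         # log radiance per pixel
```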

  18. HDR Direct Sensing? • An open problem! (esp. for video…) • A direct (and expensive) solution: Flying Spot Radiometer, a brute-force instrument: costly, slow, delicate • Some other novel image sensors: • line-scan cameras (e.g. Spheron: multi-detector) • logarithmic CMOS circuits (e.g. Fraunhofer Inst.) • self-resetting pixels (e.g. sMaL/Cypress Semi) • gradient detectors (CVPR 2005, Tumblin, Raskar et al.)

  19. HDR From Multiple Measurements Captured images → computed image (courtesy Shree Nayar, Tomoo Mitsunaga 99). Ginosar et al 92, Burt & Kolczynski 93, Madden 93, Tsai 94, Saito 95, Mann & Picard 95, Debevec & Malik 97, Mitsunaga & Nayar 99, Robertson et al. 99, Kang et al. 03

  20. MANY ways to make multiple exposure measurements. Sequential exposure change (over time): Ginosar et al 92, Burt & Kolczynski 93, Madden 93, Tsai 94, Saito 95, Mann 95, Debevec & Malik 97, Mitsunaga & Nayar 99, Robertson et al. 99, Kang et al. 03. Mosaicing with a spatially varying filter (over time): Schechner and Nayar 01, Aggarwal and Ahuja 01. Multiple image detectors: Doi et al. 86, Saito 95, Saito 96, Kimura 98, Ikeda 98, Aggarwal & Ahuja 01, …

  21. MANY ways to make multiple exposure measurements. [figure: Bayer-style grid of interleaved R, G, B pixels] Multiple sensor elements in a pixel: Handy 86, Wen 89, Murakoshi 94, Konishi et al. 95, Hamazaki 96, Street 98. Assorted Pixels, a generalized Bayer grid: trade resolution for multiple exposures and colors. Nayar and Mitsunaga 00, Nayar and Narasimhan 02
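
A sketch of how such an exposure mosaic might be reconstructed: each pixel is captured through one of several known attenuation levels, so valid (unsaturated, above-noise) samples are rescaled to a common exposure and the remaining holes are filled from neighbors. The thresholds and the nan-mean hole filling are illustrative assumptions, not the published algorithm:

```python
import numpy as np
from scipy.ndimage import generic_filter

def sve_reconstruct(raw, exposure_map, dark=5, sat=250):
    """raw: 8-bit mosaic; exposure_map: each pixel's known relative
    exposure (e.g. 1.0 and 0.25 interleaved on a checkerboard)."""
    valid = (raw > dark) & (raw < sat)
    # Rescale valid samples to a common exposure; mark the rest unknown.
    radiance = np.where(valid, raw / exposure_map, np.nan)
    # Fill unknown pixels with the nan-mean of their 3x3 neighbourhood.
    filled = generic_filter(radiance, np.nanmean, size=3, mode='mirror')
    return np.where(valid, radiance, filled)
```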

  22. Camera with Assorted Pixels Digital still camera vs. assorted-pixel camera prototype (courtesy: Sony Kihara Research Lab)

  23. Another Approach: Locally Adjusted Sensor Sensitivity (each pixel’s sensitivity set by its own illumination). Computational pixels: Brajovic & Kanade 96, Ginosar & Gnusin 97, Serafini & Sodini 00

  24. Sensor: LCD Adaptive Light Attenuator [figure: incoming light passes an attenuator element whose transmittance T(t+1) is set by a controller from the detector reading I(t)] The LCD light attenuator limits the image intensity reaching the 8-bit sensor; compare the unprotected vs. the attenuator-protected 8-bit sensor output.
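
The controller in the diagram chooses the next frame’s transmittance T(t+1) from the current detector reading I(t). A minimal per-pixel feedback sketch; the target, gain, and limits are invented for illustration, since the slide does not give the actual control law:

```python
def update_attenuator(T, I, target=128, gain=0.5, T_min=0.01):
    """One feedback step: dim the LCD where the 8-bit detector nears
    saturation, open it where the detector is starved.
    T: current transmittance in (0, 1]; I: detector reading I(t).
    The scene radiance estimate for this pixel is then I / T."""
    # Multiplicative update toward a mid-range detector reading.
    T_next = T * (target / max(float(I), 1.0)) ** gain
    return min(max(T_next, T_min), 1.0)
```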

  25. Improving FILM-LIKE Camera Performance • Vary Focus Point-by-Point

  26. High depth-of-field Levoy et al., SIGG2005 • adjacent views use different focus settings • for each pixel, select the sharpest view [Haeberli90] [figures: close focus, distant focus, composite]
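
A sketch of that per-pixel ‘select sharpest view’ composite in the spirit of [Haeberli90]: score each aligned frame by local Laplacian energy and, at every pixel, keep the frame that scores highest. The sharpness measure and smoothing size are illustrative choices:

```python
import numpy as np
from scipy.ndimage import laplace, maximum_filter

def focus_stack(images):
    """images: aligned greyscale frames, each focused at a different
    depth; returns an all-in-focus composite."""
    stack = np.stack([im.astype(float) for im in images])   # (n, h, w)
    # Sharpness per frame: absolute Laplacian, locally dilated to
    # suppress speckle in the selection map.
    sharp = np.stack([maximum_filter(np.abs(laplace(im)), size=5)
                      for im in stack])
    best = np.argmax(sharp, axis=0)                          # (h, w)
    return np.take_along_axis(stack, best[None], axis=0)[0]
```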

  27. Single-Axis Multi-Parameter Camera (SAMP) Idea: cameras + beamsplitters place MANY (8) cameras at the same virtual location. 2005: Morgan McGuire (Brown), Wojciech Matusik (MERL), Hanspeter Pfister (MERL), Fredo Durand (MIT), John Hughes (Brown), Shree Nayar (Columbia)

  28. SAMP Prototype System (Layout)

  29. Multiple Simultaneous Focus Depths [figure: ‘co-located’ lenses with fore and back focal planes at depths zF and zB] Strongly desired in microscopy, too: see http://www.micrographia.com/articlz/artmicgr/mscspec/mscs0100.htm

  30. Synthetic aperture photography Long history in RADAR; new to photography. See ‘Confocal Photography’, Levoy et al. SIGG2004. [figure: in-focus point (green), out-of-focus point (red)]

  31. Synthetic aperture photography Smaller aperture → less blur: a smaller circle of confusion
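
The slide’s claim follows from the thin-lens blur formula: a point at distance D, seen through a lens of focal length f and aperture diameter A focused at distance S, images to a blur circle of diameter c = A · f · |D − S| / (D · (S − f)), which is linear in A. A small sketch:

```python
def circle_of_confusion(f, A, S, D):
    """Blur-circle diameter on the sensor, thin-lens model.
    f: focal length, A: aperture diameter, S: focused distance,
    D: subject distance (all in the same units, e.g. mm)."""
    return A * f * abs(D - S) / (D * (S - f))

# Halving the aperture halves the blur, as the slide says:
# circle_of_confusion(50, 25, 2000, 4000) is exactly twice
# circle_of_confusion(50, 12.5, 2000, 4000).
```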

  32. Synthetic aperture photography Merge MANY cameras to act as ONE BIG LENS. Small items become so blurry they seem to disappear…

  33. Synthetic aperture photography Huge lens’ ray bundle is now summed COMPUTATIONALLY:

  34. Synthetic aperture photography Computed image: large lens’ ray bundle Summed for each pixel

  35. Synthetic aperture photography Computed image: large lens’ ray bundle Summed for each pixel
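
A shift-and-add sketch of that computational summation for a planar camera array: each camera’s image is translated by the parallax of the chosen focal plane and the results are averaged, so points on that plane align while everything off-plane blurs as if seen through one huge lens. The calibration factor `scale` is a hypothetical stand-in for the array’s real geometric calibration:

```python
import numpy as np
from scipy.ndimage import shift

def synthetic_aperture(images, positions, depth, scale=1.0):
    """images: frames from cameras at planar offsets `positions`
    (same units as `depth`); returns the refocused average."""
    acc = np.zeros(images[0].shape, float)
    for img, (x, y) in zip(images, positions):
        # Parallax of the focal plane is camera offset / depth,
        # converted to pixels by the calibration factor `scale`.
        dx, dy = scale * x / depth, scale * y / depth
        acc += shift(img.astype(float), (dy, dx), order=1, mode='nearest')
    return acc / len(images)
```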

  36. Long-range synthetic aperture photography

  37. Focus Adjustment: Sum of Bundles

  38. Improving FILM-LIKE Camera Performance • Field of view vs. Resolution? Are we done? Almost EVERY digital camera has panoramic stitching. No; much more is possible:

  39. A tiled camera array • 12 × 8 array of VGA cameras • abutted: 7680 × 3840 pixels • overlapped 50%: half of this • total field of view = 29° wide • seamless mosaicing isn’t hard • cameras individually metered • approximately the same center of projection

  40. Tiled panoramic image (before geometric or color calibration)

  41. Tiled panoramic image (after calibration and blending)

  42. [figure: per-camera exposure maps for the tiled array: same exposure in all cameras (1/60 s everywhere); individually metered (exposures range from 1/120 to 1/30 s); individually metered and overlapped 50%]
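
When cameras are individually metered, as in the middle panel, the tiles must first be brought onto a common radiometric scale before blending. For linear sensor values a shutter-ratio rescale suffices; the reference time and the linearity are assumptions in this sketch:

```python
def normalize_tile(tile, shutter, reference=1/60):
    """Rescale a linear-response tile metered at `shutter` seconds to
    the common `reference` exposure, so a 1/120 s tile is doubled
    before blending with its 1/60 s neighbours."""
    return tile * (reference / shutter)
```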

  43. Improving FILM-LIKE Camera Performance • Exposure time and Frame rate

  44. High Speed Video • Say you want 120 frames per second (fps) of video. • You could get one camera that runs at 120 fps • Or…

  45. High Speed Video • Say you want 120 frames per second (fps) of video. • You could get one camera that runs at 120 fps • Or… get 4 cameras running at 30 fps.
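
The trick is to stagger the triggers: camera k fires 1/(n·fps) seconds after camera k−1, and round-robin interleaving of the streams yields the high-speed sequence. A minimal sketch:

```python
def trigger_offsets(n_cameras=4, camera_fps=30):
    """Phase offsets (seconds): camera k starts at k / (n * fps), then
    fires every 1/fps, so the merged stream runs at n * fps."""
    return [k / (n_cameras * camera_fps) for k in range(n_cameras)]

def interleave(streams):
    """streams[k] is camera k's frame list, already in trigger order;
    round-robin merge: 4 x 30 fps -> one 120 fps sequence."""
    return [frame for group in zip(*streams) for frame in group]
```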

  46. 52 Camera Cluster, 1560 FPS Levoy et al., SIGG2005

  47. Conclusions • Multiple measurements: multi-camera, multi-sensor, multi-optics, multi-lighting • Intrinsic limits seem to require it: lens diffraction limits, noise, available light power • Are we eligible for Moore’s law? Or will lens-making and mechanics limit us?
