
Northeastern University, Fall 2005 CSG242: Computational Photography






Presentation Transcript


  1. Northeastern University, Fall 2005, CSG242: Computational Photography. Ramesh Raskar, Mitsubishi Electric Research Labs / Northeastern University. Oct 5th, 2005. Course webpage: http://www.merl.com/people/raskar/photo/course/

  2. Plan for Today
  • Camera focus
  • Second programming assignment (due Oct 19th)
  • Midterm: Oct 26th
  • Project proposals due November 2nd
  • Term paper cancelled; student papers instead (2 per student, 15 mins each)
  • Reading list will be on the web

  3. Taking Notes
  • Use slides on the FTP site
  • Write down anecdotes and stories
  • Try to get what is NOT on the slide
  • Summarize questions and answers

  4. Assignments
  • Create a webpage and link to each assignment from it
  • Final project: create proposals; progress will be documented online
  • Send email with subject line "[Photo] Assignment N Firstname Lastname"
  • A webpage for course projects is good for an online resume!

  5. Focus

  6. Project 2, Due Oct 19th: High Depth of Field Images
  In this assignment you will combine multiple pictures to create a high depth of field (HDF) image. Then you will create special effects: (i) change the depth of field, (ii) change the orientation of the slab of focus.
  Useful links:
  http://grail.cs.washington.edu/projects/photomontage/
  http://www.sgi.com/misc/grafica/depth/index.html
  1. High-level code for HDF creation:
    Read in images 1 to n
    For each pixel
      Find the local variance in the n images
      Keep the pixel with the largest variance
  This should generate an image where all pixels are in focus. Create a labeling recording which pixel came from which image; this labeling gives an approximate (pseudo) depth map for the image.
  2. Special effects
  2a. Changing depth of field: create an image whose depth of field is larger than in the input images, but where everything outside the depth of field is blurred. Use the pseudo depth map created above and choose samples accordingly.
  2b. Changing the orientation of the plane of focus: for each shaft (group of columns), left to right, choose the shaft from successive images 1 to n. This creates an image where the plane of focus shifts from left to right. Experiment to create other focus-plane orientations.
  Assignment description:
  Dataset 1: download bug.zip from http://grail.cs.washington.edu/projects/photomontage/
  Dataset 2: take at least 5 pictures with varying focus
  Create the HDF image: average the five images in Matlab to get an idea of what you should get, then write Matlab code to create your own HDF image. Implement the special effects.
  Submit (due Oct 19th): all code; for dataset 1, the average, the label map, and the special-effects results; for dataset 2, the input images and three results. Send me a link to your assignment.
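The high-level code above can be sketched directly. The assignment asks for Matlab, but the same logic in NumPy is shown below; the function name, the window size, and the use of `sliding_window_view` for the local-variance step are my own choices, not part of the assignment.

```python
import numpy as np

def high_depth_of_field(stack, window=5):
    """Fuse a focus stack into one all-in-focus image.

    stack  : list of grayscale images (2-D arrays), all the same size
    window : side length of the local window used to measure sharpness

    Returns (fused, labels), where labels[y, x] is the index of the
    input image each output pixel was taken from (a pseudo depth map).
    """
    stack = np.stack([np.asarray(im, dtype=float) for im in stack])
    n, h, w = stack.shape
    pad = window // 2

    # Local variance as the sharpness measure, one value per pixel.
    variances = np.empty_like(stack)
    for i in range(n):
        padded = np.pad(stack[i], pad, mode="reflect")
        win = np.lib.stride_tricks.sliding_window_view(padded, (window, window))
        variances[i] = win.var(axis=(-2, -1))

    labels = variances.argmax(axis=0)  # sharpest image per pixel
    fused = np.take_along_axis(stack, labels[None], axis=0)[0]
    return fused, labels
```

The `labels` array is exactly the labeling the assignment asks for, and can be reused for the special effects in parts 2a and 2b.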

  7. Focus

  8. Focus and Defocus
  • A lens focuses light onto the film
  • There is a specific distance at which objects are "in focus"
  • Other points project to a "circle of confusion" in the image
  • How can we change the focus distance?
  Slide by Steve Seitz

  9. Depth of Field http://www.cambridgeincolour.com/tutorials/depth-of-field.htm

  10. Depth of Field
  DoF ∝ Distance to Object / (Aperture * Focal Length)

  11. Aperture Controls Depth of Field
  • Changing the aperture size affects depth of field
  • A smaller aperture increases the range in which the object is approximately in focus
  • But a small aperture reduces the amount of light, so the exposure must be increased

  12. FOV Depends on Focal Length f
  Smaller FOV = larger focal length

  13. Fun with Focal Length (Jim Sherwood) http://www.hash.com/users/jsherwood/tutes/focal/Zoomin.mov

  14. Perspective vs. Viewpoint
  • Telephoto makes it easier to select the background (a small change in viewpoint is a big change in background)

  15. Focal Length: Pinhole Optics
  • What happens when the film is half the size?
  • Application:
  • Real film is 36 x 24 mm
  • On the 20D, the sensor is 22.5 x 15.0 mm
  • Conversion factor on the 20D?
  • On the SD500, it is 1/1.8" (7.18 x 5.32 mm)
  • What is the 7.7-23.1 mm zoom on the SD500?
  [Figure: pinhole camera diagram with scene, pinhole, and film/sensor, labeled with distances f, d, 2f and sensor size s]
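As a worked answer to the conversion-factor questions above: the crop factor is conventionally the ratio of the full-frame diagonal to the sensor diagonal (the diagonal-ratio definition is a standard convention, not stated on the slide), and equivalent focal lengths scale by that factor.

```python
import math

def crop_factor(sensor_w, sensor_h, ref_w=36.0, ref_h=24.0):
    """Ratio of full-frame (36x24 mm) diagonal to sensor diagonal."""
    return math.hypot(ref_w, ref_h) / math.hypot(sensor_w, sensor_h)

cf_20d = crop_factor(22.5, 15.0)    # Canon 20D: about 1.6
cf_sd500 = crop_factor(7.18, 5.32)  # Canon SD500: about 4.8

# 35mm-equivalent range of the SD500's 7.7-23.1 mm zoom:
equiv = (7.7 * cf_sd500, 23.1 * cf_sd500)  # roughly 37-112 mm
```

So the 20D's 1.6x factor turns a 50 mm lens into an 80 mm equivalent, and the SD500's short zoom behaves like a normal-to-short-telephoto range on film.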

  16. F-stop
  • f-stop = focal length of the lens / diameter of the aperture
  • On a 50mm lens, f/2 is saying that the diameter of the aperture is 25mm (50mm / 2)

  17. Where do F-stop numbers come from ? http://www.kevinwilley.com/l3_topic01.htm

  18. Exposure versus f/stop (for the same amount of light)

  19. Depth of Field
  • Two views: object-centered or sensor-centered
  [Figure: lens, sensor, point in focus, object with texture, and the circle of confusion, drawn from both viewpoints]
  Frédo Durand, MIT Computer Science and Artificial Intelligence Laboratory (fredo@mit.edu)

  20. Depth of Field
  • Same in front of the focusing distance
  • Except it focuses behind the film
  [Figure: same lens/sensor diagram for a point in front of the plane of focus]

  21. Depth of Field
  • We allow for some tolerance
  [Figure: depth of field and depth of focus defined by the maximum acceptable circle of confusion]

  22. Depth of Field
  • What happens when we close the aperture by two stops?
  • The aperture diameter is divided by two
  • The depth of field is doubled
  [Figure: diaphragm narrowing the cone of rays through the lens]
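The claim above can be checked numerically with the standard thin-lens approximation DoF ≈ 2·N·c·s²/f², valid when the subject distance s is well below the hyperfocal distance. This formula and the 0.03 mm circle of confusion are common textbook assumptions, not taken from the slide.

```python
def dof_mm(f_mm, N, s_mm, c_mm=0.03):
    """Approximate depth of field (mm) for subject distance well below
    the hyperfocal distance.

    f_mm : focal length       N    : f-number
    s_mm : subject distance   c_mm : max acceptable circle of confusion
           (0.03 mm is a common full-frame assumption)
    """
    return 2 * N * c_mm * s_mm ** 2 / f_mm ** 2

near = dof_mm(50, 2.8, 2000)  # 50 mm lens at f/2.8, subject at 2 m
far = dof_mm(50, 5.6, 2000)   # closed down two stops: N doubles
# far / near == 2: halving the aperture diameter doubles the DoF
```

Since DoF is linear in N, and two stops multiply N by (√2)² = 2, the doubling on the slide falls out directly.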

  23. DoF Depends on Aperture

  24. Depth of Field Depends on Focusing Distance
  • What happens when we divide the focusing distance by two?
  • Similar triangles => the depth of field is divided by two as well
  [Figure: half depth of field on either side of the point in focus]

  25. Depends on Focusing Distance

  26. Depends on Focal Length
  • Remember the definition: f-stop = focal length / aperture diameter

  27. Depth of Field
  DoF ∝ Distance to Object / (Aperture * Focal Length)

  28. Aperture: Stops and Pupils • Principal effect: changes exposure • Side effect: depth of field

  29. Depth of Field
  • It's all about the size of the lens aperture
  [Figure: the same scene imaged with a small and a large aperture]

  30. Auto Focus
  Autofocus systems (from Goldberg):
  • Active methods: time of flight using an ultrasonic chirp, or triangulation using a narrow beam of infrared light. Both fail at long distances, with dark or reflective objects, etc.

  31. Auto Focus
  Autofocus systems (from Goldberg):
  • Passive methods: these require a reasonably bright scene
  • Two-view methods (for non-SLR cameras): compare 1D images from two viewpoints using a 1D CCD strip and look for a match
  • Contrast method: compares the contrast of images at three depths; if in focus, the image will have high contrast, else not
  • Phase methods: compare two parts of the lens at the sensor plane; if in focus, the entire exit pupil sees a uniform color, else not (assumes the object has a diffuse BRDF)
  • One variant directs two parts of the lens through lenslets; if in focus, the left and right halves of the lenslet images are identical (Goldberg, p. 41)
  • Another variant directs two parts of the lens through lenslets; if in focus, the distance between imagelets matches a reference, similar in spirit to split-wedge focusing (Goldberg, p. 43)
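The contrast method above can be sketched in a few lines: sweep the focus, score each frame's contrast, and keep the sharpest. The variance-of-Laplacian score used here is one common sharpness measure, not necessarily what any particular camera implements.

```python
import numpy as np

def contrast_score(img):
    """Sharpness score for contrast-detection AF: variance of a
    discrete 5-point Laplacian over the image interior."""
    img = np.asarray(img, dtype=float)
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def pick_focus(stack):
    """Return the index of the sharpest image in a focus sweep."""
    return int(np.argmax([contrast_score(im) for im in stack]))
```

An in-focus frame has strong edges, so its Laplacian varies wildly; a defocused frame is smooth and scores near zero, which is exactly the "high contrast if in focus, else not" test on the slide.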
