
Introduction to Digital Image Analysis






  1. Introduction to Digital Image Analysis Padhraic Smyth Information and Computer Science CS 175, Fall 2007

  2. Digital Image Analysis and Computer Vision (pipeline diagram)
  Real World Scene -> Image Capture Device -> Digital Image -> Vision Algorithms

  3. Image Capture (diagram: a light source casting incident light on the scene)

  4. Image Capture (diagram: incident light from the light source is reflected by the scene toward the imaging device)

  5. Imaging Devices (e.g., CCD Cameras) (diagram: reflected light falling on a light-sensitive pixel element)

  6. Factors affecting Image Capture
  • Light Source
    • position and angle relative to imaged scene
    • strength of light source
  • 3-Dimensional Imaged Scene
    • 3-dimensional arrangement of objects
    • orientation of objects
    • relative position of objects
      • to each other (occlusion)
      • to light source (shadowing)
    • surface reflectance
      • how much light is reflected from different object surfaces
    • surface texture
      • nature of object’s texture (contributes to reflectance)

  7. More Factors affecting Image Capture
  • Imaging Device
    • viewpoint: position and angle of device relative to imaged scene
    • sensitivity of measuring device
    • noise of measuring device
    • resolution:
      • for an object 1 meter away, how many pixels are used to represent the image

  8. Image Resolution (diagram: an image grid N pixels across and M pixels down)
  • What does a pixel measure?
    • each pixel measures the light intensity coming from a specific area of the imaged scene (for monochrome imaging)
    • essentially the average photon intensity of that region
  • Image Resolution
    • 1/N is the horizontal resolution, 1/M the vertical
    • e.g., for a 1-meter-wide object filling the image, details can be seen down to a resolution of 1/N meter
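
A quick numeric sketch of the 1/N resolution rule in MATLAB (the pixel count and scene width here are illustrative, not from the course):

```matlab
% Resolution rule: a scene of width sceneWidth meters imaged onto N pixels
% resolves horizontal details no smaller than sceneWidth/N meters.
N = 100;                      % horizontal pixel count (illustrative)
sceneWidth = 1;               % a 1-meter-wide object filling the image
resolution = sceneWidth / N;  % smallest resolvable horizontal detail: 0.01 m
```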

  9. Intensity Quantization (diagram: an image grid N pixels across and M pixels down)
  • Quantizing the Light Intensity
    • each pixel literally measures the incident photons at that location
    • this is quantized (A/D converter) for digital representation
    • the photon counts are “binned” and converted to a fixed scale
      • e.g., 8-bit, 1 to 256
      • 1 corresponds to no photons (dark)
      • 256 corresponds to the maximum brightness
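
The binning step can be sketched in a few lines of MATLAB, using the slides’ 1-to-256 convention (the photon counts and full-scale value are made-up numbers):

```matlab
% Quantize simulated photon counts onto a 256-level scale:
% 1 = no photons (dark), 256 = maximum brightness.
photonCounts = [0 200 500 999 1000];   % hypothetical measurements
maxCount = 1000;                       % full-scale (brightest) count
q = 1 + floor(photonCounts / maxCount * 255);
q = min(q, 256);                       % clamp full-scale input to the top bin
% q is now [1 52 128 255 256]
```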

  10. Imaging Devices
  • Limitations
    • The M x N pixel image is a two-dimensional representation of a 3-dimensional scene
      • different 3-dimensional scenes can give rise to the same 2-dimensional image!
    • Measuring reflected light means that the measurement depends on the light source
      • the same scene can give rise to different images
      • this introduces unwanted variability into the imaging process
    • Finite resolution means that a specific pixel may be measuring reflected light from different objects (e.g., at an edge)
      • spatial differences below the resolution will not be “seen”
    • Quantization means that very small differences in measured light intensity will not be recordable

  11. Displaying Images in MATLAB
  • Images are stored as two-dimensional arrays (as matrices)
    • image(i,j) is the intensity value of pixel (i,j)
    • the intensity is (usually) a positive number
      • larger values indicate brighter pixels, smaller values darker ones
    • typically recorded as an 8-bit or 16-bit integer for gray-scale, but the actual numbers in the image array may be represented as reals
  >> whos
    Name      Size      Bytes    Class
    image     90x100    72000    double array
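
Because an image is just a matrix, ordinary MATLAB indexing reads and writes pixels; a small sketch (the values are illustrative):

```matlab
img = zeros(90, 100);   % a 90 x 100 all-dark image, matching the whos output
img(45, 50) = 255;      % brighten the pixel at row 45, column 50
[M, N] = size(img);     % M = 90 rows, N = 100 columns
peak = max(img(:));     % brightest intensity anywhere in the image: 255
```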

  12. MATLAB exercise 1: image pixels
  • Download faceimage.mat, scale.m, dispimg.m from the Web page
  • Load faceimage.mat into MATLAB
  • What are the dimensions of the image?
    • How many pixels in the vertical and horizontal directions?
  • What are the maximum and minimum pixel values?

  13. MATLAB Exercise 2: image display
  • Execute the following commands in MATLAB:
  >> image(faceimage);
  What do you see?
  >> scaled_image = scale_pixel_values(faceimage,1,256);
  What are the max and min pixel values now?
  >> image(scaled_image);
  Now what do you see?
  >> colormap(gray(256));
  >> image(scaled_image);
  Now you should see a “regular” gray-scale image

  14. Image Display and Colormaps (diagram)
  INTENSITY IMAGE: e.g., this pixel has intensity = 212
  Colormap (Lookup Table): 212 is mapped to the (R,G,B) triple screen-color(212)
  DISPLAY IMAGE: screen-color(212) is displayed on screen
  Note! One can change the colormap (e.g., to increase the brightness of the displayed image) while leaving the intensity image unchanged
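
The lookup-table step can be reproduced directly: a MATLAB colormap is just a 256 x 3 matrix, and the displayed color of intensity 212 is its 212th row. A sketch:

```matlab
cmap = gray(256);                  % 256 rows of (R,G,B) values in [0,1]
intensity = 212;
screenColor = cmap(intensity, :);  % the color actually drawn for this pixel
% Swapping in a different colormap (e.g., hot(256)) changes screenColor
% without modifying the intensity image itself.
```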

  15. The image.m Function
  • Notes:
    • image uses the current colormap (whatever it is set to)
      • e.g., colormap(‘gray’), colormap(‘hot’)
    • we will work with gray-scale images of size M x N
    • color images are M x N x 3, with one intensity component for each of R, G, and B
  >> help image
  IMAGE Display image.
    IMAGE(C) displays matrix C as an image. Each element of C
    specifies the color of a rectilinear patch in the image. C can be
    a matrix of dimension MxN or MxNx3, and can contain double,
    uint8, or uint16 data.

  16. The dispimg.m function
  • Automatically scales the image intensities and sets the colormap

  function dispimg(img)
  % displays an intensity image (on any intensity scale) using a gray-scale
  % colormap with 256 colors. The intensity data are scaled to use the full
  % colormap, i.e., the darkest pixel is mapped to 1 and the brightest to 256

  % first create a temporary image called y which maps the darkest intensity to
  % 1 and the brightest intensity to 256, using the utility function scale.m
  y = scale(img,-Inf,Inf,1,256);

  % now plot the image.....
  figure;               % create a figure window
  image(y);             % use image to display the scaled image
  colormap(gray(256));  % use a standard colormap
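
dispimg.m calls a helper scale.m whose code is not shown on the slides. A plausible linear rescaling with that call signature might look like this (an assumption, not the course’s actual scale.m):

```matlab
function y = scale(x, lo, hi, newlo, newhi)
% Linearly map intensities from [lo, hi] onto [newlo, newhi].
% Matching the -Inf/Inf arguments used in dispimg.m, infinite limits
% mean "use the data's own min/max" (this behavior is assumed).
if lo == -Inf, lo = min(x(:)); end
if hi ==  Inf, hi = max(x(:)); end
y = newlo + (x - lo) * (newhi - newlo) / (hi - lo);
end
```

Under this sketch, scale([0 5 10], -Inf, Inf, 1, 256) returns [1 128.5 256], so the darkest pixel maps to 1 and the brightest to 256, as the comments in dispimg.m describe.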

  17. Example of a Face Image >> dispimg(faceimage)

  18. Pixel Histogram for a Face Image >> hist(reshape(faceimage,120*128,1), 100)
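
The reshape call flattens the 120 x 128 image into a single column so hist sees one list of pixel values; the colon operator is an equivalent shorthand (x below is a random stand-in for faceimage):

```matlab
x = rand(120, 128);           % stand-in for faceimage
v1 = reshape(x, 120*128, 1);  % explicit flatten, as on the slide
v2 = x(:);                    % the same column vector, written compactly
% hist(x(:), 100) therefore produces the same 100-bin histogram
```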

  19. A Cross-section of an Image >> plot(faceimage(50,:))

  20. Example of an Image and Pixel Values

  21. Automated Image Analysis

  22. Multiple 3-D interpretations are possible for a 2-D image => visual ambiguity

  23. One Interpretation

  24. Another Interpretation

  25. Sources of Variation in Intensity Images
  • Assume we are looking at a specific object (e.g., a face)
  • Assume the light source, camera, etc., are fixed
  • There are two systematic sources of variation:
    • Object-specific variation
      • variations in reflectance
        • bright skin, dark hair, lips
      • shadows
        • nose, mouth, ears
      • texture variation
        • curly hair versus flat hair, etc.
    • Viewpoint-specific variation (next slide)

  26. Viewpoint-specific Variation
  • This is the variation in the intensity values which arises from changes in the relative position of the camera and the face
    • it is not an “intrinsic property” of the face
    • so it typically gets in the way of recognition and classification
  • Scale
    • distance of the face to the camera
  • Translation
    • relative position of the face in the image
    • e.g., centered or not
  • Orientation/Pose
    • angle of the face relative to the image
    • e.g., looking upwards or sideways
  • Deformation
    • the face is distorted from its “normal” position
    • smiling, shouting, etc.

  27. Variations in face images
  • Consider an image of the face of George Bush
    • what are all the variations one could have in images of his face?
  • short-term variations:
    • scale (distance)
    • translation (position in image)
    • orientation (pose)
    • expression
    • lighting (day, night, shadows)
    • sunglasses
    • hair-style
  • longer-term variations:
    • “weight”
    • beard, moustache
    • scar/injury

  28. Recognizing George Bush’s face
  • Images from Google Image Search
  • Note variations in
    • scale
    • face orientation
    • lighting
    • scene complexity, etc.

  29. Different Lighting for the Same Face

  30. Image Classification
  • Say we want to build a system which outputs a 1 when George Bush is in the picture and a 0 otherwise
  • This is a classification problem with 2 classes:
    • Class 1: “image contains George Bush”
    • Class 2: “image does not contain George Bush”
  • What could we use as features (the inputs to the classifier)?
    • The M x N pixels in each image (if all images are the same size)
    • Or we could use features derived from the image
      • location, size of face
      • relative position of eyes from nose, etc.
      • this assumes we can find the face in the image
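
The “M x N pixels as features” option can be sketched as follows, assuming all images share the same size (the random matrix stands in for a real training image):

```matlab
M = 90; N = 100;    % assumed common image size
img = rand(M, N);   % stand-in for one training image
x = img(:)';        % flatten to a 1 x (M*N) feature row vector
% Stacking one such row per image forms the classifier's training matrix.
```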

  31. Stages in Face Classification (pipeline diagram)
  INTENSITY IMAGE -> Face Locator -> Feature Extraction -> Classifier -> Classification Decision

  32. Stages in Face Classification
  • Face Location:
    • find the set of pixels that look most like a face
    • What to do if there are multiple faces? No faces?
    • Becomes harder as there is more variation in appearance (orientation, scale, etc.)
    • For certain images (e.g., “mug shots”) we could use all pixels in the image and skip the step of locating the face
  • Feature Extraction:
    • extract specific features for each region of interest (high-level)
    • e.g., shape of object, size of nose, relative position of eyes, etc.
    • an alternative is to use the pixels directly as features (low-level)
  • Classification:
    • classify features into “face” or “not a face”
    • the classifier is trained on training data
      • positive examples: images with the desired faces
      • negative examples: images without the desired faces

  33. Different types of Face Classification
  • Identification
    • Class i = the ith person in the database, for i from 1 to M
    • Class M+1 = everyone else
  • Binary version of Identification
    • Class 1 = “is in the database” (e.g., is an employee)
    • Class 2 = “is not in the database” (e.g., is not an employee)
  • Detection
    • Class 1 = there is at least 1 face in the image
    • Class 2 = there is no face in the image
  • Classification of face types
    • Class 1 = male face, Class 2 = female face
    • Class 1 = has a beard, Class 2 = does not have a beard

  34. Applications of Face Recognition
  • Security
    • automatic identification of a user for building entry, airport security
  • Retrieval and Annotation
    • news services have large databases of video/image data
    • would like to be able to quickly find “images of Bill Clinton with Gov. Arnold”
    • online photo databases (e.g., Flickr)
      • automatically annotate images (e.g., do they contain faces or not?)
  • User Interfaces
    • recognition of the user at a terminal, personalized interfaces
    • recognition of human emotions
  • Handicapped Services
    • automated “lip-reading”, recognition of faces for blind people

  35. Recognizing George Bush’s face
  • Images from Google Image Search
  • Note variations in
    • scale
    • face orientation
    • lighting
    • scene complexity, etc.

  36. Locating any Face in an Image
  • Images from Google Image Search
  • Note variations in
    • scale
    • face orientation
    • lighting
    • scene complexity, etc.

  37. Is there a face image on this Web page?

  38. Is there a face image on this Web page?

  39. Is there a face image on this Web page?

  40. Is there a face image on this Web page?

  41. Which images contain human faces?
  • Images from the New York Times Web page, Oct 19th, 2006
  • Note the variety of types of images, lighting, scale, orientation
