Processing Digital Images
• Filtering
• Analysis
• Recognition
• Transmission
Filtering
• digital images are often processed using “digital filters”
• digital filters are based on mathematical functions that operate on the pixels of the image
Filtering
• there are two classes of digital filters: global and local
• global filters transform each pixel uniformly according to the function, regardless of its location in the image
• local filters transform a pixel depending upon its relation to surrounding ones
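To make the global/local distinction concrete, here is a minimal NumPy sketch (not part of the original slides): an inversion applied identically to every pixel versus a 3×3 mean blur whose output depends on each pixel's neighbourhood. The array values are purely illustrative.

```python
import numpy as np

# A tiny 8-bit grayscale image (values are illustrative).
img = np.array([[10,  20, 30],
                [40, 250, 60],
                [70,  80, 90]], dtype=np.uint8)

# Global filter: the same function is applied to every pixel,
# regardless of its position -- here, a simple inversion.
inverted = 255 - img

# Local filter: each output pixel depends on its 3x3 neighbourhood --
# here, a mean (box) blur computed with edge padding.
padded = np.pad(img.astype(float), 1, mode="edge")
blurred = np.empty_like(img, dtype=float)
for y in range(img.shape[0]):
    for x in range(img.shape[1]):
        blurred[y, x] = padded[y:y + 3, x:x + 3].mean()
blurred = blurred.astype(np.uint8)
```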
Global Filters
• Brightness and contrast control
• Histogram thresholding
• Histogram stretching or equalization
• Color corrections
• Hue-shifting and colorizing
• Inversions
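A sketch of two of the global filters listed above, in NumPy: a linear brightness/contrast point operation and histogram equalization via a lookup table. The function names and default parameters are illustrative, not taken from any particular library.

```python
import numpy as np

def adjust_brightness_contrast(img, brightness=0, contrast=1.0):
    """Global point operation: out = contrast * in + brightness, clipped to 8 bits."""
    out = contrast * img.astype(float) + brightness
    return np.clip(out, 0, 255).astype(np.uint8)

def equalize_histogram(img):
    """Spread the grey-level histogram so intensities cover the full 0-255 range.

    Assumes an 8-bit image that is not completely constant.
    """
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalise to [0, 1]
    lut = (cdf * 255).astype(np.uint8)                 # lookup table
    return lut[img]                                    # same mapping for every pixel
```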
Local Filters
• Sharpening
• Blurring
• Unsharp masking
• Edge and line detection
• Noise filters
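As an illustration of local (neighbourhood) filters, here is a sketch using NumPy and SciPy's ndimage module; the kernels and the unsharp-mask parameters are common textbook choices, not values taken from the slides.

```python
import numpy as np
from scipy import ndimage

# 3x3 kernels: each output pixel is a weighted sum of its neighbourhood.
blur_kernel = np.ones((3, 3)) / 9.0                      # box blur
sharpen_kernel = np.array([[ 0, -1,  0],
                           [-1,  5, -1],
                           [ 0, -1,  0]], dtype=float)   # sharpening

def apply_kernel(img, kernel):
    out = ndimage.convolve(img.astype(float), kernel, mode="nearest")
    return np.clip(out, 0, 255).astype(np.uint8)

def unsharp_mask(img, amount=1.0, sigma=1.5):
    """Add back a scaled difference between the image and a blurred copy."""
    blurred = ndimage.gaussian_filter(img.astype(float), sigma=sigma)
    out = img.astype(float) + amount * (img.astype(float) - blurred)
    return np.clip(out, 0, 255).astype(np.uint8)
```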
Local Filters
• edge and line detection filters suppress all parts of the image except the edges, i.e. the boundaries between two different regions
• edge detection is often used to recognize objects of interest in the image
(Figure: edges and lines detected in an image of toy blocks)
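Edge detection is typically implemented with gradient operators; a minimal sketch using SciPy's Sobel filters is shown below. The threshold value is an assumption and would be tuned per image.

```python
import numpy as np
from scipy import ndimage

def sobel_edges(img, threshold=100):
    """Return a binary edge map from the Sobel gradient magnitude."""
    gx = ndimage.sobel(img.astype(float), axis=1)  # horizontal gradient
    gy = ndimage.sobel(img.astype(float), axis=0)  # vertical gradient
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold                   # True where an edge is likely
```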
Analysis
• Image improvement
  • Eliminating noise (due to external effects or missing pixels) or increasing the contrast
• Pattern discovery and recognition
  • OCR – Optical Character Recognition
• Scene analysis and computer vision
  • Recognition and reconstruction of 3D models of the scene
  • An industrial robot that measures the relative sizes, shapes, positions, and colors of objects
Image Properties
• Color
  • Use a color histogram
• Texture
  • Surface structure
  • Use a gray-level representation
• Edge detection
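A colour histogram of the kind mentioned above can be computed directly with NumPy; this sketch assumes an 8-bit RGB array and uses a coarse 8×8×8 binning, which is an illustrative choice.

```python
import numpy as np

def color_histogram(rgb_img, bins_per_channel=8):
    """Coarse RGB histogram usable as a simple colour descriptor for an image."""
    pixels = rgb_img.reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins_per_channel,) * 3,
                             range=((0, 256),) * 3)
    return hist / hist.sum()   # normalise so histograms of different images compare
```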
Formatting and Conditioning
• Image Formatting
  • image formatting means capturing an image by bringing it into a digital form
• Conditioning
  • an observed image is composed of informative patterns modified by uninteresting random variations
  • these uninteresting features are either introduced into the image during the digitization process as noise, or form part of a background
  • conditioning suppresses, or normalizes, the uninteresting variations in the image, effectively highlighting the interesting parts of the image
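One common way to realise conditioning is to smooth away fine noise and subtract a slowly varying background estimate. The sketch below (NumPy + SciPy) is an illustration of that idea rather than the method assumed by the slides, and the sigma values are arbitrary.

```python
import numpy as np
from scipy import ndimage

def condition(img, noise_sigma=1.0, background_sigma=25.0):
    """Suppress uninteresting variation: fine noise and slow background trends."""
    img = img.astype(float)
    denoised = ndimage.gaussian_filter(img, sigma=noise_sigma)          # remove high-frequency noise
    background = ndimage.gaussian_filter(img, sigma=background_sigma)   # estimate slowly varying background
    normalized = denoised - background                                  # keep the informative pattern
    return np.clip(normalized + 128, 0, 255).astype(np.uint8)           # re-centre for display
```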
Labeling
• Informative patterns in an image have structure.
• Patterns are usually composed of adjacent pixels which share some property, so it can be inferred that they are part of the same structure (e.g., an edge).
• Edge detection techniques focus on identifying continuous runs of adjacent pixels which differ greatly in intensity or colour, because these are likely to mark boundaries between objects, or between an object and the background, and hence form an edge. After edge detection, many edges will have been identified; however, not all of them are significant.
• Thresholding filters out the insignificant edges; the remaining edges are labeled. More complex labeling operations may involve identifying and labeling shape primitives and corner finding.
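A minimal sketch of the thresholding-and-labeling step, assuming an edge-magnitude image as input (e.g., from the Sobel sketch earlier); scipy.ndimage.label assigns a distinct integer to each connected edge structure.

```python
import numpy as np
from scipy import ndimage

def label_edges(edge_magnitude, threshold=100):
    """Keep only significant edges, then label each connected edge structure."""
    significant = edge_magnitude > threshold     # thresholding step
    labels, count = ndimage.label(significant)   # labels 1..count, 0 = background
    return labels, count
```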
Grouping
• Grouping can turn edges into lines by determining that different edges belong to the same spatial event.
• The first three operations represent the image as a digital image data structure (pixel information); from the grouping operation onward, the data structure must also record the spatial events to which each pixel belongs. This information is stored in a logical data structure.
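Grouping and the accompanying logical data structure can be sketched as follows: connected edge pixels are treated as one spatial event, and a plain dictionary (an illustrative choice, not mandated by the slides) records which pixels belong to which event.

```python
import numpy as np
from scipy import ndimage

def group_pixels(edge_map):
    """Group connected edge pixels into spatial events.

    Returns a logical data structure mapping each event id to the list of
    (row, column) pixel coordinates that belong to it.
    """
    labels, count = ndimage.label(edge_map)
    events = {}
    for event_id in range(1, count + 1):
        ys, xs = np.nonzero(labels == event_id)
        events[event_id] = list(zip(ys.tolist(), xs.tolist()))
    return events
```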
Extracting
• Grouping only records the spatial event(s) to which each pixel belongs. Feature extraction involves generating a list of properties for each set of pixels in a spatial event.
• These may include the set's centroid, area, orientation, spatial moments, grey-tone moments, spatial grey-tone moments, circumscribing circle, inscribing circle, etc. Additional properties depend on whether the group is considered a region or an arc: for a region, the number of holes might be useful; for an arc, the average curvature might be useful to know.
• Feature extraction can also describe the spatial relationships between different groups. Do they touch? Does one occlude another? Where are they in relation to each other?
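A sketch of feature extraction for a single region, assuming a boolean mask of its pixels; centroid, area, and an orientation derived from second-order central moments are computed with NumPy. The returned dictionary layout is illustrative.

```python
import numpy as np

def region_features(mask):
    """Basic properties of one pixel group, given as a boolean region mask."""
    ys, xs = np.nonzero(mask)
    area = len(ys)
    yc, xc = ys.mean(), xs.mean()                 # centroid (row, column)
    # Orientation of the region's principal axis from central moments.
    mu20 = ((xs - xc) ** 2).mean()
    mu02 = ((ys - yc) ** 2).mean()
    mu11 = ((xs - xc) * (ys - yc)).mean()
    orientation = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return {"area": area, "centroid": (yc, xc), "orientation": orientation}
```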
Matching
• Once the pixels in the image have been grouped into objects and the relationships between the different objects have been determined, the final step is to recognize the objects in the image.
• Matching involves comparing each object in the image with previously stored models and determining the best match (e.g., template matching).
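Template matching can be sketched with normalised cross-correlation; the code below assumes the candidate region and each stored template have already been cropped to the same size, and the function names are illustrative.

```python
import numpy as np

def match_score(region, template):
    """Normalised cross-correlation; close to 1 for a good match."""
    r = region.astype(float) - region.mean()
    t = template.astype(float) - template.mean()
    denom = np.sqrt((r ** 2).sum() * (t ** 2).sum())
    return (r * t).sum() / denom if denom else 0.0

def best_match(region, templates):
    """Compare a detected object against each stored model (a dict of name -> array)."""
    return max(templates, key=lambda name: match_score(region, templates[name]))
```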
Transmission
• Transmission through a network
• Formats
  • Raw digital image
  • Compressed digital image
  • Symbolic representation
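To illustrate the difference between a raw and a compressed digital image, this sketch (assuming an 8-bit RGB NumPy array and the Pillow library) compares the raw byte count with PNG (lossless) and JPEG (lossy) encodings; the JPEG quality setting is an arbitrary example.

```python
import io
import numpy as np
from PIL import Image

def compare_sizes(rgb_array):
    """Raw pixel data versus compressed encodings of the same image."""
    raw_bytes = rgb_array.nbytes                      # uncompressed size
    img = Image.fromarray(rgb_array)

    png_buf = io.BytesIO()
    img.save(png_buf, format="PNG")                   # lossless compression
    jpeg_buf = io.BytesIO()
    img.save(jpeg_buf, format="JPEG", quality=85)     # lossy compression

    return {"raw": raw_bytes,
            "png": png_buf.getbuffer().nbytes,
            "jpeg": jpeg_buf.getbuffer().nbytes}
```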
Editing Images
• editing or retouching an image involves selecting a region of the digital image and processing it with some special effect
• image compositing combines components of two or more images into a single image
• painting (or rotoscoping) an image means editing it by hand with graphic tools that alter color and detail
Editing Images
• compositing images involves combining separate image layers into one image
• layers may be moved and arranged
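Layer compositing reduces to per-pixel alpha blending; the sketch below assumes 8-bit RGB background and foreground arrays, a float alpha mask in [0, 1], and that the foreground fits inside the background at the given offset.

```python
import numpy as np

def composite(background, foreground, alpha, offset=(0, 0)):
    """Place a foreground layer over a background using per-pixel alpha blending.

    `alpha` is a 2-D float array in [0, 1] with the foreground's height and width;
    `offset` positions the layer within the background as (row, column).
    """
    out = background.astype(float).copy()
    r, c = offset
    h, w = foreground.shape[:2]
    region = out[r:r + h, c:c + w]
    out[r:r + h, c:c + w] = (alpha[..., None] * foreground
                             + (1 - alpha[..., None]) * region)
    return np.clip(out, 0, 255).astype(np.uint8)
```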