
Two-Dimensional Wavelets


Presentation Transcript


  1. Two-Dimensional Wavelets ECE 802 Spring 2010

  2. Two-Dimensional Wavelets • For image processing applications we need wavelets that are two-dimensional. • This problem reduces to designing 2D filters. • We will focus on a particular class of 2D filters: separable filters, which can be designed directly from their 1D counterparts.

  3. 2D Scaling Functions • The theories of multiresolution analysis and wavelets can be generalized to higher dimensions. • In practice the usual choice for a two-dimensional scaling function or wavelet is a product of two one-dimensional functions. For example, φ(x, y) = φ(x) φ(y), and the dilation equation assumes the form φ(x, y) = 2 Σ_{k,l} h(k) h(l) φ(2x - k, 2y - l).

  4. 2D Wavelet Functions • Since both one-dimensional factors satisfy a dilation equation, we can construct the wavelets analogously. • However, instead of 1 wavelet function, we now have 3 wavelet functions: ψ^(I)(x, y) = φ(x) ψ(y), ψ^(II)(x, y) = ψ(x) φ(y), ψ^(III)(x, y) = ψ(x) ψ(y).

  5. The corresponding dilation equations are ψ^(i)(x, y) = 2 Σ_{k,l} g^(i)(k, l) φ(2x - k, 2y - l), i = I, II, III, where g^(I)(k, l) = h(k) g(l), g^(II)(k, l) = g(k) h(l), g^(III)(k, l) = g(k) g(l).
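
A minimal sketch of building these separable 2D filters as outer products of the 1D lowpass filter h and highpass filter g, assuming PyWavelets is available (it is used here only to fetch the 1D Haar filters; any orthogonal wavelet works the same way):

```python
import numpy as np
import pywt

# 1D analysis filters of the Haar wavelet.
w = pywt.Wavelet('haar')
h = np.array(w.dec_lo)   # lowpass h(k)
g = np.array(w.dec_hi)   # highpass g(k)

# Separable 2D filters as outer products, matching the definitions above:
#   h(k)h(l)              -> 2D scaling (approximation) filter
#   g_I(k,l)   = h(k)g(l)
#   g_II(k,l)  = g(k)h(l)
#   g_III(k,l) = g(k)g(l)
h2    = np.outer(h, h)
g_I   = np.outer(h, g)
g_II  = np.outer(g, h)
g_III = np.outer(g, g)

print(h2, g_I, g_II, g_III, sep='\n\n')
```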

  6. Example: 2D Haar • 2D Haar scaling function: φ(x, y) = 1 for (x, y) in [0, 1)², and 0 otherwise. • 2D Haar wavelets: the three products φ(x) ψ(y), ψ(x) φ(y), ψ(x) ψ(y), where ψ is the 1D Haar wavelet (+1 on [0, 1/2), -1 on [1/2, 1)).

  7. Subspaces • V²_j = V_j ⊗ V_j is the two-dimensional approximation subspace at scale j. • As j increases, these subspaces grow and their union is dense in L²(R²).

  8. 2D wavelet decomposition • The approximation and detail coefficients are computed in a similar way, by separable filtering and downsampling (for orthogonal filters): a_{j-1}[m, n] = Σ_{k,l} h(k - 2m) h(l - 2n) a_j[k, l] and d^(i)_{j-1}[m, n] = Σ_{k,l} g^(i)(k - 2m, l - 2n) a_j[k, l], i = I, II, III. • The reconstruction is: a_j[k, l] = Σ_{m,n} h(k - 2m) h(l - 2n) a_{j-1}[m, n] + Σ_i Σ_{m,n} g^(i)(k - 2m, l - 2n) d^(i)_{j-1}[m, n].
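
A minimal sketch of one level of this decomposition and its reconstruction, assuming PyWavelets (dwt2/idwt2 perform the separable filtering and downsampling); the random array stands in for a real grayscale image:

```python
import numpy as np
import pywt

image = np.random.rand(256, 256)               # stand-in for a grayscale image

# One decomposition level: approximation plus three detail subbands.
cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')    # horizontal, vertical, diagonal details

# Reconstruction from the four subbands (perfect up to numerical precision).
reconstructed = pywt.idwt2((cA, (cH, cV, cD)), 'haar')
print(np.allclose(image, reconstructed))       # True
```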

  9. Filterbank Structure: Decomposition

  10. Filterbank Structure: Reconstruction

  11. The Three Frequency Channels We can interpret the decomposition as a breakdown of the signal into spatially oriented frequency channels. (Figures: arrangement of the wavelet representation; decomposition of the frequency support.)

  12. Wavelet Decomposition: Example (Lena)

  13. Wavelet Example 2

  14. Applications: Edge Detection in Images

  15. Application: Image Denoising Using Wavelets (figures: noisy image and denoised image)

  16. Denoising Images • Denoising Daubechies’ face: • Transform the image to the wavelet domain using Coiflets with three vanishing moments • Apply a threshold at two standard deviations • Inverse-transform the image.

  17. Image Denoising Using Wavelets • Calculate the DWT of the image. • Threshold the wavelet coefficients; the threshold may be universal or subband adaptive. • Compute the IDWT to get the denoised estimate. • Soft thresholding is used in the thresholding methods below because it yields visually more pleasing images.
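
A minimal sketch of this pipeline with soft thresholding, assuming PyWavelets; the wavelet, the number of levels, and the default "two standard deviations" placeholder threshold (echoing the recipe on the previous slide) are illustrative choices:

```python
import numpy as np
import pywt

def denoise(image, wavelet='coif3', level=3, threshold=None):
    """DWT -> soft-threshold the detail coefficients -> IDWT (sketch)."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    if threshold is None:
        # Placeholder: two standard deviations of the finest diagonal subband.
        # See VisuShrink / SUREShrink / BayesShrink below for principled choices.
        threshold = 2.0 * np.std(coeffs[-1][-1])
    denoised = [coeffs[0]]                       # keep the approximation untouched
    for cH, cV, cD in coeffs[1:]:                # threshold every detail subband
        denoised.append(tuple(pywt.threshold(c, threshold, mode='soft')
                              for c in (cH, cV, cD)))
    return pywt.waverec2(denoised, wavelet)

noisy = np.random.rand(256, 256)                 # stand-in for a noisy image
estimate = denoise(noisy)
```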

  18. VisuShrink • Apply Donoho’s universal threshold T = σ √(2 ln M), where M is the number of pixels and σ is the noise standard deviation. • The threshold is usually large, so the result tends to be oversmoothed.
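
A sketch of computing this threshold; estimating σ from the median absolute deviation of the finest diagonal subband is a common convention assumed here, not stated on the slide:

```python
import numpy as np
import pywt

def visushrink_threshold(image, wavelet='db4', level=3):
    """Donoho's universal threshold T = sigma * sqrt(2 ln M) (sketch)."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    cD1 = coeffs[-1][-1]                         # finest-scale diagonal details
    sigma = np.median(np.abs(cD1)) / 0.6745      # robust noise estimate (MAD)
    M = image.size                               # number of pixels
    return sigma * np.sqrt(2.0 * np.log(M))
```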

  19. SUREShrink • Subband adaptive: a different threshold is calculated for each detail subband. • Choose the threshold that minimizes Stein’s unbiased estimate of the risk (SURE); for coefficients normalized to unit noise variance, SURE(t) = M - 2 · #{i : |x_i| ≤ t} + Σ_i min(|x_i|, t)². • This optimization is straightforward: order the wavelet coefficients by magnitude and choose as the threshold the coefficient magnitude that minimizes the risk.
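
A sketch of this search, assuming the subband coefficients have already been normalized to unit noise variance; the risk is evaluated at every coefficient magnitude and the minimizer is returned:

```python
import numpy as np

def sure_threshold(subband):
    """Threshold minimizing Stein's unbiased risk estimate for soft thresholding."""
    x = np.sort(np.abs(np.ravel(subband)))       # candidate thresholds
    M = x.size
    best_t, best_risk = 0.0, np.inf
    for t in x:
        risk = M - 2 * np.sum(x <= t) + np.sum(np.minimum(x, t) ** 2)
        if risk < best_risk:
            best_t, best_risk = t, risk
    return best_t
```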

  20. BayesShrink • Adaptive, data-driven thresholding method. • Assume that the wavelet coefficients in each subband are distributed according to a Generalized Gaussian Distribution (GGD). • Find the threshold that minimizes the Bayesian risk.

  21. GGD • The generalized Gaussian density has the form GG_{σ,β}(x) = C(σ, β) exp{-[α(σ, β) |x|]^β}, with shape parameter β and standard deviation σ.

  22. BayesShrink • Choose the threshold that will minimize the Bayesian risk. • There is no closed form for the threshold.

  23. BayesShrink Empirical Threshold • In practice an empirical threshold that is very close to the optimum is used: T_B = σ̂²_n / σ̂_X, where σ̂²_n is the estimated noise variance and σ̂_X is the estimated standard deviation of the noise-free signal in the subband. • The threshold adapts to the SNR in each subband.
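
A sketch of these per-subband thresholds; estimating the noise standard deviation from the finest diagonal subband (MAD) and the signal variance as var(Y) - σ̂²_n are standard choices assumed here:

```python
import numpy as np
import pywt

def bayesshrink_thresholds(image, wavelet='db4', level=3):
    """Per-subband BayesShrink thresholds T = sigma_n^2 / sigma_X (sketch)."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # noise std estimate
    thresholds = []
    for level_details in coeffs[1:]:
        for band in level_details:
            # Observed variance = signal variance + noise variance.
            sigma_x = np.sqrt(max(np.var(band) - sigma_n ** 2, 1e-12))
            thresholds.append(sigma_n ** 2 / sigma_x)      # huge T kills the subband
    return thresholds
```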

  24. Comparisons

  25. Image Enhancement • Image contrast enhancement with wavelets is especially important in medical imaging. • Make the small coefficients very small and the large coefficients very large. • Apply a nonlinear mapping function to the detail coefficients.
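
One possible illustration of such a mapping (a piecewise-linear shrink-and-gain rule applied per detail subband); the cutoff and gain values below are assumptions for the sketch, not the function used in the slides' experiments:

```python
import numpy as np
import pywt

def enhance(image, wavelet='db4', level=3, cutoff=0.1, gain=2.0):
    """Shrink small detail coefficients, amplify large ones, reconstruct (sketch)."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    def remap(c):
        t = cutoff * np.max(np.abs(c))           # subband-relative cutoff
        return np.where(np.abs(c) < t, 0.2 * c, gain * c)
    enhanced = [coeffs[0]] + [tuple(remap(c) for c in d) for d in coeffs[1:]]
    return pywt.waverec2(enhanced, wavelet)
```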

  26. Experiments

  27. Denoising and Enhancement • Apply DWT • Shrink transform coefficients in finer scales to reduce the effect of noise • Emphasize features within a certain range using a nonlinear mapping function • Perform IDWT to reconstruct the image.

  28. Examples (figures: original image, denoised image, and denoising with enhancement)

  29. Edge Detection • Edges correspond to singularities in the image and are related to local maxima of the wavelet coefficients. • For edge detection, a smoothing function (such as a spline) and two wavelet functions are defined; the wavelet functions are usually the first and second order derivatives of the smoothing function. • A simple example: keep the detail coefficients and discard the approximation coefficients; edges correspond to large coefficients.
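
A minimal sketch of the keep-details/discard-approximation approach from the last bullet (not the derivative-of-a-smoothing-function construction), assuming PyWavelets:

```python
import numpy as np
import pywt

def wavelet_edges(image, wavelet='haar', level=1):
    """Zero the approximation, keep the details, reconstruct (sketch)."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])         # discard approximation coefficients
    edges = pywt.waverec2(coeffs, wavelet)
    return np.abs(edges)                         # large values indicate edges
```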

  30. Applications • Computer vision. • Image processing in the human visual system has a complicated hierarchical structure that involves several layers of processing. • At each processing level, the retinal system provides a visual representation that scales progressively in a geometrical manner. • Intensity changes occur at different scales in an image, so their optimal detection requires operators of different sizes. • Therefore, a vision filter should have two characteristics: it should be a differential operator, and it should be capable of being tuned to act at any desired scale. • Wavelets are ideal for this.

  31. FBI Fingerprint Compression • A single fingerprint image is about 700,000 pixels and requires about 0.6 MB.

  32. Image Compression and Wavelets

  33. Why Compression? • Uncompressed images take too much space, require larger bandwidth for transmission, and take longer to transmit. • Examples: 512x512 grayscale image: 262 KB; 512x512 color image: 786 KB. • The common principle behind compression is to reduce redundancy: spatial and spectral redundancy.

  34. Types of Compression • Lossy vs. lossless: lossy compression discards information and achieves higher compression ratios; lossless compression can reconstruct the original image exactly. • Predictive vs. transform coding.

  35. Components of a Coder • Source encoder: transforms the image using a linear transform (DFT, DCT, DWT). • Quantizer: scalar vs. vector (the lossy step). • Entropy encoder: compresses the quantized values (lossless).

  36. Original JPEG • Use the DCT to transform the image. (The DCT is closely related to the DFT of a symmetrically extended signal and produces real coefficients.)

  37. Original JPEG • Transform each 8x8 block using the DCT. • Since adjacent pixels are highly correlated, most of the energy is concentrated in the low-frequency coefficients. • Quantize the DCT coefficients (uniform quantization) and then entropy-encode them for further compression.
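
A minimal sketch of the block-DCT and uniform quantization steps; a single illustrative step size is used instead of the JPEG quantization matrix, and the entropy-coding stage is omitted:

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """2D type-II DCT, computed separably (rows, then columns)."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(block):
    return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

image = np.random.rand(256, 256) * 255           # stand-in grayscale image
q_step = 16.0                                    # illustrative uniform quantizer step

decoded = np.zeros_like(image)
for i in range(0, image.shape[0], 8):
    for j in range(0, image.shape[1], 8):
        coef = dct2(image[i:i+8, j:j+8])
        quantized = np.round(coef / q_step)      # uniform quantization
        decoded[i:i+8, j:j+8] = idct2(quantized * q_step)
```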

  38. Disadvantages of DCT: Why Wavelets? • DCT-based JPEG works on blocks of the image, so correlation across blocks is not exploited, and block boundaries are noticeable in some cases (blocking artifacts at low bit rates). • The blocks can be overlapped, but this is computationally expensive.

  39. Was JPEG not good enough? • JPEG is based on the DCT, which splits the spectrum into equal-width subbands. • At low bit rates, there is a sharp degradation in image quality. • (Figure: image compressed at a 43:1 compression ratio.)

  40. Why Wavelets? • No need to block the image • More robust under transmission errors • Facilitates progressive transmission of the image (Scalability)

  41. Features of JPEG2000 • Multiple resolution: decomposes the image into a multiple-resolution representation. • Progressive transmission: by pixel accuracy and by resolution, referred to as progressive decoding and signal-to-noise ratio (SNR) scalability; after only a smaller part of the whole file has been received, the viewer can see a lower-quality version of the final picture. • Lossless and lossy compression. • Random code-stream access and processing: JPEG 2000 supports spatial random access or region-of-interest access at varying degrees of granularity, so it is possible to store different parts of the same picture at different quality. • Error resilience: JPEG 2000 is robust to bit errors introduced by noisy communication channels, due to the coding of data in relatively small independent blocks.

  42. JPEG2000 Basics • (Figure: general block diagram of the JPEG 2000 (a) encoder and (b) decoder.)

  43. Wavelets in Image Coding • Orthogonal vs. biorthogonal: JPEG 2000 uses biorthogonal filters for both lossless and lossy compression. • Lossy: Cohen-Daubechies-Feauveau (CDF) 9/7 filters. • Lossless: CDF 5/3 filters (integer-to-integer). • The filters are symmetric/anti-symmetric and nearly orthogonal. • Symmetric extension of the input data at the boundaries.

  44. Steps in JPEG2000 • Tiling: the image is split into tiles, rectangular regions of the image that are transformed and encoded separately. Tiles can be any size. Dividing the image into tiles means the decoder needs less memory and can opt to decode only selected tiles to achieve a partial decoding of the image; however, using many tiles can create a blocking effect. • Wavelet transform: either the CDF 9/7 or the CDF 5/3 biorthogonal wavelet transform. • Quantization: scalar quantization. • Coding: the quantized subbands are split into precincts, rectangular regions in the wavelet domain, selected so that the coefficients within them across the subbands form approximately spatial blocks in the image domain. Precincts are split further into code blocks; each code block is located in a single subband, and code blocks have equal sizes. The encoder encodes the bits of all quantized coefficients of a code block, starting with the most significant bits and progressing to less significant bits, using the EBCOT scheme.
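
A minimal sketch of the transform and quantization steps only (tiling, precincts, code blocks, and EBCOT coding are omitted); PyWavelets' 'bior4.4' filter is used as a stand-in for CDF 9/7, and the quantizer step is an arbitrary illustrative value:

```python
import numpy as np
import pywt

tile = np.random.rand(512, 512)                  # stand-in for one image tile

# Multi-level biorthogonal wavelet transform ('bior4.4' ~ CDF 9/7).
coeffs = pywt.wavedec2(tile, 'bior4.4', level=3)

# Simple scalar quantization of all subbands (JPEG 2000 uses a dead-zone quantizer).
arr, slices = pywt.coeffs_to_array(coeffs)
q_step = 0.05
dequantized = np.round(arr / q_step) * q_step

reconstructed = pywt.waverec2(
    pywt.array_to_coeffs(dequantized, slices, output_format='wavedec2'),
    'bior4.4')
```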

  45. DWT for Image Compression • Image decomposition into subbands (three-level example: LL3, HL3, LH3, HH3, HL2, LH2, HH2, HL1, LH1, HH1). • Parent and children coefficients. • Descendants: corresponding coefficients at finer scales. • Ancestors: corresponding coefficients at coarser scales. • Parent-children dependencies of subbands: the arrow points from the subband of the parents to the subband of the children.

  46. DWT for Image Compression • Image decomposition (same subband layout as above). • Feature 1: the energy distribution is concentrated in the low frequencies. • Feature 2: spatial self-similarity across subbands. • (Figure: the scanning order of the subbands for encoding the significance map.)

  47. DWT for Image Compression • Differences from the DCT technique. • In conventional transform coding, an anomaly (edge) produces many nonzero coefficients, each carrying insignificant energy; the coder allocates too many bits to the “trend”, leaving few bits for the “anomalies”, which at very low bit rates shows up as block artifacts. • With the DWT, both trend and anomaly information are available. The major difficulty is that the fine-detail coefficients associated with anomalies are the most numerous, so the problem becomes how to represent their position information efficiently.

  48. Embedded Zerotree Wavelet Compression (EZW) • The zerotree is based on the hypothesis that if a wavelet coefficient at a coarse scale is insignificant, then all wavelet coefficients of the same orientation at the same spatial location at finer scales are likely to be insignificant as well. • Natural images in general have a lowpass spectrum: when an image is wavelet transformed, the energy in the subbands decreases as we move to the finer levels, so the wavelet coefficients will, on average, be smaller at those levels.
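
A tiny sketch of the zerotree-root test this hypothesis suggests: if a coefficient and all of its descendants are insignificant with respect to the current threshold, the whole tree can be coded with a single symbol. The function and argument names are illustrative:

```python
import numpy as np

def is_zerotree_root(coefficient, descendants, threshold):
    """True if the coefficient and all its finer-scale descendants are insignificant."""
    if abs(coefficient) >= threshold:
        return False
    return bool(np.all(np.abs(np.ravel(descendants)) < threshold))

# Example: a small parent coefficient with small descendants is a zerotree root.
print(is_zerotree_root(0.4, [[0.3, -0.1], [0.2, 0.05]], threshold=0.5))   # True
```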
