
Entropy and some applications in image processing



  1. Entropy and some applications in image processing Neucimar J. Leite Institute of Computing neucimar@ic.unicamp.br

  2. Outline • Introduction • Intuitive understanding • Entropy as global information • Entropy as local information • edge detection, texture analysis • Entropy as minimization/maximization constraints • global thresholding • deconvolution problem

  3. Information Entropy (Shannon's entropy) An information theory concept closely related to the following question: • What is the minimum amount of data needed to represent a given information content? • For images (compression problems): how little data is sufficient to completely describe an image without (much) loss of information?

  4. Intuitive understanding: • relates the amount of uncertainty about an event to a given probability distribution. Event: randomly draw out a ball. [Illustration: sets of colored balls ranging from low uncertainty, to high uncertainty (entropy = max), to no uncertainty (entropy = min)]

  5. Self-information: the number of units of information needed to represent an event E: I(E) = -log2 P(E), i.e., self-information is inversely related to the probability of E. Example 1: Event: a coin flip = { heads, tails }. Probability: P(heads) = P(tails) = 1/2, so I(heads) = I(tails) = -log2(1/2) = 1 bit (0 → heads, 1 → tails).

  6. Example 2: the amount of information conveyed by an event E of probability P(E) is I(E) = -log2 P(E). Entropy: the average information, H = -Σi pi log2(pi).

  7. Coding the balls (3 bits/ball): an equal-length binary code 000, 001, 010, 011, 100, 101, 110, 111. Degree of information compression: for independent, equally probable data the entropy is H = -Σi=1..8 (1/8) log2(1/8) = 3 bits/ball.

  8. Medium uncertainty (5 red, 1 black, 1 blue, 1 green ball), 2-bit code 00, 01, 10, 11: H = -( 5/8 log2(5/8) + 1/8 log2(1/8) + 1/8 log2(1/8) + 1/8 log2(1/8) ) = 1.54 bits/ball. No uncertainty (a single color): H = -1 log2(1) = 0.

  9. For the medium-uncertainty set, the equal-length code 00, 01, 10, 11 spends 2 bits/ball > H = 1.54 bits/ball → code redundancy of about 22%. → We need an encoding method that eliminates this code redundancy.
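
As a quick check on these numbers, a minimal Python sketch (assuming the ball probabilities above) that computes the entropy and the redundancy of the 2-bit fixed-length code:

    import numpy as np

    probs = np.array([5/8, 1/8, 1/8, 1/8])    # red, black, blue, green
    H = -np.sum(probs * np.log2(probs))       # ~1.55 bits/ball (1.54 on the slide, truncated)
    fixed_len = 2.0                           # bits/ball of the equal-length code
    redundancy = 1 - H / fixed_len            # ~0.23, i.e., the ≈22% quoted above
    print(H, redundancy)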

  10. The Huffman encoding:
      Ball    Probability   Reduction 1   Reduction 2
      red     5/8
      black   1/8
      blue    1/8
      green   1/8

  11.
      Ball    Probability   Reduction 1   Reduction 2
      red     5/8           5/8
      black   1/8           2/8
      blue    1/8           1/8
      green   1/8

  12.
      Ball    Probability   Reduction 1   Reduction 2
      red     5/8           5/8           5/8
      black   1/8           2/8           3/8
      blue    1/8           1/8
      green   1/8

  13.
      Ball    Probability   Reduction 1   Reduction 2
      red     5/8           5/8           5/8 (1)
      black   1/8           2/8           3/8 (0)
      blue    1/8           1/8
      green   1/8

  14.
      Ball    Probability   Reduction 1   Reduction 2
      red     5/8           5/8 (1)       5/8 (1)
      black   1/8           2/8 (01)      3/8 (0)
      blue    1/8           1/8 (00)
      green   1/8

  15.
      Ball    Probability   Reduction 1   Reduction 2
      red     5/8 (1)       5/8 (1)       5/8 (1)
      black   1/8 (00)      2/8 (01)      3/8 (0)
      blue    1/8 (011)     1/8 (00)
      green   1/8 (010)

      Variable-length code (18.6%): red → 1, black → 00, blue → 011, green → 010
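
A minimal sketch of how such a code could be built programmatically, using Python's heapq (the tie-breaking among the three 1/8 symbols may yield a different, but equally optimal, assignment than the table above):

    import heapq

    def huffman_code(probabilities):
        """Build a Huffman code from a dict {symbol: probability}."""
        # Heap entries: (probability, tie-breaker, {symbol: partial code})
        heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probabilities.items())]
        heapq.heapify(heap)
        counter = len(heap)
        while len(heap) > 1:
            p1, _, codes1 = heapq.heappop(heap)   # two least probable nodes
            p2, _, codes2 = heapq.heappop(heap)
            merged = {s: "0" + c for s, c in codes1.items()}
            merged.update({s: "1" + c for s, c in codes2.items()})
            heapq.heappush(heap, (p1 + p2, counter, merged))
            counter += 1
        return heap[0][2]

    balls = {"red": 5/8, "black": 1/8, "blue": 1/8, "green": 1/8}
    code = huffman_code(balls)
    avg_len = sum(balls[s] * len(code[s]) for s in balls)   # 1.625 bits/ball
    print(code, avg_len)

The average length, 1.625 bits/ball, is what any optimal prefix code achieves for this distribution, against the entropy bound of about 1.55 bits/ball.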

  16. 512 x 512 8-bit image: Entropy: 4.11 bits/pixel. After Huffman encoding: [result shown on the slide]. Variable-length coding does not take advantage of the high pixel-to-pixel correlation of images: a pixel can be predicted from the values of its neighbors → more redundancy → lower entropy (bits/pixel)

  17. Two example images. First image: Entropy: 7.45; after Huffman encoding: 1.07. Second image: Entropy: 7.35; after Huffman encoding: 1.08.

  18. Coding the interpixel differences → highlighting redundancies. First image: Entropy: 4.73 instead of 7.45; after Huffman encoding: [value on the slide] instead of 1.07. Second image: Entropy: 5.97 instead of 7.35; after Huffman encoding: [value on the slide] instead of 1.08.
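
A minimal sketch of this comparison in Python/NumPy, assuming an 8-bit grayscale image in a 2-D array (the random stand-in below has no pixel-to-pixel correlation, so unlike the natural images on the slides its difference entropy will not drop):

    import numpy as np

    def entropy_bits(values):
        """First-order entropy (bits/sample) of an array of integer values."""
        _, counts = np.unique(values, return_counts=True)
        p = counts / counts.sum()
        return float(-np.sum(p * np.log2(p)))

    img = np.random.randint(0, 256, (512, 512), dtype=np.uint8)   # stand-in image

    # Horizontal interpixel differences; the first column is kept as a seed
    diff = np.empty(img.shape, dtype=np.int16)
    diff[:, 0] = img[:, 0]
    diff[:, 1:] = img[:, 1:].astype(np.int16) - img[:, :-1].astype(np.int16)

    print("entropy of the pixels     :", entropy_bits(img))
    print("entropy of the differences:", entropy_bits(diff))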

  19. Entropy as local information: the edge detection example

  20. Edge detection examples:

  21. Entropy-based edge detection • Low entropy values → low frequencies → uniform image regions • High entropy values → high frequencies → image edges

  22. Binary entropy function: H(p) = -p log2(p) - (1 - p) log2(1 - p) [plot of Entropy H against p, 0 ≤ p ≤ 1: H = 0 at p = 0 and p = 1, maximum H = 1.0 at p = 0.5]

  23.–27. [binary entropy function plot: Entropy H versus p, 0 ≤ p ≤ 1]

  28. Binary entropy function: Isotropic edge detection

  29. H in a 3x3 neighborhood:

  30. 5x5 neighborhood:

  31. 7x7 neighborhood:

  32. 9x9 neighborhood:
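
A sketch of the local measure behind slides 29–32: the entropy of the gray-level histogram inside an n x n window centered at each pixel (plain NumPy, assuming an 8-bit grayscale array; a library routine such as scikit-image's skimage.filters.rank.entropy provides a similar, faster filter):

    import numpy as np

    def local_entropy(img, n=3):
        """Entropy (bits) of the gray-level distribution in the n x n
        neighborhood of each pixel; borders handled by edge padding."""
        pad = n // 2
        padded = np.pad(img, pad, mode="edge")
        out = np.zeros(img.shape, dtype=np.float64)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                window = padded[i:i + n, j:j + n]
                _, counts = np.unique(window, return_counts=True)
                p = counts / counts.sum()
                out[i, j] = -np.sum(p * np.log2(p))
        return out

    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in image
    edge_map = local_entropy(img, n=3)   # high values ~ edges, low values ~ uniform regions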

  33. Texture Analysis • Similarity grouping based on brightness, colors, slopes, sizes, etc. • The perceived patterns of lightness, directionality, coarseness, regularity, etc. can be used to describe and segment an image

  34. Texture description: statistical approach • Characterizes textures as smooth, coarse, periodic, etc. • Based on the intensity histogram → probability density function. Descriptor examples: • Mean, a measure of average intensity: m = Σi zi p(zi), where zi is a random variable denoting gray levels and p(zi) is the intensity histogram in a region

  35. Other moments of different orders, e.g.: • Standard deviation, a measure of average contrast: σ = sqrt( Σi (zi - m)² p(zi) ) • Entropy, a measure of randomness: e = -Σi p(zi) log2 p(zi)
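
A small Python sketch of these histogram-based descriptors, assuming an 8-bit grayscale region as a NumPy array:

    import numpy as np

    def texture_descriptors(region):
        """Mean, standard deviation and entropy of an 8-bit grayscale region."""
        hist = np.bincount(region.ravel(), minlength=256).astype(np.float64)
        p = hist / hist.sum()                        # p(z_i)
        z = np.arange(256)
        mean = np.sum(z * p)                         # average intensity
        std = np.sqrt(np.sum((z - mean) ** 2 * p))   # average contrast
        nz = p[p > 0]
        entropy = -np.sum(nz * np.log2(nz))          # randomness
        return mean, std, entropy

    region = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in region
    print(texture_descriptors(region))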

  36. [Example textures: smooth, coarse, periodic]

  37. Descriptors and segmentation: ?

  38. Gray-level co-occurrence matrix: Haralick's descriptors • Conveys information about the positions of pixels having similar gray level values.

      Example image (gray levels):
          2 1 3 3 2 1
          3 2 2 2 1 1
          3 2 2 1 1 3
          1 3 1 2 1 1 3

      Co-occurrence matrix Md(a,b), d = 1:
              0  1  2  3  4
          0   0  0  0  0  0
          1   0  4  2  1  0
          2   0  3  3  2  0
          3   0  1  2  3  0
          4   0  0  0  0  0

  39. Md(i,j) = the probability that a pixel with gray level i will have a pixel with level j a distance of d pixels away in a given direction. For the entropy descriptor H: large empty spaces in M → little information content; cluttered areas → large information content. (Example: d = 2, horizontal direction)
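
A NumPy sketch of such a horizontal co-occurrence matrix and its entropy descriptor (a plain implementation; skimage.feature.graycomatrix is a library alternative):

    import numpy as np

    def cooccurrence(img, d=1, levels=256):
        """Co-occurrence matrix of gray-level pairs (i, j) that are d pixels
        apart in the horizontal direction, normalized to probabilities."""
        M = np.zeros((levels, levels), dtype=np.float64)
        left = img[:, :-d].ravel()
        right = img[:, d:].ravel()
        np.add.at(M, (left, right), 1)       # count every (i, j) pair
        return M / M.sum()

    def glcm_entropy(M):
        """Entropy descriptor H of a normalized co-occurrence matrix."""
        p = M[M > 0]
        return float(-np.sum(p * np.log2(p)))

    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in texture
    M = cooccurrence(img, d=2)               # d = 2, horizontal direction
    print(glcm_entropy(M))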

  40. Obviously, more complex texture analysis based on statistical descriptors should consider combining information related to image scale, moments, contrast, homogeneity, directionality, etc.

  41. Entropy as minimization/maximization constraints

  42. Global thresholding examples [illustration: histogram peaks, mean]

  43. For images with levels 0-255, let pi be the normalized histogram. The probability that a given pixel will have a value less than or equal to t is P(t) = Σi=0..t pi. Now considering: Class A: the pixels with levels {0, ..., t}, distributed as pi/P(t); Class B: the pixels with levels {t+1, ..., 255}, distributed as pi/(1 - P(t))

  44. The optimal threshold is the value of t that maximizes H(t) = HA(t) + HB(t), where HA(t) = -Σi=0..t (pi/P(t)) log2(pi/P(t)) and HB(t) = -Σi=t+1..255 (pi/(1 - P(t))) log2(pi/(1 - P(t)))
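
A minimal sketch of this maximum-entropy threshold search, assuming an 8-bit grayscale image in a NumPy array:

    import numpy as np

    def max_entropy_threshold(img):
        """Threshold t in [0, 255] that maximizes H_A(t) + H_B(t)."""
        hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
        p = hist / hist.sum()

        def class_entropy(probs):
            probs = probs[probs > 0]
            if probs.size == 0:
                return 0.0
            q = probs / probs.sum()              # renormalize within the class
            return float(-np.sum(q * np.log2(q)))

        best_t, best_h = 0, -np.inf
        for t in range(256):
            h = class_entropy(p[:t + 1]) + class_entropy(p[t + 1:])
            if h > best_h:
                best_t, best_h = t, h
        return best_t

    img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)   # stand-in image
    t = max_entropy_threshold(img)
    binary = img > t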

  45. Examples:

  46. Entropy as a fuzziness measure. In fuzzy set theory, an element x belongs to a set S with a certain degree px defined by a membership function px(x). Example of a membership function for a given threshold t: px(x) gives the degree to which x belongs to the object or to the background, characterized by their respective gray-level averages.

  47. How can the degree of fuzziness be measured? Example: t = 0 for a binary image → fuzziness = 0

  48. Using Shannon's function (for two classes), S(p) = -p log2(p) - (1 - p) log2(1 - p), the entropy of an entire fuzzy set of dimension MxN is E(t) = (1/MN) Σx S(px(x)), and for segmentation purposes the threshold t is chosen so that E(t) is minimum → t minimizes fuzziness
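
A sketch of this fuzziness-minimizing threshold search. The membership function below, px(x) = 1 / (1 + |x - m_class| / C) with C the gray-level range, is an assumed (Huang and Wang style) choice, since the slides' exact definition is not in the transcript:

    import numpy as np

    def fuzziness(img, t):
        """Fuzzy entropy E(t) for threshold t: the average of Shannon's function
        over the pixel memberships (membership definition is an assumption)."""
        x = img.ravel().astype(np.float64)
        C = x.max() - x.min()                            # normalization constant
        background, obj = x[x <= t], x[x > t]
        m0 = background.mean() if background.size else 0.0
        m1 = obj.mean() if obj.size else 0.0
        m = np.where(x <= t, m0, m1)                     # mean of the pixel's class
        mu = 1.0 / (1.0 + np.abs(x - m) / C)             # membership, in [0.5, 1]
        mu = np.clip(mu, 1e-12, 1 - 1e-12)               # avoid log2(0)
        S = -mu * np.log2(mu) - (1 - mu) * np.log2(1 - mu)
        return float(S.mean())

    img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)    # stand-in image
    t_best = min(range(1, 255), key=lambda t: fuzziness(img, t))   # minimize E(t)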

  49. Segmentation examples

  50. Maximum Entropy Restoration: the deconvolution problem
