
Image Compression (Chapter 8)


Presentation Transcript


  1. Image Compression (Chapter 8) CS474/674 – Prof. Bebis

  2. Goal of Image Compression • Digital images require huge amounts of space for storage and large bandwidths for transmission. • A 640 x 480 color image (24 bits/pixel) requires close to 1 MB of space. • The goal of image compression is to reduce the amount of data required to represent a digital image. • Reduce storage requirements and increase transmission rates.

  3. Approaches • Lossless • Information preserving • Low compression ratios • Lossy • Not information preserving • High compression ratios • Trade-off: image quality vs compression ratio

  4. Data ≠ Information • Data and information are not synonymous terms! • Data is the means by which information is conveyed. • Data compression aims to reduce the amount of data required to represent a given quantity of information while preserving as much information as possible.

  5. Data vs Information (cont’d) • The same information can be represented by varying amounts of data, e.g.: • Ex1: Your wife, Helen, will meet you at Logan Airport in Boston at 5 minutes past 6:00 pm tomorrow night. • Ex2: Your wife will meet you at Logan Airport at 5 minutes past 6:00 pm tomorrow night. • Ex3: Helen will meet you at Logan at 6:00 pm tomorrow night.

  6. Data Redundancy • Let n1 and n2 denote the number of information-carrying units (e.g., bits) in the original and the compressed data sets, respectively. • Compression ratio: CR = n1 / n2

  7. Data Redundancy (cont’d) • Relative data redundancy: RD = 1 − 1/CR • Example: CR = 10 gives RD = 1 − 1/10 = 0.9, i.e., 90% of the data in the original representation is redundant.
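
As a quick illustration of these two formulas, here is a minimal Python sketch (the byte counts are made-up placeholders):

```python
def compression_stats(n1, n2):
    """Return (compression ratio, relative redundancy) for original size n1 and compressed size n2."""
    cr = n1 / n2
    rd = 1.0 - 1.0 / cr
    return cr, rd

# Example: a 921,600-byte raw image compressed to 92,160 bytes
cr, rd = compression_stats(921_600, 92_160)
print(f"CR = {cr:.1f}:1, RD = {rd:.2f}")   # CR = 10.0:1, RD = 0.90
```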

  8. Types of Data Redundancy (1) Coding (2) Interpixel (3) Psychovisual • Compression attempts to reduce one or more of these redundancy types.

  9. Coding Redundancy Code: a list of symbols (letters, numbers, bits etc.) Code word: a sequence of symbols used to represent a piece of information or an event (e.g., gray levels). Code word length: number of symbols in each code word

  10. Coding Redundancy (cont’d) • Consider an N x M image • rk: k-th gray level • P(rk): probability of rk • l(rk): # of bits for rk • Expected value: Lavg = E[l(rk)] = Σk l(rk) P(rk) bits/pixel, so the total number of bits is N x M x Lavg.
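
A short Python sketch of the Lavg computation; the probabilities and code word lengths below are illustrative, not taken from the slides:

```python
# Average code word length: Lavg = sum_k l(r_k) * P(r_k)
probs   = [0.19, 0.25, 0.21, 0.16, 0.08, 0.06, 0.03, 0.02]  # P(r_k), illustrative
lengths = [2, 2, 2, 3, 4, 5, 6, 6]                          # l(r_k), bits per code word

l_avg = sum(l * p for l, p in zip(lengths, probs))
print(f"Lavg = {l_avg:.2f} bits/pixel")   # 2.70 bits/pixel for these illustrative values
```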

  11. Coding Redundancy (cont’d) • l(rk) = constant length (fixed-length coding): every gray level is assigned the same number of bits. Example:

  12. Coding Redundancy (cont’d) • l(rk) = variable length • Consider the probability of the gray levels and assign shorter code words to the more probable levels.

  13. Interpixel redundancy • Interpixel redundancy implies that any pixel value can be reasonably predicted by its neighbors (i.e., pixels are correlated). • This correlation can be measured by the normalized autocorrelation along an image row: γ(Δn) = A(Δn) / A(0), where A(Δn) = (1/(N−Δn)) Σy f(x, y) f(x, y+Δn).
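
A small NumPy sketch of the normalized row autocorrelation described above, using a synthetic row in place of a real image line:

```python
import numpy as np

def row_autocorrelation(row, max_shift):
    """Normalized autocorrelation gamma(dn) = A(dn) / A(0) along one image row."""
    n = len(row)
    a = np.array([np.mean(row[: n - dn] * row[dn:]) for dn in range(max_shift + 1)])
    return a / a[0]

row = np.sin(np.linspace(0, 8 * np.pi, 256)) + 2.0   # synthetic, highly correlated line
print(row_autocorrelation(row, 5))                   # values near 1 => strong interpixel correlation
```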

  14. Interpixel redundancy (cont’d) • To reduce interpixel redundancy, the data must be transformed into another format (i.e., through a transformation), e.g., thresholding, differences between adjacent pixels, DFT. • Example (profile of line 100): original image vs. thresholded image; runs of identical values in the thresholded image can be stored as (value, run length) pairs at (1+10) bits/pair (see the sketch below).
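
A minimal sketch of the thresholding + run-length idea on a single row (the row values and the 128 threshold are illustrative):

```python
import numpy as np

def run_length_encode(bits):
    """Encode a binary sequence as (value, run length) pairs."""
    pairs, run_val, run_len = [], bits[0], 1
    for b in bits[1:]:
        if b == run_val:
            run_len += 1
        else:
            pairs.append((run_val, run_len))
            run_val, run_len = b, 1
    pairs.append((run_val, run_len))
    return pairs

row = np.array([10, 12, 11, 200, 210, 205, 9, 8, 8, 7])   # illustrative profile of one line
bits = (row > 128).astype(int).tolist()                    # threshold the line
print(run_length_encode(bits))   # [(0, 3), (1, 3), (0, 4)] -> each pair costs 1 + 10 bits
```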

  15. Psychovisual redundancy • The human eye does not respond with equal sensitivity to all visual information. • It is more sensitive to the lower frequencies than to the higher frequencies in the visual spectrum. • Idea: discard data that is perceptually insignificant!

  16. Psychovisual redundancy (cont’d) • Example: quantization from 256 gray levels to 16 gray levels (C = 8/4 = 2:1). • The artifacts of straight quantization can be reduced by adding to each pixel a small pseudo-random number prior to quantization (see the sketch below).
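
A rough sketch of 256-to-16-level quantization with and without the pseudo-random step; the uniform noise range here is a simplification of the idea, not the exact scheme from the slides:

```python
import numpy as np

def quantize(img, levels=16):
    """Uniform quantization of an 8-bit image to `levels` gray levels."""
    step = 256 // levels
    return (img // step) * step + step // 2

def quantize_with_dither(img, levels=16, rng=np.random.default_rng(0)):
    """Add a small pseudo-random number before quantizing to break up banding artifacts."""
    step = 256 // levels
    noisy = img.astype(int) + rng.integers(-step // 2, step // 2 + 1, img.shape)
    return quantize(np.clip(noisy, 0, 255), levels)

img = np.tile(np.arange(256, dtype=np.uint8), (64, 1))   # smooth ramp, worst case for contouring
plain = quantize(img)                  # visible banding (false contours)
dithered = quantize_with_dither(img)   # banding traded for fine-grain noise
```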

  17. How do we measure information? • What is the information content of a message/image? • What is the minimum amount of data that is sufficient to describe completely an image without loss of information?

  18. Modeling Information • Information generation is assumed to be a probabilistic process. • Idea: associate information with probability! • A random event E with probability P(E) contains: I(E) = log(1/P(E)) = −log P(E) units of information. • Note: I(E) = 0 when P(E) = 1

  19. How much information does a pixel contain? • Suppose that gray-level values are generated by a random variable; then rk contains: I(rk) = −log P(rk) units of information!

  20. How much information does an image contain? • Average information content of an image: E[I(rk)] = Σk I(rk) P(rk) • Using I(rk) = −log2 P(rk), this is the entropy: H = −Σk P(rk) log2 P(rk) units/pixel (assumes statistically independent random events)
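
A minimal sketch of the entropy formula for a given gray-level distribution (the probability vectors are illustrative):

```python
import numpy as np

def entropy(probs):
    """H = -sum_k P(r_k) * log2 P(r_k), ignoring zero-probability levels."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

print(entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0 bits/pixel (uniform over 4 levels)
print(entropy([0.97, 0.01, 0.01, 0.01]))   # ~0.24 bits/pixel (highly skewed)
```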

  21. Redundancy (revisited) • Redundancy: R = 1 − 1/C, where C = Lavg / H • Note: if Lavg = H, then R = 0 (no redundancy)

  22. Entropy Estimation • It is not easy to estimate H reliably!

  23. Entropy Estimation (cont’d) • First-order estimate of H: estimate P(rk) from the normalized gray-level histogram and apply the entropy formula.

  24. Estimating Entropy (cont’d) • Second-order estimate of H: use the relative frequencies of pixel blocks (e.g., pairs of adjacent pixels) instead of individual pixels (see the sketch below).
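
A sketch of the first- and second-order estimates; the 4 x 8 array is illustrative (its first-order estimate happens to come out near the H = 1.81 bits/pixel quoted in slide 27), and the second-order estimate here divides the pair entropy by 2 to express it per pixel:

```python
import numpy as np
from collections import Counter

def first_order_entropy(img):
    """Estimate H from the normalized gray-level histogram."""
    _, counts = np.unique(img, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def second_order_entropy(img):
    """Estimate H from horizontally adjacent pixel pairs (pair entropy / 2)."""
    pairs = Counter(zip(img[:, :-1].ravel().tolist(), img[:, 1:].ravel().tolist()))
    counts = np.array(list(pairs.values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p))) / 2.0

img = np.array([[21, 21, 21, 95, 169, 243, 243, 243]] * 4)   # illustrative 4x8 image
print(first_order_entropy(img))    # ~1.81 bits/pixel
print(second_order_entropy(img))   # lower -> interpixel redundancy is present
```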

  25. Estimating Entropy (cont’d) • The first-order estimate provides only a lower-bound on the compression that can be achieved. • Differences between higher-order estimates of entropy and the first-order estimate indicate the presence of interpixel redundancy! Need to apply transformations!

  26. Estimating Entropy (cont’d) • For example, consider the differences between adjacent pixels (difference image).

  27. Estimating Entropy (cont’d) • The entropy of the difference image is lower than that of the original (H = 1.81 bits/pixel for the original image). • However, a better transformation could be found, since the difference image’s entropy still exceeds the higher-order estimates of the original.

  28. Image Compression Model

  29. Image Compression Model (cont’d) • Mapper: transforms input data in a way that facilitates reduction of interpixel redundancies.

  30. Image Compression Model (cont’d) • Quantizer: reduces the accuracy of the mapper’s output in accordance with some pre-established fidelity criteria.

  31. Image Compression Model (cont’d) • Symbol encoder: assigns the shortest code words to the most frequently occurring output values.

  32. Image Compression Model (cont’d) • Inverse operations are performed. • But … quantization is irreversible in general.

  33. Fidelity Criteria • How close is the reconstructed image to the original? • Criteria: • Subjective: based on human observers • Objective: mathematically defined criteria

  34. Subjective Fidelity Criteria

  35. Objective Fidelity Criteria • Root-mean-square error (RMSE): erms = sqrt( (1/MN) ΣxΣy [f^(x,y) − f(x,y)]^2 ) • Mean-square signal-to-noise ratio (SNR): SNRms = ΣxΣy f^(x,y)^2 / ΣxΣy [f^(x,y) − f(x,y)]^2, where f^ denotes the decompressed image.
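
A small NumPy sketch of both criteria, where f is the original image and fhat the reconstruction (the arrays are placeholders):

```python
import numpy as np

def rmse(f, fhat):
    """Root-mean-square error between original f and reconstruction fhat."""
    return float(np.sqrt(np.mean((fhat.astype(float) - f.astype(float)) ** 2)))

def ms_snr(f, fhat):
    """Mean-square signal-to-noise ratio of the reconstruction."""
    err = fhat.astype(float) - f.astype(float)
    return float(np.sum(fhat.astype(float) ** 2) / np.sum(err ** 2))

f = np.full((8, 8), 100, dtype=np.uint8)                                       # placeholder "original"
fhat = f + np.random.default_rng(0).integers(-5, 6, f.shape).astype(np.int16)  # noisy "reconstruction"
print(rmse(f, fhat), ms_snr(f, fhat))
```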

  36. Objective Fidelity Criteria (cont’d) • Example reconstructions with RMSE = 5.17, RMSE = 15.67, and RMSE = 14.17, respectively.

  37. Lossless Compression

  38. Lossless Methods: Taxonomy

  39. Huffman Coding (coding redundancy) • A variable-length coding technique. • Optimal code (i.e., minimizes the number of code symbols per source symbol). • Assumption: symbols are encoded one at a time!

  40. Huffman Coding (cont’d) • Forward Pass • 1. Sort probabilities per symbol • 2. Combine the lowest two probabilities • 3. Repeat Step 2 until only two probabilities remain.

  41. Huffman Coding (cont’d) • Backward Pass: assign code symbols going backwards (see the sketch below).
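
A compact Python sketch of both passes using a heap; the symbol probabilities are illustrative, and a production coder would also handle ties and single-symbol alphabets explicitly:

```python
import heapq

def huffman_code(probs):
    """Build a Huffman code {symbol: bitstring} from {symbol: probability}."""
    # Forward pass: repeatedly combine the two least probable nodes.
    heap = [(p, i, [s]) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    codes = {s: "" for s in probs}
    next_id = len(heap)
    while len(heap) > 1:
        p1, _, syms1 = heapq.heappop(heap)
        p2, _, syms2 = heapq.heappop(heap)
        # Backward-pass effect: prepend one bit to every symbol in each merged group.
        for s in syms1:
            codes[s] = "0" + codes[s]
        for s in syms2:
            codes[s] = "1" + codes[s]
        heapq.heappush(heap, (p1 + p2, next_id, syms1 + syms2))
        next_id += 1
    return codes

probs = {"a1": 0.4, "a2": 0.3, "a3": 0.1, "a4": 0.1, "a5": 0.06, "a6": 0.04}
codes = huffman_code(probs)
lavg = sum(probs[s] * len(codes[s]) for s in probs)
print(codes, f"Lavg = {lavg:.2f} bits/symbol")
```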

  42. Huffman Coding (cont’d) • Lavg using Huffman coding: • Lavg assuming binary codes:

  43. Huffman Coding/Decoding • After the code has been created, coding/decoding can be implemented using a look-up table. • Note that decoding is done unambiguously.

  44. Arithmetic (or Range) Coding (coding redundancy) • Unlike Huffman coding, it does not encode source symbols one at a time. • Sequences of source symbols are encoded together. • There is no one-to-one correspondence between source symbols and code words. • Slower than Huffman coding but typically achieves better compression.

  45. Arithmetic Coding (cont’d) • A sequence of source symbols is assigned a single arithmetic code word which corresponds to a sub-interval in [0,1]. • As the number of symbols in the message increases, the interval used to represent it becomes smaller. • Smaller intervals require more information units (i.e., bits) to be represented.

  46. Arithmetic Coding (cont’d) • Encode message: a1 a2 a3 a3 a4 • 1) Assume the message occupies the interval [0, 1) • 2) Subdivide [0, 1) based on the probabilities of the ai • 3) Update the interval as each source symbol is processed.

  47. Example • Encode a1 a2 a3 a3 a4 with source probabilities P(a1) = P(a2) = P(a4) = 0.2 and P(a3) = 0.4: the final interval is [0.06752, 0.0688), so any value in it, e.g., 0.068, encodes the message (see the sketch below).
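
A minimal floating-point encoder sketch that reproduces this example (the probabilities come from the entropy calculation on the next slide; practical coders use finite-precision integer arithmetic instead):

```python
def arithmetic_encode(message, probs):
    """Shrink [0, 1) one symbol at a time; the final interval encodes the whole message."""
    # Cumulative interval start cum[s] for each symbol within [0, 1).
    cum, c = {}, 0.0
    for s, p in probs.items():
        cum[s] = c
        c += p
    low, width = 0.0, 1.0
    for s in message:
        low = low + cum[s] * width
        width = width * probs[s]
    return low, low + width

probs = {"a1": 0.2, "a2": 0.2, "a3": 0.4, "a4": 0.2}
low, high = arithmetic_encode(["a1", "a2", "a3", "a3", "a4"], probs)
print(low, high)   # ~0.06752, ~0.0688 -> transmit any value in this interval, e.g. 0.068
```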

  48. Example (cont’d) • The message a1 a2 a3 a3 a4 is encoded using 3 decimal digits, or 3/5 = 0.6 decimal digits per source symbol. • The entropy of this source is: −(3 × 0.2 log10(0.2) + 0.4 log10(0.4)) = 0.5786 digits/symbol • Note: finite-precision arithmetic might cause problems due to truncation!

  49. Arithmetic Decoding • Decode 0.572: at each step, find the symbol whose sub-interval contains the value, output that symbol, and rescale the value to that sub-interval. • Decoded message: a3 a3 a1 a2 a4 (see the sketch below).
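
A matching decoder sketch under the same assumptions (floating-point arithmetic and a known message length):

```python
def arithmetic_decode(value, probs, n_symbols):
    """Recover n_symbols symbols from a value in the final encoding interval."""
    # Build the same cumulative intervals used by the encoder.
    intervals, c = {}, 0.0
    for s, p in probs.items():
        intervals[s] = (c, c + p)
        c += p
    out = []
    for _ in range(n_symbols):
        for s, (lo, hi) in intervals.items():
            if lo <= value < hi:
                out.append(s)
                value = (value - lo) / (hi - lo)   # rescale to [0, 1)
                break
    return out

probs = {"a1": 0.2, "a2": 0.2, "a3": 0.4, "a4": 0.2}
print(arithmetic_decode(0.572, probs, 5))   # ['a3', 'a3', 'a1', 'a2', 'a4']
```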

  50. LZW Coding (interpixel redundancy) • Requires no a priori knowledge of the pixel probability distribution. • Assigns fixed-length code words to variable-length sequences of source symbols. • Patented algorithm (US Patent 4,558,302). • Included in the GIF, TIFF, and PDF file formats.
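
A bare-bones LZW encoder sketch over 8-bit pixel values; the sample sequence is illustrative, and real GIF/TIFF implementations add details such as variable code width and clear codes:

```python
def lzw_encode(pixels):
    """LZW: emit fixed-length codes for the longest sequences already in the dictionary."""
    dictionary = {(i,): i for i in range(256)}    # start with all single pixel values
    next_code = 256
    current, codes = (), []
    for p in pixels:
        candidate = current + (p,)
        if candidate in dictionary:
            current = candidate                   # keep growing the recognized sequence
        else:
            codes.append(dictionary[current])     # output code for the recognized sequence
            dictionary[candidate] = next_code     # add the new sequence to the dictionary
            next_code += 1
            current = (p,)
    if current:
        codes.append(dictionary[current])
    return codes

pixels = [39, 39, 126, 126, 39, 39, 126, 126, 39, 39, 126, 126]
print(lzw_encode(pixels))   # repeated patterns are replaced by new dictionary codes >= 256
```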
