
Image processing Image and image sequence compression bases


Presentation Transcript


  1. Image processing Image and image sequence compression bases Session 2005-2006 Ecole Centrale Marseille

  2. Goal • Give in 2 hours some techniques of compression • Introduce mathematical tools based on information theory

  3. Plan I) Introduction II) Tools III) Lossless compression (redundancy) IV) Lossy compression : scalar quantization, vector quantization V) Norms

  4. Digital images and sequences An image is L x C pixels; a sequence is a series of such images. Different formats or standards : RGB, YUV (NTSC, PAL, SECAM), 8 bits / pixel

  5. Formats RGB : red, green, blue YUV : Y, the black-and-white (luminance) component; U, V, the color information YCbCr : scaled and offset versions of YUV

  6. Compression : why ? Problem : the amount of data, for transmission and storage. Decrease the bitrate with compression techniques : elimination of redundancy, quantization, ...

  7. Amount of data Example of format : PAL ITU-R 601 Resolution : 720 columns and 576 rows, YCbCr 4:2:2, interlaced (two fields per image) Y = 720 x 576 = 414720 CbCr = 2 x 360 x 576 = 414720 Total = 829440 bytes / image At 25 images / sec (50 fields / sec) : Total = 20.7 MB / sec
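
The slide's arithmetic can be reproduced with a short Python sketch (the variable names are mine, not from the course):

```python
# Recompute the PAL ITU-R 601 data rate from the slide.
columns, rows = 720, 576
bytes_y = columns * rows                # luminance: one byte per pixel
bytes_cbcr = 2 * (columns // 2) * rows  # 4:2:2: Cb and Cr subsampled 2:1 horizontally
bytes_per_image = bytes_y + bytes_cbcr
rate = bytes_per_image * 25             # 25 images per second

print(bytes_per_image)  # 829440
print(rate / 1e6)       # 20.736 -> about 20.7 MB/s
```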

  8. Evolution Increasing quality, increasing image sizes, more and more images ... vision applications produce more and more data

  9. Compression Data compression addresses the problem of reducing the amount of data required to represent a given quantity of information. Example : a disk in an N x M pixel image can be represented by its coordinates (x0, y0), its radius and its gray levels (5 values) instead of N x M pixel values.

  10. Plan I) Introduction II) Tools

  11. Transmission of information Mathematical theory of communication (C.E. Shannon and W. Weaver) : a source of information sends a message through a transmitter, which produces a signal; the signal, corrupted by a source of noise, reaches a receiver that delivers the message to its destination.

  12. Compression system model f → source encoder → channel encoder → channel (noise) → channel decoder → source decoder → f̂. Example : the source encoder is built from a mapper (transform : DCT, FFT, WT), a quantizer (SQ, VQ) and a symbol encoder; the source decoder applies the symbol decoder and the inverse mapper.

  13. f(x,y) is fed into the encoder, which creates a set of symbols from the input data. The output f̂ may or may not be an exact replica of f : f̂ = f → error free (lossless) compression; f̂ ≠ f → lossy compression.

  14. Measures Quality : error between the original signal and the received signal Bit rate : quantity of bits Feasibility : complexity of the system (software or hardware)

  15. Fidelity criteria

  16. Fidelity criteria Peak signal-to-noise ratio : PSNR = 10 log10 ( 255² / MSE ), where 255 is the maximum value of the signal
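
A minimal PSNR sketch over flat pixel lists (the function name and the sample values are mine, for illustration only):

```python
import math

def psnr(original, received, peak=255.0):
    # Mean squared error between the two signals
    mse = sum((a - b) ** 2 for a, b in zip(original, received)) / len(original)
    if mse == 0:
        return float("inf")  # identical signals
    return 10.0 * math.log10(peak ** 2 / mse)

ref = [100, 120, 140, 160]
deg = [101, 119, 142, 157]
print(round(psnr(ref, deg), 2))  # 42.39
```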

  17. Fidelity criteria Subjective rating scale : 1 excellent (extremely high quality), 2 fine (high quality), 3 passable, 4 marginal, 5 inferior (very poor image), 6 unusable (can't watch it)

  18. Measuring information A random event E that occurs with probability P(E) is said to contain I(E) = -log2 P(E) units of information (self-information). P(E) = 1 : the event always occurs. P(E) = 0 : the event never occurs. P(E) = 0.999 ==> small information. With log2, the unit is the bit.

  19. Measuring information The average information per source output is the ENTROPY H = -Σ P(ai) log2 P(ai) (Shannon). H grows with uncertainty : more uncertainty means more information.
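
The entropy formula can be sketched directly (the probabilities are those of the Huffman example on slide 29; the function name is mine):

```python
import math

def entropy(probs):
    # Average information per source symbol, in bits/symbol
    return -sum(p * math.log2(p) for p in probs if p > 0)

probs = [0.4, 0.3, 0.1, 0.1, 0.06, 0.04]
print(round(entropy(probs), 3))  # 2.144
```

Note that this entropy (about 2.14 bits/symbol) is slightly below the 2.2 bits/symbol achieved by the Huffman code of slide 29, as it must be.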

  20. Definitions N1 bits before compression, N2 bits after compression. Compression ratio : C = N1 / N2. Relative data redundancy : R = 1 - 1/C.
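
A quick numeric sketch of these two definitions (the function names are mine, not from the course):

```python
def compression_ratio(n1_bits, n2_bits):
    # C = N1 / N2
    return n1_bits / n2_bits

def relative_redundancy(c):
    # R = 1 - 1/C
    return 1.0 - 1.0 / c

# 8 bits/pixel coded at an average of 2.2 bits/pixel:
c = compression_ratio(8, 2.2)
print(c, relative_redundancy(c))  # C ≈ 3.64, R = 0.725
```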

  21. Techniques Image compression / sequence compression. Lossless compression : quality is preserved, low compression. Lossy compression : quality decreases, high compression.

  22.-23. [Figures : the cameraman test image compressed with JPEG at different quality factors (Cam10.jpeg, Cam50.jpg, Cam99.jpg)]

  24. Plan I) Introduction II) Tools III) Lossless compression (redundancy)

  25. Error free compression Redundancy reduction : - coding redundancy - interpixel redundancy (DPCM, ...) - psychovisual redundancy

  26. Code length Code 1 (fixed length) : L = 3 bits / symbol. Code 2 (variable length) : shorter average length.

  27. Huffman coding Assigns fewer bits to the more probable gray levels than to the less probable ones ===> data compression. Most popular variable-length code. The code is optimal and uniquely decodable.

  28. Huffman coding Two steps : 1/ create a series of source reductions by ordering the probabilities of the symbols under consideration and combining the two lowest-probability symbols into a single symbol that replaces them in the next source reduction; 2/ code each reduced source, starting with the smallest source and working back to the original source (the 0/1 assignment at each step is arbitrary).

  29. Example Lavg = 0.4 x 1 + 0.3 x 2 + 0.1 x 3 + 0.1 x 4 + 0.06 x 5 + 0.04 x 5 = 2.2 bits/symbol Ex : 010100111100 decodes as a3 a1 a2 a2 a6
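
The two-step procedure of slide 28 can be sketched with a heap (a compact implementation of my own, not the course's code; any valid Huffman code for this source has the same average length of 2.2 bits/symbol):

```python
import heapq

def huffman_code(probs):
    # probs: dict symbol -> probability. Each heap item carries the partial
    # codes of the symbols merged under it; the counter breaks probability ties.
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # two lowest-probability nodes
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, count, merged))
        count += 1
    return heap[0][2]

probs = {"a1": 0.1, "a2": 0.4, "a3": 0.06, "a4": 0.1, "a5": 0.04, "a6": 0.3}
code = huffman_code(probs)
L_avg = sum(probs[s] * len(code[s]) for s in probs)
print(round(L_avg, 2))  # 2.2
```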

  30. Arithmetic coding One code word (a number) for a whole sequence of symbols. Precision problem.

  31. Run length coding Represent each row of an image or bit plane by sequences of lengths that describe successive runs of black and white pixels (fax, standard compression, ...). Basic concept : code each group of 0's (or 1's) encountered in a left-to-right scan of a row by its length. (pattern recognition)

  32. RLC : example Original data (13 bytes) : 5 5 10 10 10 6 6 6 6 6 6 18 9 Compressed data (10 bytes) : (2,5) (3,10) (6,6) (1,18) (1,9) → 10/13 = 0.77
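
The slide's example can be sketched as a (count, value) run-length coder (function names are mine):

```python
def rle_encode(data):
    # Collapse each run of identical values into a (count, value) pair
    runs = []
    for v in data:
        if runs and runs[-1][1] == v:
            runs[-1][0] += 1
        else:
            runs.append([1, v])
    return [(c, v) for c, v in runs]

def rle_decode(runs):
    # Expand each (count, value) pair back into a run
    return [v for c, v in runs for _ in range(c)]

data = [5, 5, 10, 10, 10, 6, 6, 6, 6, 6, 6, 18, 9]
runs = rle_encode(data)
print(runs)  # [(2, 5), (3, 10), (6, 6), (1, 18), (1, 9)]
```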

  33. Bit plane coding Decompose a multilevel image into a series of binary images (an 8-bit image gives 8 binary images) and compress each binary image via one of several well-known binary compression methods. Ex : code each group of 0's (or 1's) encountered in a left-to-right scan of a row by its length.
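
The decomposition itself is a few bit operations; a sketch on a flat pixel list (my own illustrative functions):

```python
def bit_planes(pixels, bits=8):
    # Split each pixel into its binary planes, most significant plane first
    return [[(p >> k) & 1 for p in pixels] for k in range(bits - 1, -1, -1)]

def from_planes(planes):
    # Reassemble pixels from MSB-first bit planes
    pixels = [0] * len(planes[0])
    for plane in planes:
        pixels = [(p << 1) | b for p, b in zip(pixels, plane)]
    return pixels

row = [0, 255, 200, 13]
planes = bit_planes(row)
print(planes[0])  # MSB plane: [0, 1, 1, 0]
```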

  34. Lempel-Ziv and Welch coding Pattern recognition : if a sequence of non-identical data values is repeated more than once in the data stream, then this repetition can be exploited to compress the data. Original data : 5 10 7 10 6 5 10 7 10 6 10 Compressed data : Ca Cb Ca Cb 10 (Ca, Cb : dictionary codes for the repeated patterns)
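
A minimal LZW encoder sketch (my own code, not the course's): on the slide's data, the second occurrence of the pattern 5 10 7 10 6 is covered by dictionary entries built during the first occurrence, so 11 symbols compress to 9 codes.

```python
def lzw_encode(data):
    # Dictionary starts with all single-symbol strings
    dictionary = {(s,): i for i, s in enumerate(sorted(set(data)))}
    next_code = len(dictionary)
    out, w = [], ()
    for s in data:
        wc = w + (s,)
        if wc in dictionary:
            w = wc                       # extend the current match
        else:
            out.append(dictionary[w])    # emit code of longest match
            dictionary[wc] = next_code   # learn the new pattern
            next_code += 1
            w = (s,)
    if w:
        out.append(dictionary[w])
    return out

data = [5, 10, 7, 10, 6, 5, 10, 7, 10, 6, 10]
out = lzw_encode(data)
print(out)  # [0, 3, 2, 3, 1, 4, 6, 1, 3] — 9 codes for 11 symbols
```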

  35. Predictive coding Simple predictive methods (about 2:1) : predict pixel x from its causal neighbours A (left), B (above) and C (above-left). Predictors : 0 no prediction, 1 A, 2 B, 3 C, 4 A+B-C, 5 A+(B-C)/2, 6 B+(A-C)/2, 7 (A+B)/2 (these are the predictors of lossless JPEG).
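
A sketch of predictor 4 (A + B - C) producing small residuals on a smooth image (function names and the sample image are mine; out-of-image neighbours are taken as 0 for simplicity):

```python
def predict(image, i, j):
    # A = left, B = above, C = above-left neighbour of pixel (i, j)
    a = image[i][j - 1] if j > 0 else 0
    b = image[i - 1][j] if i > 0 else 0
    c = image[i - 1][j - 1] if i > 0 and j > 0 else 0
    return a + b - c

def residuals(image):
    # Transmit prediction errors instead of pixel values
    return [[image[i][j] - predict(image, i, j)
             for j in range(len(image[0]))] for i in range(len(image))]

img = [[10, 10, 12],
       [10, 11, 13],
       [11, 12, 14]]
print(residuals(img))  # [[10, 0, 2], [0, 1, 0], [1, 0, 0]]
```

Most residuals are near zero, so they cost far fewer bits to entropy-code than the raw pixels.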

  36. Plan I) Introduction II) Tools III) Lossless compression (redundancy) IV) Lossy compression : transform, quantization (scalar quantization, vector quantization)

  37. Compression system image → transform (coefficients) → quantization (indices) → coding (binary flow). Note : beware of the word "coding" in English versus « codage » in French.

  38. Statistics

  39. Transform coding Encoder : source image → 8x8 blocks → transform T (DCT, FFT, WT) → quantizer (SQ, VQ, with table specifications) → entropy encoder (with table specifications) → compressed image. Decoder : entropy decoder → dequantizer → T⁻¹ → reconstructed image. The transform reorders the data but keeps the same information.

  40. Transform coding A transform is used to map the image into a set of transform coefficients which are then quantized and coded. For natural images, a significant number of coefficients have small magnitudes and can be coarsely quantized (or set to zero) without significant distortion, if the information of the image is well organised. Goal : decorrelate the pixels and pack as much information as possible into the smallest number of coefficients. Compression is not achieved during the transform itself !

  41. JPEG

  42. Discrete Cosine Transform Source image → 8x8 blocks → FDCT; reconstruction by IDCT
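
A direct (unoptimized) sketch of the orthonormal 2-D DCT-II on an 8x8 block, showing the energy compaction mentioned above: a constant block ends up with all its energy in the DC coefficient (my own code, not the JPEG reference implementation):

```python
import math

def dct2_8x8(block):
    # Orthonormal 2-D DCT-II of an 8x8 block
    N = 8
    def c(k):
        return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = c(u) * c(v) * s
    return out

flat = [[100] * 8 for _ in range(8)]
coeffs = dct2_8x8(flat)
# All the energy lands in the DC coefficient: coeffs[0][0] = 800,
# every AC coefficient is (numerically) zero.
```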

  43. Quantizers Scalar quantizer : one pixel at a time Vector quantizer : group of pixels

  44. Scalar quantization Consider the input x. As a matter of convention, x is mapped to yi if x lies in ]si, si+1]. yi belongs to a finite set of L values. Example : x = 12.46 → y = 12. Quantization degrades the signal.

  45. Definition Uniform scalar quantizer : staircase characteristic Q(x) mapping input x to output y. Decision levels x1 < x2 < ... < xN-1 split the input range; all inputs falling in one decision interval are mapped to that interval's reconstruction level (y1 ... yN). Quantizer design problem : select the best decision levels xi and reconstruction levels yj for a particular optimization criterion and pdf of the input.
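
A uniform quantizer sketch with equal cells and midpoint reconstruction levels (my own illustrative function; the range and level count are arbitrary):

```python
def make_uniform_quantizer(lo, hi, levels):
    # Decision levels split [lo, hi] into `levels` equal cells;
    # reconstruction levels sit at the cell midpoints.
    step = (hi - lo) / levels
    def q(x):
        i = min(max(int((x - lo) / step), 0), levels - 1)  # cell index, clamped
        return lo + (i + 0.5) * step
    return q

q = make_uniform_quantizer(0.0, 256.0, 8)  # 8 levels, step = 32
print(q(12.46))  # 16.0  (midpoint of [0, 32])
print(q(300.0))  # 240.0 (out-of-range input clamped to the last cell)
```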

  46. Uniform optimal quantizer (1D) Lloyd-Max quantizer, optimum in the mean-square sense. Criterion : minimise the rms error for a fixed number of quantization levels. Reconstruction levels are the centroids of the areas under p(x) over the specified decision intervals; decision levels are halfway between reconstruction levels.
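
The two Lloyd-Max conditions (centroids and midpoints) can be iterated on empirical samples, which is equivalent to 1-D k-means; a sketch of my own, assuming the pdf is represented by samples:

```python
def lloyd_max(samples, L, iters=100):
    # Alternate the two optimality conditions:
    #   reconstruction levels y = centroids of each cell,
    #   decision levels x = midpoints between neighbouring y's.
    samples = sorted(samples)
    lo, hi = samples[0], samples[-1]
    y = [lo + (hi - lo) * (2 * i + 1) / (2 * L) for i in range(L)]  # initial guess
    for _ in range(iters):
        x = [(y[i] + y[i + 1]) / 2 for i in range(L - 1)]
        cells = [[] for _ in range(L)]
        for s in samples:
            cells[sum(s > t for t in x)].append(s)
        y = [sum(c) / len(c) if c else y[i] for i, c in enumerate(cells)]
    x = [(y[i] + y[i + 1]) / 2 for i in range(L - 1)]
    return x, y

samples = [i / 99 for i in range(100)]  # roughly uniform on [0, 1]
x, y = lloyd_max(samples, L=2)
print(x, y)  # decision level near 0.5, reconstruction levels near 0.25 and 0.75
```

For a uniform input the optimal quantizer is uniform, as the result shows; for a non-uniform pdf the cells would come out unequal.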

  47. Characteristic The quantizer error is the difference between the input and the reproduction level : granular error inside the input range [x1, x8], truncation (overload) error if the input is out of range.

  48. Quantization noise Let Δ be the quantization step and e = xi - yi the quantization error. Hypothesis : e small and the input equally distributed within each interval ==> p(e) is uniform in [-Δ/2, Δ/2], p(e) = 1/Δ. Noise power : E[e²] = ∫ e² p(e) de = Δ²/12.
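
The Δ²/12 formula can be checked numerically with a midtread uniform quantizer on uniformly distributed inputs (a Monte Carlo sketch of my own):

```python
import random

random.seed(0)
step = 0.5
xs = [random.uniform(0.0, 100.0) for _ in range(200_000)]
# Midtread quantizer: round to the nearest multiple of `step`
errors = [x - step * round(x / step) for x in xs]
noise_power = sum(e * e for e in errors) / len(errors)

print(noise_power)     # ≈ 0.0208 (empirical)
print(step ** 2 / 12)  # 0.02083... (theory: Δ²/12)
```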

  49. Signal to noise ratio Example : sine wave with amplitude Vm. Power of the signal : Vm²/2. Quantization noise power : Δ²/12. With n bits covering the range [-Vm, Vm], Δ = 2Vm/2ⁿ, which gives the standard result SNR = 6.02 n + 1.76 dB.
