
The LOCO-I Lossless Image Compression Algorithm: Principles and Standardization into JPEG-LS






Presentation Transcript


  1. The LOCO-I Lossless Image Compression Algorithm: Principles and Standardization into JPEG-LS • Authors: M. J. Weinberger, G. Seroussi, G. Sapiro • Source: IEEE Transactions on Image Processing, Vol. 9, No. 8, August 2000, pp. 1309-1324 • Speaker: Chia-Chun Wu (吳佳駿) • Date: 2004/10/20 • NCHU

  2. Outline • 1. Introduction • 2. Example • 3. Modeler • 4. Regular mode • 5. Run mode • 6. Coded data • 7. Results • 8. Conclusion • 9. Comments

  3. 1. Introduction (1/2) • LOCO-I (LOw COmplexity LOssless COmpression for Images) is the algorithm at the core of JPEG-LS, the standard for lossless and near-lossless compression of continuous-tone images.

  4. 1. Introduction (2/2) • Fig. 1: JPEG-LS block diagram (the slide also shows an example coded bit stream, 1100 0000…).

  5. 2. Example • Fig. 2: 4 x 4 example image data (the slide's note "補 0" means "pad with 0", for neighbor samples outside the image).

  6. 3. Modeler • 3.1 Compute local gradients • 3.2 Local gradient quantization • 3.3 Quantized gradient merging • 3.4 Select the mode

  7. 3.1 Compute local gradients • Ix is the value of the current sample in the image. • Three gradients: g1 = Rd − Rb, g2 = Rb − Rc, g3 = Rc − Ra. • Example 1: (g1,g2,g3) = (0, 81, −36). Example 2: (g1,g2,g3) = (0, 0, 0).
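A minimal C sketch of this step. The neighborhood layout is the standard JPEG-LS causal template; the neighbor values below are not on the slides but are inferred from Example 1, whose gradients they reproduce:

    /* Causal neighborhood of the current sample Ix:
     *    Rc  Rb  Rd     (previous row)
     *    Ra  Ix         (current row)
     * Values inferred from Example 1 (an assumption). */
    int Ra = 100, Rb = 145, Rc = 64, Rd = 145;
    int g1 = Rd - Rb;   /* 145 - 145 =   0 */
    int g2 = Rb - Rc;   /* 145 -  64 =  81 */
    int g3 = Rc - Ra;   /*  64 - 100 = -36 */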

  8. 3.2 Local gradient quantization • Q1, Q2, Q3 are the region numbers of the quantized local gradients: (g1,g2,g3) ⇒ (Q1,Q2,Q3). • Example 1: (g1,g2,g3) = (0, 81, −36) ⇒ (Q1,Q2,Q3) = (0, 4, −4). Example 2: (g1,g2,g3) = (0, 0, 0) ⇒ (Q1,Q2,Q3) = (0, 0, 0).
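A sketch of the quantizer in C. The slides do not list the region boundaries; the thresholds below (T1 = 3, T2 = 7, T3 = 21) are the JPEG-LS defaults for 8-bit samples, and they reproduce the slide's examples (81 ⇒ 4, −36 ⇒ −4):

    /* Quantize one local gradient into the 9 regions -4..4.
     * T1 = 3, T2 = 7, T3 = 21 are the JPEG-LS default
     * thresholds for 8-bit samples (not shown on the slides). */
    int quantize_gradient(int g)
    {
        int sign = (g < 0) ? -1 : 1;
        int mag  = (g < 0) ? -g : g;
        int q;
        if      (mag == 0) q = 0;
        else if (mag < 3)  q = 1;
        else if (mag < 7)  q = 2;
        else if (mag < 21) q = 3;
        else               q = 4;
        return sign * q;
    }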

  9. 3.3 Quantized gradient merging • If the first non-zero element of the vector (Q1,Q2,Q3) is negative, then (Q1,Q2,Q3) ⇒ (−Q1,−Q2,−Q3) and SIGN is set to −1; otherwise SIGN is set to +1 (a combined sketch of steps 3.3 and 3.4 follows the next slide). • Example 1: (Q1,Q2,Q3) = (0, 4, −4) ⇒ SIGN = +1. Example 2: (Q1,Q2,Q3) = (0, 0, 0) ⇒ SIGN = +1.

  10. 3.4 Select the mode • If the quantized local gradients are not all zero, choose the regular mode. • If Q1 = Q2 = Q3 = 0, go to the run mode. • Example 1: (Q1,Q2,Q3) = (0, 4, −4) ⇒ regular mode. Example 2: (Q1,Q2,Q3) = (0, 0, 0) ⇒ run mode.
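Steps 3.3 and 3.4 combined in a few lines of C (a sketch, with Q[0..2] holding Q1, Q2, Q3):

    int Q[3] = { 0, 4, -4 };                 /* Example 1 */
    int SIGN = 1;
    int i = 0;
    while (i < 3 && Q[i] == 0) i++;          /* first non-zero element */
    if (i < 3 && Q[i] < 0) {                 /* 3.3: mirror the context */
        SIGN = -1;
        Q[0] = -Q[0]; Q[1] = -Q[1]; Q[2] = -Q[2];
    }
    /* 3.4: all-zero quantized gradients select the run mode */
    int run_mode = (Q[0] == 0 && Q[1] == 0 && Q[2] == 0);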

  11. 4. Regular mode • 4.1 Compute the fixed prediction • 4.2 Adaptive correction • 4.3 Compute the prediction error • 4.4 Modulo reduction of the error • 4.5 Error mapping • 4.6 Compute the Golomb coding parameter k • 4.7 Golomb Code • 4.8 Mapped-error encoding • 4.9 Update the variables

  12. 4.1 Compute the fixed prediction • Ra, Rb, Rc are used to predict Ix. • Px is the predicted value for the sample Ix:
    Px = min(Ra,Rb),     if Rc ≥ max(Ra,Rb);
    Px = max(Ra,Rb),     if Rc ≤ min(Ra,Rb);
    Px = Ra + Rb − Rc,   otherwise.
• Example 1: Rc = 64 ≤ min(100,145) = 100 ⇒ Px = max(100,145) = 145.
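This predictor is LOCO-I's median edge detector (MED). A minimal C rendering:

    /* MED predictor: picks min or max at an edge,
     * planar extrapolation in smooth regions. */
    int med_predict(int Ra, int Rb, int Rc)
    {
        int mn = (Ra < Rb) ? Ra : Rb;
        int mx = (Ra < Rb) ? Rb : Ra;
        if (Rc >= mx) return mn;
        if (Rc <= mn) return mx;
        return Ra + Rb - Rc;
    }
    /* Example 1: med_predict(100, 145, 64) == 145 */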

  13. 4.2 Adaptive correction • C = ⌊B/N⌋ is the prediction correction value. • The corrected prediction:
    Px = Px + C,   if SIGN = +1;
    Px = Px − C,   if SIGN = −1.
• Example 1: B = −1, N = 2 ⇒ C = ⌊−1/2⌋ = −1. Px = 145, SIGN = +1 ⇒ Px = Px + C = 145 + (−1) = 144. (See slide 23 for A, B, N.)
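A sketch of the slide's division form in C. Note that C's "/" truncates toward zero, so the floor needs a fix-up for negative B; note also that the JPEG-LS standard maintains the correction value incrementally rather than by an explicit division:

    /* corr = floor(B / N), with floor division done by hand */
    int corr = (B >= 0) ? (B / N) : -((-B + N - 1) / N);
    Px = (SIGN == 1) ? (Px + corr) : (Px - corr);
    /* Example 1: B = -1, N = 2 => corr = -1, Px = 145 - 1 = 144 */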

  14. 4.3 Compute the prediction error • Errval is the prediction error (a combined sketch of steps 4.3 and 4.4 follows the next slide): 1. Errval = Ix − Px. 2. Errval = −Errval, if SIGN = −1. • Example 1: Ix = 145, Px = 144, SIGN = +1 ⇒ Errval = Ix − Px = 145 − 144 = 1.

  15. 4.4 Modulo reduction of the error • The prediction error is reduced to the range relevant for coding, −128 to +127: 1. Errval = Errval + 256, if Errval < 0. 2. Errval = Errval − 256, if Errval ≥ 128. • Example 1: Errval = 1 (1 ≥ 0 and 1 < 128) ⇒ Errval = 1.
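Steps 4.3 and 4.4 together in C (8-bit samples assumed, hence the constant 256):

    /* 4.3: sign-corrected prediction error;
     * 4.4: modulo reduction into -128..127. */
    int Errval = Ix - Px;
    if (SIGN == -1)    Errval = -Errval;
    if (Errval < 0)    Errval += 256;
    if (Errval >= 128) Errval -= 256;
    /* Example 1: Errval = 1 stays 1 */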

  16. 4.5 Error mapping (1/2) • The prediction error Errval is mapped to a non-negative value. • MErrval is Errval mapped to the non-negative integers in regular mode:
    MErrval = 2·Errval,        if Errval ≥ 0;
    MErrval = −2·Errval − 1,   if Errval < 0.
• Example 1: Errval = 1 (1 ≥ 0) ⇒ MErrval = 2 · 1 = 2.
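The fold is a single conditional in C:

    /* 4.5: interleave negative errors between non-negative ones:
     * 0, -1, 1, -2, 2, ...  ->  0, 1, 2, 3, 4, ... */
    int MErrval = (Errval >= 0) ? 2 * Errval : -2 * Errval - 1;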

  17. 4.5 Error mapping (2/2)
    Prediction error:  0  -1   1  -2   2  -3   3  ...  127  -128
    Mapped value:      0   1   2   3   4   5   6  ...  254   255

  18. 4.6 Compute the Golomb coding parameter k • k is the Golomb coding parameter for the regular mode: k = min{ k′ | 2^k′ · N ≥ A }. • Example 1: N = 2, A = 64 ⇒ 2^k′ · 2 ≥ 64 ⇒ 2^k′ ≥ 32 ⇒ k = 5.
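The minimum is found by doubling until 2^k · N reaches A; this one-line loop is the usual form in JPEG-LS implementations:

    /* 4.6: smallest k with (N << k) >= A */
    int k;
    for (k = 0; (N << k) < A; k++)
        ;
    /* Example 1: N = 2, A = 64  =>  k = 5 */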

  19. 4.7 Golomb Code (1/3) • Formula: MErrval = q · m + r. • Parameter: m (m = 2^k). • Two parts: a unary code for q (q zeros followed by a terminating 1) and a modified binary code for r (k bits). • Example: MErrval = 13, k = 2 ⇒ m = 2^2 = 4 ⇒ 13 = 3 · 4 + 1 ⇒ q = 3, r = 1 ⇒ unary part: 000 plus the terminating 1; binary part: 01 ⇒ codeword 000101.
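A sketch of the encoder in C (put_bit() is a hypothetical single-bit stream writer, not from the slides):

    /* Golomb code with parameter k (m = 2^k): emit q zeros,
     * a terminating 1, then r in k bits, MSB first. */
    void golomb_encode(unsigned MErrval, int k)
    {
        unsigned q = MErrval >> k;               /* quotient  */
        unsigned r = MErrval & ((1u << k) - 1);  /* remainder */
        for (unsigned i = 0; i < q; i++) put_bit(0);
        put_bit(1);
        for (int b = k - 1; b >= 0; b--) put_bit((r >> b) & 1);
    }
    /* MErrval = 13, k = 2: q = 3, r = 1 -> 000 1 01 = 000101 */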

  20. 4.7 Golomb Code (2/3) • Table I: Golomb code for m = 4 (k = 2); the first entries, reconstructed from the definition above, are: n = 0 → 100, 1 → 101, 2 → 110, 3 → 111, 4 → 0100, 5 → 0101, 6 → 0110, 7 → 0111, …

  21. 4.7 Golomb Code (3/3) • Properties: - smaller n ⇒ shorter codeword - one-pass encoding - no code tables need to be stored - Golomb codes are optimal for one-sided geometric distributions of non-negative integers

  22. 4.8 Mapped-error encoding • Example 1: k = 5 ⇒ m = 2^k = 2^5 = 32. MErrval = 2 ⇒ 2 = q · m + r = 0 · 32 + 2 ⇒ q = 0, r = 2 ⇒ unary part: empty plus the terminating 1; binary part: 00010 ⇒ codeword 100010.

  23. 4.9 Update the variables (1/2) • The variables A, B and N are updated according to the current prediction error:
    A = A + |Errval|
    B = B + Errval
    N = N + 1
• A accumulates the error magnitudes and B the signed errors. • N counts the occurrences of the context.

  24. 4.9 Update the variables (2/2) • The variables before encoding are: A = 64, B = −1, N = 2. • The variables after updating (Example 1, Errval = 1) are: A = A + |Errval| = 64 + 1 = 65; B = B + Errval = −1 + 1 = 0; N = N + 1 = 2 + 1 = 3. (Back to slide 13.)
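In C the update is three accumulations (a sketch; the RESET halving noted in the comment is part of the JPEG-LS standard but not shown on these slides):

    /* 4.9: update the context's counters after coding Errval */
    A += (Errval < 0) ? -Errval : Errval;   /* A = A + |Errval| */
    B += Errval;
    N += 1;
    /* The full standard also halves A, B and N once N reaches a
     * RESET threshold (64 by default) so the statistics keep
     * adapting; the slides omit that step. */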

  25. 5. Run mode • 5.1 Run scanning • 5.2 Run-length coding

  26. 5.1 Run scanning • RUNval is the value of the repeated sample. • RUNcnt counts how many times the sample repeats in run mode:
    RUNval = Ra;
    RUNcnt = 0;
    while (Ix == RUNval) {
        RUNcnt = RUNcnt + 1;
        if (/* end of line */) break;
        Ix = /* next sample */;
    }
• Example 2: RUNval = Ra = 145; Ix = 145 = RUNval ⇒ RUNcnt = 2.

  27. 5.2 Run-length coding • RUNcnt is the value that represents the run length:
    while (RUNcnt > 0) {
        append a 1 to the bit stream;
        RUNcnt = RUNcnt - 1;
    }
• Example 2: RUNcnt = 2 ⇒ 11. (This unary scheme is the simplification shown on the slide; the JPEG-LS standard codes run lengths with an adaptive block code.)

  28. 6. Coded data • Table II: Coded segment. Note: the last five bits in the table are padding bits set to 0.

  29. 7. Results (1/2) • Table III: Compression results on the ISO/IEC 10918-1 image test set (in bits/sample).

  30. 7. Results (2/2) • Table IV: Compression results on the new image test set (in bits/sample).

  31. 8. Conclusion • LOCO-I/JPEG-LS significantly outperforms other schemes of comparable complexity (e.g., JPEG-Huffman), and it attains compression ratios similar or superior to those of higher-complexity schemes based on arithmetic coding (e.g., JPEG-Arithm., CALIC-Arithm.). • LOCO-I performs within a few percentage points of the best available compression ratios (given, in practice, by CALIC), at a much lower complexity level.

  32. 9. Comments • Find a way to modify the JPEG-LS compression algorithm so that it also provides information-hiding capability.

  33. The end • Thank you!!
