
Variable Length Coding


Presentation Transcript


1. Variable Length Coding
• Information entropy
• Huffman code vs. arithmetic code
• Arithmetic coding
• Why CABAC?
• Rescaling and integer arithmetic coding
• Golomb codes
• Binary arithmetic coding
• CABAC

VLC 2008 PART 1

2. Information Entropy
• Information entropy: Claude E. Shannon, 1948, "A Mathematical Theory of Communication"
• The information contained in a statement asserting the occurrence of an event f depends on its probability of occurrence p(f): the self-information is −lg p(f).
• The unit of this quantity is referred to as a bit, since it is the amount of information carried by one (equally likely) binary digit.
• Entropy H is a measure of uncertainty or information content: very uncertain → high information content.
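The entropy measure above is easy to compute directly; a small sketch (the probability vectors are illustrative):

```python
from math import log2

def entropy(probs):
    """Entropy H = -sum p * lg p, in bits per symbol."""
    return -sum(p * log2(p) for p in probs if p > 0)

# A certain event carries no information; spreading probability raises H.
h_certain = entropy([1.0])              # 0.0
h_spread = entropy([0.5, 0.25, 0.25])   # 1.5 bits/symbol
```

For the second pdf, H = 0.5·1 + 0.25·2 + 0.25·2 = 1.5 bits, matching the Huffman example later in the deck.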

3. Entropy Rate
• Conditional entropy H(F|G): the uncertainty of F given G
• Nth-order entropy
• Mth-order conditional entropy
• Entropy rate (lossless coding bound)
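The slide's formulas did not survive the transcript; the standard definitions these bullets refer to, in one common per-symbol convention (lg = log base 2), are:

```latex
H(F \mid G) = -\sum_{f,g} p(f,g)\,\lg p(f \mid g)
    \quad \text{(conditional entropy)}
H_N = \tfrac{1}{N}\, H(S_1, \dots, S_N)
    \quad \text{(Nth-order entropy)}
H_M^{c} = H(S_n \mid S_{n-1}, \dots, S_{n-M})
    \quad \text{(Mth-order conditional entropy)}
\bar{H} = \lim_{N \to \infty} H_N
    \quad \text{(entropy rate, the lossless coding bound)}
```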

4. Bound for Lossless Coding
• Scalar coding: can differ from the entropy by up to 1 bit/symbol
• Vector (block) coding: assign one codeword to each group of N symbols
• Conditional coding (predictive coding, context-based coding): the codeword of the current symbol depends on the pattern (context) formed by the previous M symbols

5. Huffman Coding
[figure: Huffman trees for the two pdfs below]
• Huffman coding for pdf (a1, a2, a3) = (0.5, 0.25, 0.25):
−lg 0.5 = 1, −lg 0.25 = 2
a1 = 0, a2 = 10, a3 = 11
• What if the self-information is not an integer? pdf (a1, a2, a3, a4) = (0.6, 0.2, 0.125, 0.075):
−lg 0.6 = 0.737, −lg 0.2 = 2.32, −lg 0.125 = 3, −lg 0.075 = 3.74
a1 = 0, a2 = 10, a3 = 110, a4 = 111
H1 = 1.5617, R1 = 1.6
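The tree construction behind these codes can be reproduced with a priority queue; a minimal sketch (returning only code lengths, with tie-breaking as an illustrative choice):

```python
import heapq
from itertools import count

def huffman_lengths(probs):
    """Return the Huffman codeword length for each symbol index."""
    tiebreak = count()  # stable ordering for equal probabilities
    heap = [(p, next(tiebreak), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)   # two least probable subtrees
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:                 # every symbol in the merge gains one bit
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, next(tiebreak), s1 + s2))
    return lengths

probs = [0.6, 0.2, 0.125, 0.075]
lengths = huffman_lengths(probs)                   # [1, 2, 3, 3]
rate = sum(p * l for p, l in zip(probs, lengths))  # R1 = 1.6 bits/symbol
```

The lengths 1, 2, 3, 3 match the codewords 0, 10, 110, 111 on the slide, and the average rate 1.6 exceeds the entropy H1 = 1.5617.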

6. Huffman vs. Arithmetic Coding
• Huffman coding: converts a fixed number of symbols into a variable-length codeword
– Efficiency: the use of fixed VLC tables does not allow adaptation to the actual symbol statistics
• Arithmetic coding: converts a variable number of symbols into a variable-length codeword
– Efficiency: processes one symbol at a time; easy to adapt to changes in source statistics; an integer implementation is available

7. Arithmetic Coding
• The number of bits allocated to each symbol can be non-integer
• If pdf(a) = 0.6, then the cost to encode 'a' is 0.737 bits
• For the optimal pdf, the coding efficiency is always better than or equal to that of Huffman coding
• Huffman coding for a2 a1 a4 a1 a1 a3, total 11 bits: 2 + 1 + 3 + 1 + 1 + 3
• Arithmetic coding for a2 a1 a4 a1 a1 a3, total 11.271 bits: 2.32 + 0.737 + 3.74 + 0.737 + 0.737 + 3 (the exact probabilities are preserved)
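The two totals above can be checked directly, using the pdf and Huffman code lengths from the Huffman coding slide:

```python
from math import log2

# pdf and Huffman code lengths from the Huffman coding slide
probs = {'a1': 0.6, 'a2': 0.2, 'a3': 0.125, 'a4': 0.075}
lengths = {'a1': 1, 'a2': 2, 'a3': 3, 'a4': 3}
msg = ['a2', 'a1', 'a4', 'a1', 'a1', 'a3']

huffman_bits = sum(lengths[s] for s in msg)     # 2+1+3+1+1+3 = 11
ideal_bits = sum(-log2(probs[s]) for s in msg)  # about 11.27 (arithmetic coding)
```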

8. Arithmetic Coding
• Basic idea:
– Represent a sequence of symbols by an interval with length equal to its probability
– The interval is specified by its lower boundary (l), upper boundary (u), and length d (= probability)
– The codeword for the sequence is the common bits in the binary representations of l and u
• The interval is computed sequentially, starting from the first symbol:
– The initial interval is determined by the first symbol
– Each subsequent interval is a subinterval of the previous one, determined by the next symbol
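The interval-narrowing rule fits in a few lines; the model below (p(a) = 1/2, p(b) = 1/4) is an assumption chosen so that the message "ab" gets an interval of length 1/8, matching the d(ab) = 1/8 of the example slide:

```python
def arith_interval(message, cdf):
    """Narrow [0, 1) symbol by symbol.

    cdf maps each symbol to its (lower, upper) cumulative probability bounds.
    """
    low, high = 0.0, 1.0
    for s in message:
        span = high - low
        low, high = low + span * cdf[s][0], low + span * cdf[s][1]
    return low, high

# Hypothetical model: p(a) = 1/2, p(b) = 1/4, p(c) = 1/4
model = {'a': (0.0, 0.5), 'b': (0.5, 0.75), 'c': (0.75, 1.0)}
l, u = arith_interval('ab', model)  # [0.25, 0.375): length 1/8 = p(a) * p(b)
```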

9. An Example
[figure: interval subdivision for the message "ab"]
• Any binary value between l and u can unambiguously specify the input message.
• 1/2 = (10…)₂ = (01…1…)₂; 1/4 = (010…)₂ = (001…1…)₂
• d(ab) = 1/8

10. Why CABAC?
• The first standard to use an arithmetic entropy coder is H.263, via its Annex E
• Drawbacks:
– Annex E is applied to the same syntax elements as the VLC elements of H.263
– All the probability models are non-adaptive: their underlying probabilities are assumed to be static
– The generic m-ary arithmetic coder used involves a considerable amount of computational complexity

11. CABAC: Technical Overview
[block diagram: Binarization → Context modeling → Probability estimation → Coding engine, with a feedback arrow updating the probability estimation]
• Binarization: maps non-binary symbols to a binary sequence
• Context modeling: chooses a model conditioned on past observations
• Adaptive binary arithmetic coder (probability estimation + coding engine): uses the provided model for the actual encoding and updates the model

12. CABAC

13. Context-based Adaptive Binary Arithmetic Coding (CABAC)
• Usage of adaptive probability models
• Exploits symbol correlations by using contexts
• Non-integer number of bits per symbol, by using arithmetic codes
• Restriction to binary arithmetic coding: simple and fast adaptation mechanism
• But: binarization is needed for non-binary symbols
• Binarization enables partitioning of the state space

14. Implementation of Arithmetic Coding
• Rescaling and incremental coding
• Integer arithmetic coding
• Binary arithmetic coding
• Huffman trees
• Exp-Golomb codes

15. Issues
• Finite precision (underflow and overflow): as n grows, the two values l(n) and u(n) come closer and closer together. To represent all the subintervals uniquely, we would need to increase the precision as the length of the sequence increases.
• Incremental transmission: transmit portions of the code as the sequence is being observed.
• Integer implementation

16. Rescaling & Incremental Coding
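The slide's figure was not preserved; a minimal floating-point sketch of the two rescaling rules in their standard textbook form (E1: interval entirely in the lower half, emit 0 and double; E2: interval entirely in the upper half, emit 1, shift down, double):

```python
def rescale(low, high, bits):
    """Apply E1/E2 until the interval [low, high) straddles 1/2."""
    while high <= 0.5 or low >= 0.5:
        if high <= 0.5:                 # E1: emit 0, expand the lower half
            bits.append(0)
            low, high = 2 * low, 2 * high
        else:                           # E2: emit 1, expand the upper half
            bits.append(1)
            low, high = 2 * low - 1, 2 * high - 1
    return low, high

bits = []
low, high = rescale(0.6, 0.7, bits)     # emits 1 then 0; interval -> [0.4, 0.8)
```

Each rescaling emits one settled bit and doubles the interval length, which is what keeps the precision bounded.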

17. Incremental Encoding
[figure: evolution of the upper (U) and lower (L) interval bounds during encoding]

18. Questions for Decoding
• How do we start decoding? Decode the first symbol unambiguously.
• How do we continue decoding? Mimic the encoder.
• How do we stop decoding?

19. Incremental Decoding
[worked example, partially lost: interval updates of the form 0.312 + (0.6 − 0.312) × 0.8 and 0.312 + (0.6 − 0.312) × 0.82; the tag falls in the top 18% of [0, 0.8)]

20. Issues in the Incremental Coding

21. Solution

22. Solution (2)

23. Incremental Encoding

24. Incremental Decoding

25. Integer Implementation

26. Integer Implementation
• nj: the number of times symbol j occurs in a sequence of length TotalCount
• FX(k) can be estimated from the cumulative counts; the l and u update rules follow from these counts [formulas not preserved in the transcript]
• E3: if (E3 holds)
– Shift l to the left by 1 and shift 0 into the LSB
– Shift u to the left by 1 and shift 1 into the LSB
– Complement the (new) MSB of l and u
– Increment Scale3
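Slides 20–26 describe the E1/E2/E3 machinery, but the update formulas were largely lost. A hedged sketch of the classic textbook integer coder they correspond to (16-bit precision and the fixed cumulative-count model are illustrative assumptions, not taken from the deck):

```python
PRECISION = 16
FULL = (1 << PRECISION) - 1      # all-ones upper bound
HALF = 1 << (PRECISION - 1)
QUARTER = 1 << (PRECISION - 2)

def encode(seq, cum, total):
    """seq: symbol indices; cum[k] = count of symbols with index < k."""
    low, high, scale3, out = 0, FULL, 0, []

    def emit(bit):
        nonlocal scale3
        out.append(bit)
        while scale3 > 0:        # flush deferred E3 bits, complemented
            out.append(1 - bit)
            scale3 -= 1

    for s in seq:
        span = high - low + 1
        high = low + span * cum[s + 1] // total - 1
        low = low + span * cum[s] // total
        while True:
            if high < HALF:                                  # E1: emit 0
                emit(0)
            elif low >= HALF:                                # E2: emit 1
                emit(1)
                low -= HALF; high -= HALF
            elif low >= QUARTER and high < HALF + QUARTER:   # E3: defer
                scale3 += 1
                low -= QUARTER; high -= QUARTER
            else:
                break
            low, high = 2 * low, 2 * high + 1   # shift 0 into l, 1 into u
    scale3 += 1                  # final disambiguating bits
    emit(0 if low < QUARTER else 1)
    return out

def decode(bits, cum, total, n):
    """Recover n symbols by mimicking the encoder's interval updates."""
    bits = bits + [0] * PRECISION               # zero-pad the tail
    low, high = 0, FULL
    value = int(''.join(map(str, bits[:PRECISION])), 2)
    pos, out = PRECISION, []
    for _ in range(n):
        span = high - low + 1
        scaled = ((value - low + 1) * total - 1) // span
        s = 0
        while cum[s + 1] <= scaled:
            s += 1
        out.append(s)
        high = low + span * cum[s + 1] // total - 1
        low = low + span * cum[s] // total
        while True:
            if high < HALF:
                pass
            elif low >= HALF:
                low -= HALF; high -= HALF; value -= HALF
            elif low >= QUARTER and high < HALF + QUARTER:
                low -= QUARTER; high -= QUARTER; value -= QUARTER
            else:
                break
            low, high = 2 * low, 2 * high + 1
            value = 2 * value + bits[pos]; pos += 1
        out_sym = s  # decoded symbol for this step
    return out
```

A round trip with counts (4, 2, 2), i.e. cum = [0, 4, 6, 8], reproduces the input sequence exactly.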

27. Golomb Codes
• Golomb–Rice codes: a family of codes designed to encode integers under the assumption that the larger an integer, the lower its probability of occurrence.
• The simplest example, the unary code: integer n is coded as n 1s followed by a 0. This is the Huffman code for {1, 2, …} with probability model p(n) = 2^(−n).
• Golomb code with parameter m: code n ≥ 0 using two numbers, the quotient q = ⌊n/m⌋ and the remainder r = n − qm:
– q is coded in unary; r is represented in (truncated) binary
– the first 2^⌈lg m⌉ − m remainder values use ⌈lg m⌉ − 1 bits
– the remaining values use ⌈lg m⌉ bits
• Example, Golomb code with m = 5: remainder r = 3 is coded as 3 + 3 = 6 → 110
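The quotient/remainder split above can be sketched directly; the truncated-binary handling of r follows the bit counts given in the slide:

```python
from math import ceil, log2

def golomb(n, m):
    """Golomb code for n >= 0: unary quotient + truncated-binary remainder."""
    if m == 1:                       # degenerates to the pure unary code
        return '1' * n + '0'
    q, r = divmod(n, m)
    b = ceil(log2(m))
    cutoff = (1 << b) - m            # first `cutoff` remainders get b-1 bits
    if r < cutoff:
        tail = format(r, '0{}b'.format(b - 1))
    else:
        tail = format(r + cutoff, '0{}b'.format(b))
    return '1' * q + '0' + tail

# m = 5: b = 3, cutoff = 3, so r = 3 is coded as 3 + 3 = 6 -> '110'
code = golomb(3, 5)   # '0' (q = 0) + '110'
```

When m is a power of two, cutoff is 0 and every remainder uses exactly lg m bits (the Rice special case).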

28. Golomb Codes
• The Golomb code with parameter m is optimal for a geometric probability model [formula not preserved in the transcript]
• exp-Golomb codes: variable-length codes with a regular construction: [m zeros][1][info]
– code_num: the index to be encoded
– info: an m-bit field carrying information
• Mapping types: ue, te, se, and me
• Designed to produce short codewords for frequently occurring values and longer codewords for less common parameter values.

29. exp-Golomb Codes
• Codeword structure: ﹝m zeros﹞﹝1﹞﹝info﹞
• Decoding:
1. Read m leading zeros followed by a 1.
2. Read the m-bit info field.
3. code_num = 2^m + info − 1
• The group sizes increase exponentially; successive groups start at code_num = 2^1 − 2, 2^2 − 2, 2^3 − 2, …
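The [m zeros][1][info] construction and its decoder fit in a few lines; a sketch:

```python
def exp_golomb_encode(k):
    """Exp-Golomb codeword for code_num k: [m zeros][1][m-bit info],
    where info = k + 1 - 2**m and m = len(bin(k + 1)) - 1."""
    b = format(k + 1, 'b')            # binary representation of k + 1
    return '0' * (len(b) - 1) + b     # m leading zeros, then '1' + info

def exp_golomb_decode(bits):
    """Return (code_num, number of bits consumed) from a bit string."""
    m = bits.index('1')               # count leading zeros
    return int(bits[m:2 * m + 1], 2) - 1, 2 * m + 1

# First codewords: 1, 010, 011, 00100, 00101, 00110, 00111, 0001000, ...
```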

30. exp-Golomb Entropy Coding
• A parameter k to be encoded is mapped to code_num in one of the following ways: ue (unsigned direct), te (truncated), se (signed), me (mapped) [mapping table not preserved in the transcript]
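As one example of these mappings, the signed mapping se used by H.264 interleaves positive and negative values; a sketch with its inverse:

```python
def se_to_code_num(k):
    """H.264 se(v) mapping: 0, 1, -1, 2, -2, ... -> 0, 1, 2, 3, 4, ..."""
    return 2 * k - 1 if k > 0 else -2 * k

def code_num_to_se(c):
    """Inverse mapping: odd code_num -> positive k, even -> non-positive k."""
    return (c + 1) // 2 if c % 2 else -(c // 2)
```

Small-magnitude parameters of either sign thus receive small code_num values, and hence short exp-Golomb codewords.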

31. exp-Golomb Entropy Coding

32. H.264 Coding Parameters

33. Uniqueness and Efficiency of AC (1)
[derivation not preserved; concerns the codeword length −log₂ p(x)]

34. Uniqueness and Efficiency of AC (2)

35. Uniqueness and Efficiency of AC (3)

36. Uniqueness and Efficiency of AC (4)
