
Transformation



  1. Transformation • Last update on June 15, 2010 • Doug Young Suh, suh@khu.ac.kr

  2. Entropy and compression
  • Amount of information = degree of surprise.
  • Entropy and average code length.
  • Information source and coding.
  • Memoryless source: no correlation between successive symbols.
  • Example: a symbol stream “red blue yellow yellow red black red …” is encoded as a bit stream “00011010001100 …”.

  3. Entropy
  • Entropy of a source X with symbol probabilities pᵢ: H(X) = −Σᵢ pᵢ log₂ pᵢ bits/symbol.
  • Example pmfs over symbols {1, 2, 3, 4}: the uniform pmf {1/4, 1/4, 1/4, 1/4} gives H(X) = 2 bits, while the narrower pmf {1/2, 1/4, 1/8, 1/8} gives H(X) = 1.75 bits.
  • What if {0.99, 0.003, 0.003, 0.004}? H(X) ≈ 0.
  • Reduce H(X)!! We need a narrower pdf.
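A minimal sketch (Python, not from the deck) that verifies these entropy values:

```python
import math

def entropy(pmf):
    """Shannon entropy in bits/symbol: H(X) = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in pmf if p > 0)

print(entropy([0.25, 0.25, 0.25, 0.25]))     # 2.0 bits: uniform, maximal surprise
print(entropy([0.5, 0.25, 0.125, 0.125]))    # 1.75 bits: narrower pmf, shorter codes
print(entropy([0.99, 0.003, 0.003, 0.004]))  # ~0.1 bits: almost no surprise
```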

  4. How to get small H(X)?
  • Transformation: represent the signal in the frequency domain, or map it into more probable vectors.
  • Predict to reduce uncertainty: H(X) = “degree of uncertainty”.
  • Prediction by using known information; a sketch follows below.
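As an illustration (not from the deck) of how prediction narrows the distribution, here is a hypothetical sketch that codes the differences between neighboring samples instead of the samples themselves:

```python
import math
from collections import Counter

def entropy(seq):
    """Empirical entropy (bits/symbol) of a symbol sequence."""
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in Counter(seq).values())

# A slowly varying signal: samples spread over many values,
# but neighboring samples differ only a little.
signal = [100, 101, 103, 104, 104, 106, 107, 109, 110, 110]
residual = [signal[0]] + [b - a for a, b in zip(signal, signal[1:])]

print(entropy(signal))    # ~2.92 bits: many distinct sample values
print(entropy(residual))  # ~1.85 bits: differences cluster around 0, 1, 2
```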

  5. Set of ortho-normal vectors
  • The inner product of two orthogonal vectors is 0.
  • Inner product: multiply the entries at the same positions and add the products.
  • Normal: each vector has unit length, i.e. its inner product with itself is 1.
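A quick check of these definitions (illustrative Python, not from the deck), using two of the basis vectors that appear on slide 9:

```python
def inner(a, b):
    """Inner product: add the products of entries at the same positions."""
    return sum(x * y for x, y in zip(a, b))

v0 = [0.5, 0.5, 0.5, 0.5]
v1 = [0.5, 0.5, -0.5, -0.5]

print(inner(v0, v1))  # 0.0 -> orthogonal
print(inner(v0, v0))  # 1.0 -> normal (unit length)
print(inner(v1, v1))  # 1.0 -> normal (unit length)
```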

  6. Mapping to { , }
  • Any vector in the 2-dimensional space can be represented as a weighted sum of 2 ortho-normal vectors.
  • In this case, the weight c0 is stronger.
  • What does it mean that “a weight is large”? The signal contains a large component of that basis vector.

  7. How to get the weights?
  • Inner product for each weight: cᵢ = ⟨x, bᵢ⟩, the inner product of the signal with the i-th basis vector.
  • More generally, x = Σᵢ cᵢ bᵢ with cᵢ = ⟨x, bᵢ⟩ for any set of ortho-normal basis vectors bᵢ.

  8. 2 point DCT/IDCT
  • DCT: F(0) = (f(0) + f(1))/√2, F(1) = (f(0) − f(1))/√2.
  • IDCT: f(0) = (F(0) + F(1))/√2, f(1) = (F(0) − F(1))/√2.
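A round-trip check (a sketch assuming the orthonormal 2-pt DCT above, not code from the deck):

```python
import math

def dct2pt(f):
    """2-pt DCT: project onto the basis (1, 1)/sqrt(2) and (1, -1)/sqrt(2)."""
    s = math.sqrt(2)
    return [(f[0] + f[1]) / s, (f[0] - f[1]) / s]

def idct2pt(F):
    """2-pt IDCT: the same matrix, because the basis is ortho-normal."""
    s = math.sqrt(2)
    return [(F[0] + F[1]) / s, (F[0] - F[1]) / s]

f = [3.0, 1.0]
F = dct2pt(f)        # [2.828..., 1.414...]
print(idct2pt(F))    # [3.0, 1.0]: the round trip recovers the signal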

  9. 4-pt transform
  • 4 basis vectors (i.e. code) possible: C1 = [½ ½ ½ ½], C2 = [½ ½ −½ −½], C3 = [½ −½ −½ ½], C4 = [½ −½ ½ −½].
  • Matrix representation: stack C1…C4 as the rows of a 4×4 matrix A; then F = A·f and, since A is ortho-normal, f = Aᵀ·F. (Checked in the sketch below.)
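A small verification of the ortho-normality and the inverse (illustrative Python, not from the deck):

```python
import numpy as np

# Rows are the four basis vectors C1..C4 from the slide.
A = 0.5 * np.array([[ 1,  1,  1,  1],
                    [ 1,  1, -1, -1],
                    [ 1, -1, -1,  1],
                    [ 1, -1,  1, -1]])

print(np.allclose(A @ A.T, np.eye(4)))  # True: the rows are ortho-normal

f = np.array([10.0, 12.0, 5.0, 3.0])
F = A @ f                        # forward transform: the 4 weights
print(np.allclose(A.T @ F, f))   # True: the inverse recovers the signal
```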

  10. 8-pt DCT with 8D vectors
  • Any set of 8 data points can be represented as a weighted sum of 8 ortho-normal vectors.
  • 8 weights for 8 ortho-normal vectors in the 8-dimensional space.
  • Frequency 0 → DC, constant (not varying).

  11. 8 8D ortho-normal vectors
  • Basis vector for frequency u: bᵤ(x) = (C(u)/2)·cos((2x+1)uπ/16) for x = 0~7, where C(0) = 1/√2 and C(u) = 1 otherwise.
  • For u=1, the cosine completes 1/2 period over x = 0~7.
  • For u=2, 1 period.
  • For u=3, 1.5 periods.
  • …
  • For u=7, 3.5 periods.

  12. 8 8D ortho-normal vectors
  • [Figure: the basis vectors for u = 0 (DC, frequency 0) through u = 3, plotted at n = 0 … 3 … 7, ordered from low frequency to high frequency.]

  13. 8 8D ortho-normal vectors
  • [Figure: the basis vectors for u = 4 through u = 7, plotted at n = 0 … 3 … 7, continuing from low frequency to high frequency.]

  14. 8-pt DCT/IDCT
  • DCT: F(u) = (C(u)/2) · Σ_{x=0..7} f(x) cos((2x+1)uπ/16).
  • IDCT: f(x) = Σ_{u=0..7} (C(u)/2) F(u) cos((2x+1)uπ/16).
  • C(0) = 1/√2 and C(u) = 1 for u > 0.
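Implementing this formula directly (an illustrative Python sketch, not from the deck) reproduces the coefficients on the next three slides; the small differences from the printed numbers are rounding:

```python
import math

def dct8(f):
    """8-pt DCT: F(u) = (C(u)/2) * sum_x f(x) * cos((2x+1)*u*pi/16)."""
    F = []
    for u in range(8):
        c = 1 / math.sqrt(2) if u == 0 else 1.0
        F.append(c / 2 * sum(f[x] * math.cos((2 * x + 1) * u * math.pi / 16)
                             for x in range(8)))
    return F

print([round(v) for v in dct8([255] * 8)])           # DC:   [721, 0, 0, 0, 0, 0, 0, 0]
print([round(v) for v in dct8([255] * 4 + [0] * 4)]) # step: [361, 327, 0, -115, 0, 77, 0, -65]
print([round(v) for v in dct8([255, 0] * 4)])        # alt.: [361, 65, 0, 77, 0, 115, 0, 327]
```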

  15. 8-pt DCT of DC
  • DC: constant signal.
  • Input 255 255 255 255 255 255 255 255 → DCT ≈ 721 0 0 0 0 0 0 0.
  • All the energy lands in the single DC coefficient.

  16. 8-pt DCT of step signal
  • Step signal.
  • Input 255 255 255 255 0 0 0 0 → DCT ≈ 360 326 0 −114 0 77 0 −65.
  • The energy concentrates in the low-frequency coefficients.

  17. 8-pt DCT of high-frequency signal
  • High frequency signal.
  • Input 255 0 255 0 255 0 255 0 → DCT ≈ 360 65 0 77 0 114 0 327.
  • The energy concentrates in the high-frequency coefficients.

  18. 8x8 DCT
  • 2D DCT of the 8x8 block, 64 pixels.
  • Separable: 8-point DCT for the 8 rows of 8 pixels → 8-point DCT for the 8 columns.
  • 8x8 DCT: F(u,v) = (C(u)C(v)/4) · Σ_{x=0..7} Σ_{y=0..7} f(x,y) cos((2x+1)uπ/16) cos((2y+1)vπ/16).
  • 8x8 IDCT: f(x,y) = Σ_{u=0..7} Σ_{v=0..7} (C(u)C(v)/4) F(u,v) cos((2x+1)uπ/16) cos((2y+1)vπ/16).
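The row-then-column claim can be demonstrated with a vectorized version of the 1D transform (illustrative Python, not from the deck):

```python
import numpy as np

def dct8_1d(f):
    """8-pt DCT of a length-8 vector, same formula as on slide 14."""
    x = np.arange(8)
    C = np.where(np.arange(8) == 0, 1 / np.sqrt(2), 1.0)
    basis = np.cos((2 * x[None, :] + 1) * np.arange(8)[:, None] * np.pi / 16)
    return (C / 2) * (basis @ f)

def dct8x8(block):
    """2D 8x8 DCT: 1D DCT on every row, then 1D DCT on every column."""
    rows = np.apply_along_axis(dct8_1d, 1, block)
    return np.apply_along_axis(dct8_1d, 0, rows)

block = np.full((8, 8), 255.0)     # constant (DC) 8x8 block
F = dct8x8(block)
print(round(F[0, 0]))              # 2040 = 255 * 8: all energy in F(0,0)
print(np.allclose(F.flat[1:], 0))  # True: every other coefficient is 0
```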

  19. 8x8 DCT
  • 8-pt DCT calculates weights for 8 1D basic patterns,
  • while 8x8 2D DCT calculates weights for 64 2D basic patterns.
  • For example, each 2D pattern is the product of a horizontal and a vertical 1D pattern (see the next slide).

  20. 8x8 DCT
  • [Figure: the 64 8x8 basis patterns, arranged by horizontal frequency u = 0 … 7 and vertical frequency v = 0 … 7.]

  21. 2D 8x8 DCT
  • By using the 8x8 DCT, 64 weights are calculated and stored, respectively.
  • These are the 2D 8x8 DCT coefficients.

  22. 2D 8x8 DCT at horizontal edge
  • F00 and F01 are large.
  • For u > 0, the coefficients are almost 0.
  • [Figure: a block with a horizontal edge and its DCT.]

  23. 2D 8x8 DCT at vertical edge
  • F00 and F10 are large.
  • For v > 0, the values are small.
  • F10 < 0? Yes, a negative weight simply means the block correlates negatively with that basis pattern, i.e. the edge polarity is reversed relative to the pattern.
  • [Figure: a block with a vertical edge and its DCT.]

  24. Low pass filtering (LPF)
  • Remove (set to 0) the weights of the higher-frequency coefficients, then inverse transform; a sketch follows below.
  • [Figure: the coefficient block before and after the high-frequency region is zeroed.]
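A sketch of the idea (illustrative Python; the diagonal u + v cut-off is an assumed choice, not from the deck):

```python
import numpy as np

def lpf_dct(F, cutoff=4):
    """Zero every coefficient whose total frequency u + v reaches the cutoff."""
    u, v = np.indices(F.shape)
    return np.where(u + v < cutoff, F, 0.0)

F = np.arange(1, 65, dtype=float).reshape(8, 8)  # stand-in coefficient block
print(np.count_nonzero(lpf_dct(F)))              # 10: only low-frequency weights survive
```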

  25. Energy compaction
  • More compression when the energy distribution is concentrated in a few coefficients.
  • The simpler an image, the more compression.
  • DCT is better than DFT for images → closer to the KL transform!
  • The ortho-normal vector patterns of DCT are better suited to images.
  • [Figure: coefficient energy vs. index 0 … 7 for a simple and a complex image, DCT vs. DFT.]

  26. Matrix representation
  • Matrix representation of the 4x4-pt transform: F = A·f·Aᵀ, where the rows of A are the basis vectors C1…C4 from slide 9; then f = Aᵀ·F·A. (Checked in the sketch below.)
  • Complexity ~ O(N³) → needs a fast algorithm.
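A check of the matrix form (illustrative Python, reusing the slide-9 matrix):

```python
import numpy as np

A = 0.5 * np.array([[ 1,  1,  1,  1],
                    [ 1,  1, -1, -1],
                    [ 1, -1, -1,  1],
                    [ 1, -1,  1, -1]])

f = np.arange(16, dtype=float).reshape(4, 4)  # arbitrary 4x4 block
F = A @ f @ A.T                     # forward 2D transform: rows, then columns
print(np.allclose(A.T @ F @ A, f))  # True: the inverse recovers the block
```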

  27. Summary
  • Transform for compression: energy compaction.
  • Transform = inner product with ortho-normal vectors; inverse = weighted sum of them.
  • Weights = frequency coefficients.
  • The transform itself causes no information loss at all!! (Loss is introduced later, by quantization.)

  28. Video encoding
  • [Block diagram: Original video → (+) → DCT → Q → VLC → encoded bitstream. The quantized coefficients also pass through IQ → IDCT into the frame memory; motion estimation compares the input against the stored frame to produce motion vectors, and the motion-compensated prediction is subtracted from the input at the (+) node.]
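A minimal runnable sketch of this hybrid loop (assumes scipy; motion estimation and VLC are omitted, and prediction is simply the previous reconstructed frame):

```python
import numpy as np
from scipy.fft import dctn, idctn

def encode(frames, q=16):
    """Sketch of the slide-28 loop: predict, DCT, quantize, then reconstruct
    the way the decoder would, so encoder and decoder stay in sync."""
    ref = np.zeros_like(frames[0], dtype=float)                # frame memory
    for frame in frames:
        residual = frame - ref                                 # prediction error
        coeffs = np.round(dctn(residual, norm='ortho') / q)    # DCT -> Q
        yield coeffs                                           # would go to the VLC stage
        ref = ref + idctn(coeffs * q, norm='ortho')            # IQ -> IDCT -> frame memory

frames = [np.full((8, 8), v, dtype=float) for v in (100.0, 110.0, 115.0)]
for coeffs in encode(frames):
    print(int(coeffs[0, 0]))  # 50, 5, 2: residual energy shrinks as prediction works
```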
