
On Lossy Compression


Presentation Transcript


  1. On Lossy Compression Paul Vitanyi CWI, University of Amsterdam, National ICT Australia Joint work with Kolya Vereshchagin

  2. You can import music in a variety of formats, such as MP3 or AAC, and at whatever quality level you’d prefer. → Lossy Compression • You can even choose the new Apple Lossless encoder. Music encoded with that option offers sound quality indistinguishable from the original CDs at about half the file size of the original. → Lossless Compression

  3. Lossy Compression drives the Web • Pictures: JPEG • Sound: MP3 • Video: MPEG • The majority of Web transfers are lossy-compressed data: HTTP traffic was overtaken by peer-to-peer music and video sharing in 2002.

  4. Lena Compressed by JPEG [Images: Original Lena Image (256 x 256 Pixels, 24-Bit RGB); JPEG Compressed (Compression Ratio 43:1); JPEG2000 Compressed (Compression Ratio 43:1)] As can be seen from the comparison images, at compression ratios above 40:1 the JPEG algorithm begins to lose its effectiveness, while the JPEG2000 compressed image shows very little distortion.

  5. Rate Distortion Theory Underlies Lossy Compression Claude Elwood Shannon (1948 & 1959) defines rate distortion. (With learning mouse “Theseus” in the picture.)

  6. Rate Distortion • X is a set of source words • Y is a set of code words • If |X| > |Y|, then no code is faithful → Distortion
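
The pigeonhole argument behind this is easy to see concretely. A minimal sketch in Python, under a toy setup of my own (3-bit source words, 2-bit code words, a truncating encoder as one example):

```python
from itertools import product

# Toy instance of the slide's claim: 8 three-bit source words but only
# 4 two-bit code words, so |X| > |Y|.
X = [''.join(b) for b in product('01', repeat=3)]   # source words
Y = [''.join(b) for b in product('01', repeat=2)]   # code words

# Any encoder c: X -> Y must collide by the pigeonhole principle.
# Example encoder: drop the last bit.
c = {x: x[:2] for x in X}

groups = {}
for x, y in c.items():
    groups.setdefault(y, []).append(x)

# Each code word now stands for two source words; no decoder can
# recover both, so some distortion is unavoidable.
for y, xs in sorted(groups.items()):
    print(y, '<-', xs)
```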

  7. Distortion Choose a distortion measure d: X × Y → Real Numbers. [Figure: source word x in X is coded to code word y in Y, with Distortion = d(x,y).] Distortion = fidelity of the coded version versus the source data.

  8. Example Distortion Measures • List Distortion for bit rate R: Source word x is a finite binary string; code word y is a finite set of source words containing x, and y can be described in ≤ R bits. Distortion d(x,y) = log |y| (rounded up to integer value). • Hamming Distortion for bit rate R: Source word x and code word y are binary strings of length n; y can be described in ≤ R bits. Distortion d(x,y) = number of flipped bits between x and y.

  9. Example Distortion Measures • Euclidean Distortion for parameter R: Source word x is a real number in [0,1]; code word y is a rational number that can be described in ≤ R bits. Distortion d(x,y) = |x − y|.
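
A minimal sketch in Python of the three distortion measures defined above (the function names are mine, for illustration):

```python
from math import ceil, log2
from fractions import Fraction

def hamming_distortion(x: str, y: str) -> int:
    """Number of flipped bits between two equal-length bit strings."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

def list_distortion(x: str, y: set) -> int:
    """log |y|, rounded up; y is a finite set of strings containing x."""
    assert x in y
    return ceil(log2(len(y)))

def euclidean_distortion(x: float, y: Fraction) -> float:
    """|x - y| for a real source word and a rational code word."""
    return abs(x - float(y))

print(hamming_distortion('10110', '10011'))           # 2 bit flips
print(list_distortion('101', {'101', '111', '001'}))  # ceil(log2 3) = 2
print(euclidean_distortion(0.3, Fraction(1, 4)))      # 0.05
```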

  10. Distortion-rate function Minimal distortion as a function of given rate R. A random source emits x_1 x_2 ... x_n; coding uses a sequence of codes c_1, c_2, ..., c_n from a prescribed code class, giving y_i = c_i(x_i). Rate constraint: code length |y_1 y_2 ... y_n| ≤ nR bits. Distortion-rate function: D(R) = lim_{n→∞} min ∑_{x_1...x_n} p(x_1 x_2 ... x_n) (1/n) ∑_{i=1}^{n} d(x_i, y_i), where the minimum is over all code sequences c_1, c_2, ..., c_n satisfying the rate constraint.

  11. Rate-distortion function Minimal rate as a function of maximal allowed distortion D. A random source emits x_1 x_2 ... x_n; coding uses a sequence of codes c_1, c_2, ..., c_n from a prescribed code class, giving y_i = c_i(x_i). Distortion constraint: expected distortion ∑_{x_1...x_n} p(x_1 x_2 ... x_n) (1/n) ∑_{i=1}^{n} d(x_i, y_i) ≤ D. Rate-distortion function: R(D) = lim_{n→∞} min ∑_{x_1...x_n} p(x_1 x_2 ... x_n) (1/n) ∑_{i=1}^{n} |y_i|, where the minimum is over all code sequences c_1, c_2, ..., c_n satisfying the distortion constraint. Since D(R) is convex and nonincreasing, R(D) is its inverse.

  12. Function graphs • Rate-distortion graph, list distortion: R(D) = n − D, where |x_i| = n and D = expected log-cardinality of the list (set). • Rate-distortion graph, Hamming distortion: R(D) = n(1 − H(D/n)), where |x_i| = n and D = expected number of bit flips. • Rate-distortion graph, Euclidean distortion: R(D) = log 1/D, where x_i is a real in [0,1] and D = expected distance between x_i and the rational code word y_i.
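
A minimal sketch evaluating the Hamming curve R(D) = n(1 − H(D/n)) from this slide; plain Python, no plotting:

```python
from math import log2

def H(p: float) -> float:
    """Binary entropy in bits; H(0) = H(1) = 0 by convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def hamming_rate(n: int, D: float) -> float:
    """Shannon rate-distortion for Hamming distortion: R(D) = n(1 - H(D/n))."""
    return n * (1 - H(D / n))

n = 100
for D in (0, 10, 25, 50):
    print(f"D = {D:3d}  ->  R(D) = {hamming_rate(n, D):6.1f} bits")
# The curve falls from n at D = 0 to 0 at D = n/2: at rate 0 we can still
# guess each bit, and a random guess is wrong half the time on average.
```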

  13. Problems with this approach • The functions give expectations, or at best a rate-distortion relation for a high-probability set of typical sequences. • It is often assumed that the random source is stationary ergodic (to be able to determine the curve). • This is fine for data that satisfy simple statistical properties, • but not for complex data that satisfy global relations, like images, music, video. • Such complex data are usually atypical. • Just as lossless compression requires lots of tricks to compress meaningful data, so does lossy compression. • There is a wealth of ad hoc theories and solutions for special application fields and problems. Can we find a general theory for lossy compression of individual data?

  14. Andrey Nikolaevich Kolmogorov (1903-1987, born in Tambov, Russia) • Measure Theory • Probability • Analysis • Intuitionistic Logic • Cohomology • Dynamical Systems • Hydrodynamics • Kolmogorov complexity

  15. Background Kolmogorov complexity: Randomness of individual objects. First, a story of Dr. Samuel Johnson: Dr. Beattie observed, as something remarkable which had happened to him, that he chanced to see both No. 1 and No. 1000 hackney-coaches. “Why, sir,” said Johnson, “there is an equal chance for one’s seeing those two numbers as any other two.” (Boswell’s Life of Johnson)

  16. Defining Randomness: Precursor Ideas • Von Mises: A sequence is random if it has about the same # of 1’s and 0’s, and this holds for its ‘reasonably’ selected subsequences. • P. S. Laplace: A sequence is “extraordinary” (nonrandom) because it contains rare “regularity”. • But what is “reasonable”? • A. Wald: Countably many selection functions. • A. Church: Recursive functions. • J. Ville: von Mises-Wald-Church randomness is no good.

  17. Kolmogorov Complexity Solomonoff (1960), Kolmogorov (1965), Chaitin (1969): The amount of information in a string is the size of the smallest program generating that string. Invariance Theorem: It does not matter which universal Turing machine U we choose, i.e., all “encoding methods” are OK.

  18. Kolmogorov complexity • K(x) = length of the shortest description of x • K(x|y) = length of the shortest description of x given y • A string x is random if K(x) ≥ |x| • K(x) − K(x|y) is the information y knows about x • Theorem (Mutual Information): K(x) − K(x|y) = K(y) − K(y|x)
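
K(x) is uncomputable, but any real compressor yields a computable upper-bound proxy. A minimal sketch using zlib; the concatenation trick for the conditional version is a standard heuristic from the compression-distance literature, not an exact K(x|y):

```python
import os
import zlib

def C(s: bytes) -> int:
    """Compressed length in bits: a computable upper-bound proxy for K(s)."""
    return 8 * len(zlib.compress(s, 9))

def C_cond(x: bytes, y: bytes) -> int:
    """Heuristic proxy for K(x|y): extra bits to describe x once y is known."""
    return C(y + x) - C(y)

x = b"abab" * 200      # regular string: proxy complexity far below its length
r = os.urandom(800)    # incompressible string: proxy close to 8 * 800 bits

print(C(x), "bits for the regular string of", 8 * len(x), "bits")
print(C(r), "bits for the random string of", 8 * len(r), "bits")
print(C_cond(x, x), "bits: x is nearly free given x itself")
```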

  19. Applications of Kolmogorov complexity • Mathematics --- probability theory, logic. • Physics --- chaos, thermodynamics. • Computer Science • Biology: complex systems • Philosophy – randomness. • Information theory – Today’s topic.

  20. Individual Rate-Distortion Given datum x, a class of models Y = {y}, and distortion d(x,y): • Rate-distortion function: r_x(d) = min { K(y) : y ∈ Y, d(x,y) ≤ d } • Distortion-rate function: d_x(r) = min { d(x,y) : y ∈ Y, K(y) ≤ r }

  21. Individual characteristics: more detail, especially for meaningful (nonrandom) data • Example, list distortion: data x, y, z of length n, with K(y) = n/2, K(x) = n/3, K(z) = n/9. • All > (1 − 1/n)2^n data strings u of complexity n − log n ≤ K(u) ≤ n + O(log n) have individual rate-distortion curves approximately coinciding with Shannon’s single curve. • Therefore, the expected individual rate-distortion curve coincides with Shannon’s single curve (up to a small error). • Those data are typical data that are essentially ‘random’ (noise) and have no meaning. • Data with meaning that we may be interested in, such as music, text, pictures, are extraordinary (rare) and have regularities expressing that meaning, hence small Kolmogorov complexity, and rate-distortion curves differing in size and shape from Shannon’s.

  22. Upper bound Rate-Distortion graph • For all data x the rate-distortion function is monotonic nonincreasing, and: • r_x(d_max) ≤ K(y_0), where the set of source words X is a ball of radius d_max with center y_0; • r_x(d) ≤ r_x(d’) + log [α B(d’)/B(d)] + O(small) [for all d ≤ d’]. Here B_y(d) = |{x : d(x,y) ≤ d}| is the cardinality of the ball of all data x within distortion d of code word (‘center’) y; we often drop the center when it is understood. This means the function r_x(d) + log B(d) is monotonic nonincreasing up to fluctuations of size O(log α). For all d ≤ d’ such that B(d) > 0, every ball of radius d’ in X can be covered by at most α B(d’)/B(d) balls of radius d. [Figure: a ball of radius d’ covered by balls of radius d ≤ d’.] For this to be useful, we require that α be polynomial in n, the number of bits in data x. This is satisfied for many distortion measures.

  23. Lower Bound Rate-Distortion Graph • r_x(d) ≥ K(x) − log B(d) + O(small). If we have the center of the ball in r_x(d) bits, together with the value d in O(log d) bits, then we can enumerate all B(d) elements and give the index of x in log B(d) bits.
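
The two-part code in this argument can be made concrete for Hamming distortion: spend the rate on the center y, then log B(d) bits on the index of x inside the ball, enumerated in a canonical order that encoder and decoder share. A minimal sketch (my own toy implementation, not from the talk):

```python
from itertools import combinations
from math import comb, ceil, log2

def ball(y, d):
    """Enumerate all strings within Hamming distance d of center y,
    in a canonical order (by number of flips, then flip positions)."""
    n = len(y)
    for k in range(d + 1):
        for flips in combinations(range(n), k):
            z = list(y)
            for i in flips:
                z[i] ^= 1
            yield tuple(z)

def encode(x, y, d):
    """Index of x inside the ball around y: costs about log B(d) bits."""
    for idx, z in enumerate(ball(y, d)):
        if z == tuple(x):
            return idx
    raise ValueError("x not within distortion d of y")

def decode(idx, y, d):
    for i, z in enumerate(ball(y, d)):
        if i == idx:
            return z

y = (0, 0, 0, 0, 0, 0)
x = (0, 1, 0, 0, 1, 0)                           # 2 bit flips from y
d = 2
B = sum(comb(len(y), k) for k in range(d + 1))   # ball size B(d) = 22
idx = encode(x, y, d)
assert decode(idx, y, d) == x
print(f"index {idx} of {B} -> {ceil(log2(B))} bits on top of the center")
```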

  24. Rate-distortion functions of every shape Lemma: Let r(d) + log B(d) be monotonic nondecreasing, with r(d_max) = 0. Then there is a datum x such that |r(d) − r_x(d)| ≤ O(small). That is, for every code and distortion, every function between the lower bound and the upper bound is realized by some datum x (up to a small error, and provided the function decreases at least at the proper slope).

  25. Hamming Distortion Lemma: For n-bit strings, α = O(n^4): there is a cover of a ball of Hamming radius d’ with O(n^4) B(d’)/B(d) balls of Hamming radius d, for every d ≤ d’ (here D is Hamming distance and radius d = D/n). This is a new result (as far as we know) on sparse covering of large Hamming balls by small Hamming balls. Lemma: (i) log B(d) = nH(d) + O(log n), with d = D/n ≤ ½ and H(d) = d log 1/d + (1−d) log 1/(1−d); (ii) d_max = ½, with D = n/2: every string is within n/2 bit flips of either center 00...0 or center 11...1.
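
A minimal sketch checking part (i) numerically: the exact log-cardinality log B(d) = log Σ_{k≤D} C(n,k) against the approximation nH(d):

```python
from math import comb, log2

def log_ball(n: int, D: int) -> float:
    """Exact log2 of the Hamming ball size: B(D) = sum_{k<=D} C(n, k)."""
    return log2(sum(comb(n, k) for k in range(D + 1)))

def H(p: float) -> float:
    """Binary entropy: H(p) = p log 1/p + (1-p) log 1/(1-p)."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

n = 1000
for D in (50, 100, 250, 500):
    print(f"D = {D:4d}:  log B(d) = {log_ball(n, D):8.1f},  "
          f"nH(d) = {n * H(D / n):8.1f}")
# The two columns agree up to O(log n), as the lemma states.
```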

  26. Hamming Distortion, Continued Every monotonic nonincreasing function r(d), such that r(d) + log B(d) is monotonic nondecreasing and r(½) = 0 (that is, lying between the lower and upper bounds and descending at least at the proper slope), can be realized as the rate-distortion function of some datum x, with precision |r(d) − r_x(d)| ≤ O(√n log n) + K(r). [Figure: rate r_x(d) versus distortion d = D/n. Upper bound: n(1 − H(d)). Lower bound: K(x) − nH(d). The actual curve r_x(d) runs between them; the minimum sufficient statistic marks where it leaves the upper bound. At rate K(x) we can describe data x perfectly, with no distortion: d = D/n = 0. At distortion d = D/n = ½ we only need to specify the number of bits of data x, in O(log n) bits.]

  27. Theory to practice, using real compressors, with Steven de Rooij

  Rate        Distortion   Datum x
  16.076816   20.000000    0000100100110011000111
  16.491853   19.000000    00000100000100010100100
  16.813781   18.000000    000001001100100010000101
  17.813781   17.000000    0101010010001010100101000
  18.076816   16.000000    00101101110111001111011101
  18.299208   15.000000    001011011101110011101011100
  19.299208   14.000000    0101010010001001010011010010
  19.884171   13.000000    00001010010101010010100010101
  20.299208   12.000000    001011010010101010010101010100
  20.621136   11.000000    0010100100010101010010100010101
  21.621136   10.000000    01010100100010101010010101010100
  22.106563    9.000000    0010110110011010110100110110110101
  23.106563    8.000000    01010110110011010110100110110110101
  24.106563    7.000000    1101011001010101011010101010111010101
  24.691525    6.000000    110101101010100101011010101010111010101
  26.621136    5.000000    010101001010001010010101010101001010101
  29.206099    4.000000    010101001010001011010100101010101110101
  32.469133    3.000000    0101010010101010010101010101101010111110101
  33.884171    2.000000    0101010110100010101010010101010111110101
  38.130911    1.000000    01010100101000101010101010101010111110101
  42.697952    0.000000    010101010101000101010101010101010111110101
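
A toy reimplementation of the idea behind this experiment (not the authors' code): approximate d_x(r) for a short bit string under Hamming distortion by exhaustive search over all candidate code words, with zlib compressed length standing in for the uncomputable K(y). At this scale compressor overhead dominates, so the curve is crude:

```python
import zlib
from itertools import product

def rate(y: str) -> int:
    """Proxy for K(y): zlib compressed length in bits."""
    return 8 * len(zlib.compress(y.encode(), 9))

def hamming(a: str, b: str) -> int:
    return sum(u != v for u, v in zip(a, b))

def distortion_rate(x: str) -> dict:
    """d_x(r) = min{ d(x,y) : rate(y) <= r }, by exhaustive search."""
    n = len(x)
    best = {}  # rate budget -> minimal distortion seen at exactly that rate
    for bits in product('01', repeat=n):
        y = ''.join(bits)
        r, d = rate(y), hamming(x, y)
        best[r] = min(best.get(r, n), d)
    # Turn the point cloud into a monotone nonincreasing curve.
    curve, lo = {}, n
    for r in sorted(best):
        lo = min(lo, best[r])
        curve[r] = lo
    return curve

x = '0101010101010101'   # 16 bits, highly regular
for r, d in distortion_rate(x).items():
    print(f"rate <= {r:3d} bits  ->  distortion {d}")
```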

  28. Mouse: Original Picture

  29. Mouse: Increasing Rate of Codes

  30. Mouse: MDL code-length

  31. Penguin: Original (Linux)

  32. Penguin: Rate of code-lengths

  33. Euclidean Distortion Lemma: d = |x − y| (Euclidean distance between real datum x and rational code word y); α = 2; d_max = ½; r_x(½) = O(1); r_x(d) ≤ r_x(d’) + log d’/d [for all 0 < d ≤ d’ ≤ ½]. Every nonincreasing function r(d), such that r(d) + log d is monotonic nondecreasing and r(½) = 0, can be realized as the rate-distortion function of some real x, with precision |r(d) − r_x(d)| ≤ O(√(log 1/d)) [for all 0 < d ≤ ½].
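
A minimal sketch of the code words behind Euclidean distortion: truncating x in [0,1] to R bits yields a dyadic rational y with |x − y| ≤ 2^(−R), which is the r(d) ≈ log 1/d trade-off in miniature:

```python
from fractions import Fraction

def dyadic_code(x: float, R: int) -> Fraction:
    """R-bit dyadic rational approximation of x: floor(x * 2^R) / 2^R."""
    return Fraction(int(x * 2**R), 2**R)

x = 0.7236067977  # any real in [0,1]
for R in (1, 2, 4, 8, 16):
    y = dyadic_code(x, R)
    print(f"R = {R:2d} bits: y = {y},  |x - y| = {abs(x - float(y)):.6f} <= 2^-{R}")
```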

  34. List Distortion Lemma: d = |y|, the cardinality of the finite set y (the code word) containing x, with length |x| = n; α = 2; d_max = 2^n; r_x(2^n) = O(log n); r_x(d) ≤ r_x(d’) + log d’/d + O(small) [for all 0 < d ≤ d’ ≤ 2^n]. Every nonincreasing function r(d), such that r(d) + log d is monotonic nondecreasing and r(2^n) = 0, can be realized as the rate-distortion function of some string x of length n, with precision |r(d) − r_x(d)| ≤ O(log n + K(r)) [for all 1 < d ≤ 2^n].
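
A minimal sketch of the straight-line upper bound R(D) = n − D from slide 12 for list distortion; the prefix model below is my illustrative choice, not from the slides:

```python
from itertools import product
from math import log2

def prefix_model(x: str, k: int) -> set:
    """Code word: all strings agreeing with x on the first n-k bits.
    Describable by those n-k prefix bits, so rate ~ n-k, while the
    list distortion is log|y| = k."""
    n = len(x)
    prefix = x[:n - k]
    return {prefix + ''.join(t) for t in product('01', repeat=k)}

x = '1011001110100101'   # n = 16
for k in (0, 4, 8, 16):
    y = prefix_model(x, k)
    assert x in y        # the code word must contain the source word
    print(f"rate ~ {len(x) - k:2d} bits, distortion log|y| = {log2(len(y)):.0f}")
```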

  35. List distortion continued: Distortion-rate graph [Figure: distortion d_x(r) = log |y| versus rate r, with lower bound d_x(r) = K(x) − r.]

  36. List distortion continued: Positive and negative randomness [Figure: two data x and x’ with |x| = |x’| and K(x) = K(x’), whose distortion-rate curves d_x(r) and d_x’(r) nevertheless differ.]

  37. List distortion continued: Precision of following a given function d(r) [Figure: distortion log |y| versus rate r; the curve d_x(r) follows the given function d(r) to within precision d.]

  38. Expected individual rate-distortion equals Shannon’s rate-distortion Lemma: Given m repetitions of an i.i.d. random variable with probability f(x) of obtaining outcome x, where f is a total recursive function (K(f) is finite): lim_{m→∞} ∑_{x^m} p(x^m) (1/m) d_{x^m}(mR) = D(R), where x^m = x_1 ... x_m and p(·) is the extension of f to m repetitions of the random variable.

  39. Algorithmic Statistics Paul Vitanyi CWI, University of Amsterdam, National ICT Australia Joint work with Kolya Vereshchagin

  40. Kolmogorov’s Structure function

  41. Non-Probabilistic Statistics

  42. Classic Statistics--Recalled

  43. Sufficient Statistic

  44. Sufficient Statistic, Cont’d

  45. Kolmogorov Complexity--Revisited

  46. Kolmogorov complexity and Shannon Information

  47. Randomness Deficiency

  48. Algorithmic Sufficient Statistic

  49. Maximum Likelihood Estimator, Best-Fit Estimator

  50. Minimum Description Length estimator, Relations between estimators
