The impossible patent: an introduction to lossless data compression


  1. The impossible patent: an introduction to lossless data compression Carlo Mazza

  2. Plan • Introduction • Formalization • Theorem • A couple of good ideas

  3. Introduction

  4. What is data compression? Data compression is any procedure that reduces the size of a piece of information. It is used today in many applications, especially on digital data: • general-purpose file compression (ZIP, RAR, etc.) • audio compression (MP3, AAC, FLAC, etc.) • image compression (JPG, GIF, PNG, etc.) • video compression (AVI, MP4, WMV, etc.)

  5. (Very) Brief historical overview 1838: Morse code 1940s: Information theory (Shannon, Fano, Huffman)

  6. Screenshot of PKZIP 2.04g, created on February 15, 2007 using DOSBox

  7. Different kinds of compression • Lossless compression: ZIP, RAR, FLAC, PNG • Lossy compression: MP3, JPG, MP4, AAC

  8. Formalization

  9. Lossless compression Lossless compression is compression that does not lose information: there is another operation, decompression, such that compressing and then decompressing a file gives back exactly the original file.

  10. No loss of information • SMS messages (English and Italian): "hi m8, r u k? sry i 4gt 2 cal u lst nite. why dnt we go c movie 2nite? c u l8r" "c 6? xke nn ho bekkato ness1 in 3no? cmq c vdm + trd nel pom" (roughly: "are you there? how come I didn't find anyone on the train? anyway, see you later in the afternoon") • Aoccdrnig to a rscheearch at an Elingsh uinervtisy, it deosn’t mttaer in waht oredr the ltteers in a wrod are, the olny iprmoatnt tihng is taht frist and lsat ltteer is at the rghit pclae. The rset can be a toatl mses and you can sitll raed it wouthit porbelm. Tihs is bcuseae we do not raed ervey lteter by itslef but the wrod as a wlohe.

  11. Loss of information Jane S., a chief sub editor and editor, can always be found hard at work in her cubicle. Jane works independently, without wasting company time talking to colleagues. She never thinks twice about assisting fellow employees, and she always finishes given assignments on time. Often Jane takes extended measures to complete her work, sometimes skipping coffee breaks. She is a dedicated individual who has absolutely no vanity in spite of her high accomplishments and profound knowledge in her field. I firmly believe that Jane can be classed as a high-caliber employee, the type which cannot be dispensed with. Consequently, I duly recommend that Jane be promoted to executive management, and a proposal will be sent away as soon as possible.

  12. Formalization We try to formalize the situation: • let F be a file, i.e., a finite sequence of ones and zeros • let L(F) be the length of the file F • we want a procedure that from F produces another file G such that L(G) ≤ L(F) How many files of length N are there? And of length at most N?
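For reference, the count goes as follows (it is the count the proof on slide 22 will use): each of the N positions holds one of two values, so

#{files of length exactly N} = 2^N, and #{files of length at most N} = 2^1 + 2^2 + ... + 2^N = 2^(N+1) - 2.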

  13. Compression as a function We think of compression as a function f from the set of files to itself such that L(f(F)) ≤ L(F). What properties do we need from this function for the compression to be lossless? • the function f(F) = 0 surely compresses, but it loses information • the function f(F) = F surely loses no information, but it does not compress either What is the property that distinguishes lossless from lossy compression?

  14. Compression as a function As we said before, the compression is lossless if there is another operation which recovers the original file. The function f models a lossless compression if there is another function g such that for every file F we have g(f(F)) = F, that is, (g ∘ f)(F) = (id)(F), i.e., g ∘ f = id. We say that f has a left inverse.
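As a concrete illustration (not part of the original slides), here is a minimal Python sketch of the left-inverse property, using the standard zlib module, a real-world lossless compressor:

import zlib

F = b"aaaabbbcccdd" * 100        # a highly repetitive file
G = zlib.compress(F)             # f: the compression
assert zlib.decompress(G) == F   # g(f(F)) = F: decompression is a left inverse
print(len(F), len(G))            # here G is much shorter than F

The assertion holds for every input F, compressible or not; that is exactly what makes zlib lossless.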

  15. Left inverses and injective maps Theorem: A function f admits a left inverse if and only if it is injective. Proof: Say f is a map from X to Y. Suppose f is injective. Then every y in Y is the image of at most one x in X. We define g by sending every y that is hit back to the unique x with f(x) = y, and every other y wherever we like. Clearly, for every x in X, g(f(x)) = x.

  16. Left inverses and injective maps Proof (cont’d): Suppose now that f admits a left inverse, call it g. Suppose that f(x) = f(x’). Then x = g(f(x)) = g(f(x’)) = x’, so f is injective. We have managed to translate an intuitive property (“losslessness”) into a precise mathematical concept (injectivity).

  17. Theorem

  18. Limits of lossless compression • WEB Technologies • Premier Research Corporation (MINC) • Hyper Space method • Matthew Burch • Pegasus Web Services Inc. (patent 7,096,360) Actually... • Theorem: There is no “perfect” lossless compression.

  19. Proof by contradiction Theorem: There is no injective function f such that L(f(F)) ≤ L(F) for every F, and L(f(F)) < L(F) for at least one F. Proof: Suppose such a function exists. • Let F be a file which is actually compressed, and let G = f(F), so that L(G) < L(F). • Now iterate: consider f(G) = f(f(F)), then f(f(f(F))), and so on. The lengths never increase, so every file in this sequence has length less than L(F). • Since f is injective, the sequence can never hit the same file twice.

  20. Proof (continued) • Since there are only finitely many files of each length, the length must decrease again and again. • But then we eventually arrive at files of length one, from which we cannot go any further, which is a contradiction.

  21. Schubfachprinzip Dirichlet’s principle (1834), the pigeonhole principle • Let f be a function from a set A to a set B. If the number of elements of B is strictly less than that of A, then f is not injective.

  22. Let’s count Theorem: There is no injective function f which compresses at least one file (i.e., L(f(F)) ≤ L(F) for all F, and L(f(F)) < L(F) for at least one F). Proof: Let N be the minimal length of a file which is compressed. The files of length N-1 are 2^(N-1), and so the files of length less than N are 2^(N-1) + 2^(N-2) + ... + 2^1 = 2^N - 2. Since lengths never increase, f sends the files of length less than N, together with the compressed file itself, into the files of length less than N: a set of size 2^N - 2 + 1 into a set of size 2^N - 2. By the pigeonhole principle, f cannot be injective.
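A two-line sanity check of the count used in the proof (an illustrative Python sketch, treating files as nonempty bit strings):

for N in range(2, 20):
    shorter = sum(2**k for k in range(1, N))  # number of files of length 1 .. N-1
    assert shorter == 2**N - 2                # the identity used above

The pigeonhole step then compares this set of size 2^N - 2 with the 2^N - 2 + 1 files that f must squeeze into it.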

  23. Impossible compression So there is no universal compression function. Actually, looking at the proof, it is clear that if some files get shorter, some other files must get longer. So, if we have no good ideas, it is better to leave everything as it is.

  24. A couple of good ideas RLE and prefix codes

  25. Run-Length Encoding Run-Length Encoding (RLE) is one of the oldest compression algorithms: when a symbol repeats, we replace the run with the symbol and the number of its repetitions. • “aaaabbbcccdd” -> “4a3b3c2d” • “mathematics” -> “1m1a1t1h1e1m1a1t1i1c1s” It works badly for messages with few repetitions and very well for messages with many repetitions (e.g., faxes).
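A minimal RLE sketch in Python (an illustration; it assumes the symbols are not digits, so that counts and symbols can be told apart when decoding):

import re
from itertools import groupby

def rle_encode(s: str) -> str:
    # "aaaabbbcccdd" -> "4a3b3c2d": each run becomes its length plus its symbol
    return "".join(f"{len(list(g))}{ch}" for ch, g in groupby(s))

def rle_decode(s: str) -> str:
    # "4a3b3c2d" -> "aaaabbbcccdd": repeat each symbol by its count
    return "".join(ch * int(n) for n, ch in re.findall(r"(\d+)(\D)", s))

assert rle_encode("aaaabbbcccdd") == "4a3b3c2d"
assert rle_decode(rle_encode("mathematics")) == "mathematics"

Note how "mathematics", with no repetitions, doubles in size: exactly the behavior the theorem predicts for some inputs.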

  26. ASCII encoding But we still need to encode the letters and the counts in binary. In general, say we have a text message that we want to compress. The output will be a binary string, so we need to convert letters to binary numbers. One standard is ASCII, which assigns to each character a 7-bit number (a string of 7 ones or zeros, so it encodes 2^7 = 128 symbols).
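The 7-bit encoding can be computed directly in Python (a sketch; real files normally pad ASCII to 8 bits per character):

word = "mathematics"
bits = "".join(format(ord(ch), "07b") for ch in word)  # 7 bits per character
print(len(bits))  # 7 * 11 = 77 bits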

  27. Dictionary encoding We can choose a dictionary whose entries need not be single letters, but possibly longer blocks. We still need some kind of fixed length, though, to be able to separate the counts from the symbols.

  28. Exercise Decode the following binary string: 011000010110000101100010011000110110000101100001011000100110001101100001011000010110001001100011 Hint: it is made of three identical 32-bit blocks: 01100001011000010110001001100011 01100001011000010110001001100011 01100001011000010110001001100011
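A possible solution sketch in Python (assuming, as is usual in practice, that the string is standard ASCII padded to 8 bits per character):

s = ("011000010110000101100010011000110110000101100001011000100110"
     "001101100001011000010110001001100011")
chars = [chr(int(s[i:i+8], 2)) for i in range(0, len(s), 8)]  # one byte per character
print("".join(chars))  # -> "aabcaabcaabc": a dictionary entry "aabc" used three times would compress it well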

  29. Reducing the number of bits • encoding “mathematics” in ASCII requires 7 bits × 11 letters = 77 bits • “mathematics” only has 8 different letters, so 3 bits per letter are enough, 33 bits in total • but we could use fewer bits for the more frequent letters, e.g., a=0 m=1 t=10 h=11 e=100 i=101 c=110 s=111, so “mathematics” becomes “1010111001010101110111” (22 bits) • but that string also encodes “iasaattihas”

  30. Prefix codes We need to make sure that no codeword is a prefix of another codeword: • a=0 b=1 c=10 doesn’t work (“1” is a prefix of “10”) • a=0 b=10 c=11 works Example: international telephone prefixes (+1 USA, +39 Italy) form a prefix code.
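Checking the prefix property mechanically is easy (an illustrative Python sketch; after lexicographic sorting, a codeword that is a prefix of another lands immediately before one of its extensions):

def is_prefix_code(codewords):
    words = sorted(codewords)
    return all(not b.startswith(a) for a, b in zip(words, words[1:]))

assert not is_prefix_code(["0", "1", "10"])  # "1" is a prefix of "10"
assert is_prefix_code(["0", "10", "11"])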

  31. Huffman coding We start from a frequency table of the letters and build a tree following these rules: • create a tree for every letter, with weight equal to its frequency • create a new tree by joining the two trees with the smallest weights (and give it as weight the sum of the two weights) • repeat until only one tree is left To read off the codes, we walk the tree from the root down to each leaf, writing 0 or 1 for each branch taken.
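A compact Python sketch of this construction (illustrative; each tree is represented simply by the partial codewords of its leaves, and at least two distinct letters are assumed):

import heapq
from collections import Counter

def huffman_code(text: str) -> dict:
    # One tree per letter, weighted by its frequency; the counter breaks ties.
    heap = [(w, i, {ch: ""}) for i, (ch, w) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        # Join the two trees of smallest weight, prefixing their codes with 0 and 1.
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        merged = {ch: "0" + c for ch, c in left.items()}
        merged.update({ch: "1" + c for ch, c in right.items()})
        heapq.heappush(heap, (w1 + w2, count, merged))
        count += 1
    return heap[0][2]

code = huffman_code("assassins")
print(code)  # codeword lengths s:1, a:2, i:3, n:3, as on slide 34 (the 0/1 labels may differ)
print(sum(len(code[ch]) for ch in "assassins"))  # 15 bits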

  32. Examples • “aaaabbbccdd” • RLE: “4a3b2c2d” “10000111010111011” (17 bits) • Huffman: [tree diagram] • “mathematics” • RLE: “1m1a1t1h1e1m1a1t1i1c1s” (3 × 11 = 33 bits) • Huffman: [tree diagram]

  33. Building the tree for “assassins” Frequency table: (5,s) (2,a) (1,i) (1,n). [Tree diagram: first join i and n into a tree of weight 2, then join that with a into a tree of weight 4, and finally join the result with s.]

  34. So, in the end: s=0 a=10 i=110 n=111 and “assassins” = 100010001101110 (15 bits). [Tree diagram: from the root, 0 leads to s; 1, 0 leads to a; 1, 1, 0 leads to i; 1, 1, 1 leads to n.] Try “sessions”, “sassafrasses”, “mummy”, “beekeeper”, but not “mathematics”.

  35. Advantages and disadvantages • RLE: one can start compressing at once (there is no need to read the whole message to build a frequency table first) • RLE: works especially well when there are few symbols and lots of repetitions • Huffman: works well when the frequencies are not all close to each other (natural language) • Huffman: works especially well when the symbol frequencies are powers of two

  36. That’s all folks!

  37. (Very) Brief history of data compression • 1838: Morse code • 1940s: Information theory (Shannon, Fano, Huffman) • 1970s: Lempel–Ziv algorithms (later extended by Welch into LZW), Microsoft, Apple • 1980s: ARJ, PKZIP, LHarc (BBSes and newsgroups) • 1990s: JPG, MP3 (“the web” and browsers), 1994: Yahoo, 1998: Google • 2001: dot-com bubble • 2004: Facebook
