
Lossless Compression Based on the Sequence Memoizer


Presentation Transcript


  1. Lossless Compression Based on the Sequence Memoizer Jan Gasthaus, Frank Wood, Yee Whye Teh Presented by Yingjian Wang Jan. 11, 2011

  2. Outline • Background • The sequence memoizer • The PLUMP • Experiment results

  3. Background • 2006, Teh, ‘A hierarchical Bayesian language model based on Pitman-Yor processes’: N-gram Markov chain language model with the HPY prior. • 2009, Wood, ‘A Stochastic Memoizer for Sequence Data’: the Sequence Memoizer (SM) with a linear space/time inference scheme. (lossless) • 2010, Gasthaus, ‘Lossless compression based on the Sequence Memoizer’: combines the SM with an arithmetic coder to build a compressor (dePLUMP), together with an incremental inference algorithm. • 2010, Gasthaus, ‘Improvements to the sequence memoizer’: an enlarged range of hyperparameters, a memory-efficient representation, and inference algorithms for the new representation. • 2010, Bartlett, ‘Forgetting Counts: Constant Memory Inference for a Dependent HPY’: constant-memory inference for the SM, using a dependent HPY. (with loss)

  4. Sequence memoizer • Non-Markov (unbounded-depth Markov) model: • Hierarchical Pitman-Yor prior:
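
  For reference, a minimal sketch of the standard SM formulation behind the two bullets above (notation following Wood et al., 2009; the discount and concentration symbols d and c are assumptions, not taken from this slide):

      P(x_{1:T}) = \prod_{t=1}^{T} G_{x_{1:t-1}}(x_t)

      G_{\varepsilon} \sim \mathrm{PY}(d_0, c_0, H), \qquad
      G_{u} \mid G_{\sigma(u)} \sim \mathrm{PY}(d_{|u|}, c_{|u|}, G_{\sigma(u)})

  Here u ranges over all contexts, \sigma(u) is u with its oldest symbol dropped (its longest proper suffix), and H is the uniform base distribution over the symbol alphabet.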

  5. Sequence memoizer (2) Achieve linear time/space inference by collapsing the non-branching restaurants. Number of restaurants: O(T^2) → O(T)
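
  A minimal illustrative sketch of the collapsing idea (not the paper's implementation; class and function names are made up): build the naive context trie with one restaurant per distinct context, then splice out every non-branching chain so that only branching contexts keep a node, taking the node count from quadratic to linear in the sequence length.

    class Node:
        def __init__(self, edge=""):
            self.edge = edge        # symbols on the edge entering this node
            self.children = {}      # first symbol of an edge -> child Node

    def build_context_trie(seq):
        """Naive trie with one node per distinct context: O(T^2) nodes."""
        root = Node()
        for i in range(len(seq)):
            node = root
            for sym in seq[i:]:
                if sym not in node.children:
                    node.children[sym] = Node(edge=sym)
                node = node.children[sym]
        return root

    def collapse(node):
        """Splice out non-branching chains (path compression): O(T) nodes."""
        for child in node.children.values():
            collapse(child)
            while len(child.children) == 1:          # non-branching chain
                (grand,) = child.children.values()
                child.edge += grand.edge             # absorb the edge label
                child.children = grand.children

    def count_nodes(node):
        return 1 + sum(count_nodes(c) for c in node.children.values())

    seq = "abracadabra"
    trie = build_context_trie(seq)
    print("restaurants before collapsing:", count_nodes(trie))
    collapse(trie)
    print("restaurants after collapsing: ", count_nodes(trie))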

  6. PLUMP (Power Law Unbounded Markov Prediction, www.deplump.com) • The first contribution: symbol unit: byte; vocabulary size: 2^8 = 256; coding: entropy coding (measured in average bits per symbol)
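
  A minimal sketch of why compressor quality is reported in average bits per symbol (the model interface with prob and update methods is hypothetical): with an arithmetic coder, the code length is essentially the predictive model's log loss.

    import math

    def average_bits_per_symbol(model, data):
        """data: a bytes object (256-symbol vocabulary, one byte per symbol)."""
        total_bits = 0.0
        context = []
        for symbol in data:
            p = model.prob(symbol, context)   # predictive P(symbol | context)
            total_bits += -math.log2(p)       # ideal arithmetic-coding cost
            model.update(symbol, context)     # incremental model update
            context.append(symbol)
        return total_bits / len(data)

  An arithmetic coder realizes this ideal cost to within a couple of bits over the whole file, so average bits per symbol is a fair proxy for compressed size.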

  7. PLUMP (2) • The second contribution: a new inference method, 1PF. Since MCMC is computationally intensive, at the i-th step only the restaurants associated with the i-th observation are updated. This corresponds to sequential Monte Carlo (a particle filter) with a single particle. • Two justifications: first, the posterior is unimodal, so the particle filter easily obtains a sample close to the mode, and that single sample is likely to perform well; second, much of the predictive ability of the SM comes from the hierarchical sharing of counts, which is already present in a single sample.
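
  A minimal sketch of the 1PF update loop (the restaurants_on_path and seat_customer helpers are hypothetical names): for each new observation, only the restaurants along its context path are touched, and table creations propagate toward the root exactly as in the usual HPY seating scheme.

    def one_particle_filter(sequence, tree):
        """Single-particle incremental inference over the restaurant tree."""
        for t, symbol in enumerate(sequence):
            context = sequence[:t]
            # Restaurants ordered from the deepest matching context up to the root.
            path = tree.restaurants_on_path(context)
            for restaurant in path:
                new_table = restaurant.seat_customer(symbol)
                if not new_table:
                    break   # no new table created, so nothing to propagate upward
        return tree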

  8. Experiment results Calgary corpus: a well-known compression benchmark consisting of 14 files of different types and varying lengths.

  9. Experiment results (2) • With a larger vocabulary: a Chinese-language Bible. • Vocabulary size: 2^16; 16-bit characters to compress Big-5 encoded Chinese text. • Average bits per symbol: 4.35 bits per Chinese character, better than the 5.44 bits of PPMZ.
