
A Robust Main-Memory Compression Scheme



  1. A Robust Main-Memory Compression Scheme Magnus Ekman and Per Stenström Chalmers University of Technology Göteborg, Sweden

  2. Motivation
  • Memory resources are wasted to compensate for the increasing processor/memory/disk speed gap: >50% of die size is occupied by caches, and >50% of the cost of a server is DRAM (and increasing)
  • Lossless data compression techniques have the potential to free up more than 50% of memory resources
  • Unfortunately, compression introduces several challenging design and performance issues

  3. [Diagram] Baseline system: a cache miss (address 256) from the shared L2 cache is sent directly to the corresponding location in the uncompressed main-memory space

  4. [Diagram] Compressed system: the cache miss address (256) is first looked up in a translation table, which maps it to a location in the fragmented, compressed main-memory space; the data is then decompressed on its way back to the L2 cache

  5. Contributions
  A low-overhead main-memory compression scheme:
  • Low decompression latency by using a simple and fast (zero-aware) algorithm
  • Fast address translation through a proposed small translation structure that fits on the processor die
  • Reduction of fragmentation through occasional relocation of data when compressibility varies
  Overall, our compression scheme frees up 30% of the memory at a marginal performance loss of 0.2%!

  6. Outline • Motivation • Issues • Contributions • Effectiveness of Zero-Aware Compressors • Our Compression Scheme • Performance Results • Related Work • Conclusions

  7. Frequency of zero-valued locations
  • 12% of all 8KB pages contain only zeros
  • 30% of all 64B blocks contain only zeros
  • 42% of all 4B words contain only zeros
  • 55% of all bytes are zero!
  Zero-aware compression schemes have great potential!
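
  As a rough illustration of how zero-frequency numbers like these can be gathered, the sketch below scans a memory snapshot and counts zero-valued bytes, 4-byte words, and 64-byte blocks. The function and buffer names are illustrative and not part of the paper's toolchain.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Count zero-valued bytes, 4-byte words, and 64-byte blocks in a memory
 * snapshot. 'snapshot' and 'len' are hypothetical inputs (e.g. a dumped
 * benchmark image); the granularities match the slide (1B, 4B, 64B). */
static void count_zero_granularities(const uint8_t *snapshot, size_t len)
{
    size_t zero_bytes = 0, zero_words = 0, zero_blocks = 0;

    for (size_t i = 0; i < len; i++)
        zero_bytes += (snapshot[i] == 0);

    for (size_t i = 0; i + 4 <= len; i += 4) {
        uint32_t w;
        memcpy(&w, snapshot + i, sizeof w);   /* avoids alignment issues */
        zero_words += (w == 0);
    }

    for (size_t i = 0; i + 64 <= len; i += 64) {
        size_t j = 0;
        while (j < 64 && snapshot[i + j] == 0)
            j++;
        zero_blocks += (j == 64);
    }

    printf("zero bytes: %zu / %zu, zero 4B words: %zu, zero 64B blocks: %zu\n",
           zero_bytes, len, zero_words, zero_blocks);
}
```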

  8. Evaluated Algorithms
  Zero-aware algorithms:
  • FPC (Alameldeen and Wood) + 3 simplified versions
  For comparison, we also consider:
  • X-Match Pro (efficient hardware implementations exist)
  • LZSS (popular algorithm, previously used by IBM for memory compression)
  • Deflate (upper bound on compressibility)

  9. Resulting Compressed Sizes
  [Chart: compressed sizes for the SpecInt, SpecFP, and Server workloads]
  Main observations:
  • FPC and all its variations can free up about 45% of memory
  • LZSS and X-MatchPro are only marginally better in spite of their complexity
  • Deflate can free up about 80% of memory, but it is not clear how to exploit this
  Fast and efficient compression algorithms exist!

  10. Outline • Motivation • Issues • Contributions • Effectiveness of Zero-Aware Compressors • Our Compression Scheme • Performance Results • Related Work • Conclusions

  11. Address translation
  [Diagram: TLB and Block Size Table (BST) on chip; uncompressed, compressed, and compressed fragmented data in memory; an address calculator combines the block size vector with the miss address]
  • OS changes: the block size vector is kept in the page table
  • Each page is assigned one out of k predefined sizes, so the physical address grows by log2(k) bits
  • Each block is assigned one out of n predefined sizes (n = 4 in this example)
  The Block Size Table enables fast translation!
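
  To make the translation concrete, the sketch below shows what the address calculator conceptually does with a BST entry: it sums the compressed sizes of the blocks that precede the requested block within the page. The size classes (0, 22, 44, 64 bytes) and the 2-bit per-block encoding are illustrative assumptions, not the paper's exact parameters.

```c
#include <stdint.h>

/* Illustrative assumptions: a 64B uncompressed block compresses into one of
 * n = 4 size classes, encoded in 2 bits per block in the block size vector.
 * The class sizes below are examples, not the paper's exact values. */
static const unsigned block_class_bytes[4] = { 0, 22, 44, 64 };

/* The byte offset of block 'idx' within its compressed page is the sum of
 * the compressed sizes of all preceding blocks in the same page. */
static unsigned compressed_block_offset(const uint8_t *size_vec, unsigned idx)
{
    unsigned offset = 0;
    for (unsigned i = 0; i < idx; i++)
        offset += block_class_bytes[size_vec[i] & 0x3];
    return offset;
}
```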

  12. Size changes and compaction
  [Diagram: a page split into sub-page 0 and sub-page 1, each with sub-page slack at its end and additional page slack at the end of the page; a growing block causes a block overflow, a shrinking block a block underflow]
  Terminology: block overflow/underflow, sub-page overflow/underflow, page overflow/underflow

  13. Handling of overflows/underflows
  • Block and sub-page overflows/underflows imply moving data within a page
  • On a page overflow/underflow the entire page needs to be moved, to avoid having to move several pages
  • Block and sub-page overflows/underflows are handled in hardware by an off-chip DMA engine
  • On a page overflow/underflow a trap is taken and the mapping for the page is changed
  The processor has to stall if it accesses data that is being moved!
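
  A minimal sketch of this escalation policy is given below: when a written-back block is recompressed into a different size, the change is absorbed by sub-page slack, by page slack, or, failing that, by a full page relocation. The byte-level interface and slack accounting are assumptions made for illustration; the actual hardware operates on predefined size classes.

```c
/* Possible outcomes when a recompressed block changes size. */
enum size_change_action {
    FITS_IN_SLOT,        /* same size: nothing to do                         */
    BLOCK_RELOCATION,    /* block overflow/underflow: DMA engine shifts data
                            within the sub-page                              */
    SUBPAGE_RELOCATION,  /* sub-page overflow/underflow: DMA engine moves
                            sub-pages within the page                        */
    PAGE_RELOCATION      /* page overflow/underflow: trap to the OS, which
                            remaps the whole page                            */
};

/* Classify a size change given the slack (in bytes) still available at the
 * sub-page and page level. All quantities here are illustrative. */
enum size_change_action
classify_size_change(int old_bytes, int new_bytes,
                     int subpage_slack, int page_slack)
{
    int growth = new_bytes - old_bytes;

    if (growth == 0)
        return FITS_IN_SLOT;
    if (growth <= subpage_slack)
        return BLOCK_RELOCATION;      /* absorbed by sub-page slack */
    if (growth <= page_slack)
        return SUBPAGE_RELOCATION;    /* absorbed by page slack     */
    return PAGE_RELOCATION;           /* whole page must move       */
}
```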

  14. Putting it all together
  [Diagram: two cores with L1 caches share an L2 cache; the BST, address calculator, compressor, and decompressor sit on chip, while a DMA engine relocates data among the sub-pages of each page in compressed memory]

  15. Experimental Methodology
  Key issues to experimentally evaluate:
  • Compressibility and impact of fragmentation
  • Performance losses for our approach
  Simulation approaches (both using Simics):
  • A fast functional simulator (in-order, 1 instr/cycle) allowing entire benchmarks to be run
  • A detailed microarchitecture simulator driven by a single SimPoint per application

  16. Architectural Parameters
  [Table of processor, cache, and memory parameters]
  Loads to a block only containing zeros can retire without accessing memory!
  1 Aggressive values for future processors, chosen so as not to give an advantage to our compression scheme
  2 Conservatively assumes 200 MHz DDR2 DRAM, which leads to a long lock-out time
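
  The zero-block shortcut can be sketched as follows: if the size class recorded in the Block Size Table marks a block as all zeros, the miss is satisfied on chip without a DRAM access. The arrays below are software stand-ins for what is hardware in the actual design, and treating size class 0 as "all zeros" is an assumption.

```c
#include <stdint.h>
#include <string.h>

#define BLOCK_BYTES     64
#define SIZE_CLASS_ZERO 0   /* assumption: size class 0 means "all zeros" */

/* Software stand-ins for on-chip/off-chip state; in the real design these
 * are the Block Size Table and the compressed DRAM plus decompressor. */
static uint8_t block_size_class[1024];
static uint8_t backing_store[1024][BLOCK_BYTES];

/* On an L2 miss, a block whose size class marks it as all-zero can be
 * materialized on chip, so the load retires without a memory access. */
static void service_l2_miss(unsigned block_idx, uint8_t out[BLOCK_BYTES])
{
    if (block_size_class[block_idx] == SIZE_CLASS_ZERO) {
        memset(out, 0, BLOCK_BYTES);          /* no DRAM access needed */
        return;
    }
    /* Otherwise translate, fetch, and decompress as usual (a plain copy
     * here stands in for the compressed-memory path). */
    memcpy(out, backing_store[block_idx], BLOCK_BYTES);
}
```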

  17. Benchmarks SPEC2000 was run with the reference input set; SAP and SpecJBB were run for 4 billion instructions per thread

  18. Overflows/Underflows
  [Chart: overflow/underflow frequencies; note that the y-axis is logarithmic]
  Main observations:
  • About 1 out of every thousand instructions causes a block overflow/underflow
  • The use of sub-pages cuts down the number of page-level relocations by one order of magnitude
  • Memory bandwidth goes up by 58% with sub-pages and 212% without; note that this is not the bandwidth to the processor chip
  Fragmentation, with the use of a hierarchy of pages, sub-pages, and blocks, reduces the memory savings to 30%

  19. Detailed Performance Results
  We used a single SimPoint per application, according to [Sherwood et al. 2002]
  Main observations:
  • Decompression latency reduces performance by 0.5% on average
  • Misses to zero-valued blocks increase performance by 0.5% on average
  • Also factoring in relocation/compaction losses, the overall performance loss is only 0.2%!

  20. Related Work
  Early work on main-memory compression:
  • Douglis [1993], Kjelso et al. [1999], and Wilson et al. [1999]
  • These works aimed at reducing paging overhead, so the significant decompression and address-translation latencies were offset by the wins
  More recently:
  • IBM MXT [Abali et al. 2001]: compresses the entire memory with LZSS (64-cycle decompression latency), translates addresses through a memory-resident translation table, and shields the latency with a (at that point in time) huge 32-MByte cache; sensitive to working-set size
  Compression algorithm:
  • Inspired by the frequent-value locality work by Zhang, Yang, and Gupta [2000]
  • Compression algorithm from Alameldeen and Wood [2004]

  21. Concluding Remarks
  It is possible to free up significant amounts of memory resources with virtually zero performance overhead. This was achieved by
  • exploiting zero-valued bytes, which account for as much as 55% of the memory contents
  • leveraging a fast compression/decompression scheme
  • a fast translation mechanism
  • a hierarchical memory layout which offers some slack at the block, sub-page, and page level
  Overall, 30% of memory could be freed up at a performance loss of only 0.2% on average

  22. Backup Slides

  23. Fragmentation - Results

  24. % misses to zero-valued blocks • For gap, gcc, gzip, and mesa, more than 20% of the misses request zero-valued blocks; for the rest, the percentage is quite small

  25. Frequent Pattern Compression (Alameldeen and Wood)
  Each 32-bit word is coded using a prefix plus data; for example, a run of zero-valued words is encoded as a prefix plus 3 data bits (for runs of up to 8 "0" words)
  [Table of FPC prefixes and patterns]
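
  A minimal sketch of the zero-run case of such an encoder is shown below; the printed output is purely illustrative (a real encoder emits bits), and the full FPC pattern table is not reproduced.

```c
#include <stdint.h>
#include <stdio.h>

/* Zero-run case of an FPC-style encoder: each 32-bit word carries a 3-bit
 * prefix, and a run of up to 8 zero-valued words collapses into a single
 * prefix plus 3 data bits holding the run length. Other FPC patterns
 * (sign-extended halfwords, repeated bytes, ...) are not shown. */
static void encode_zero_runs(const uint32_t *words, size_t n)
{
    size_t i = 0;
    while (i < n) {
        if (words[i] == 0) {
            size_t run = 1;
            while (run < 8 && i + run < n && words[i + run] == 0)
                run++;
            /* 3-bit prefix + 3-bit (run-1): 6 bits replace up to 256 bits */
            printf("ZERO_RUN len=%zu\n", run);
            i += run;
        } else {
            printf("LITERAL 0x%08x\n", (unsigned)words[i]);
            i++;
        }
    }
}
```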
