
Lecture 7 Memory Hierarchy and Cache Design



  1. Lecture 7: Memory Hierarchy and Cache Design. Pradondet Nilagupta (based on notes by Robert F. Hodson, CNU, and Prof. Mike Schulte)

  2. Why is memory important? • Processor performance has increased at a much faster rate than memory performance, making main memory the bottleneck (the CPU-DRAM gap) • 1980: no cache in the microprocessor; 1995: two-level cache, consuming 60% of the transistors on the Alpha 21164 microprocessor

  3. General Principles of Memory • Locality • Temporal locality: referenced memory is likely to be referenced again soon (e.g., code within a loop) • Spatial locality: memory close to referenced memory is likely to be referenced soon (e.g., data in a sequentially accessed array) • Locality + the fact that smaller hardware is faster = memory hierarchy • Levels: each is smaller, faster, and more expensive per byte than the level below • Inclusive: data found in an upper level is also found in the levels below • Definitions • Upper: closer to the processor • Block: the minimum unit that is either present or not present in the upper level • Block address: location of the block in memory • Hit time: time to access the upper level, including the time to determine hit or miss
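
A minimal sketch (an addition, not from the slides) showing both kinds of locality in a single loop:

```c
/* Sketch: summing an array exhibits both kinds of locality.
 * The loop instructions and `total` are reused every iteration (temporal);
 * a[i] and a[i+1] occupy adjacent memory, so they usually share a cache
 * block (spatial). */
#include <stddef.h>

long sum_array(const int *a, size_t n) {
    long total = 0;
    for (size_t i = 0; i < n; i++)  /* sequential access: spatial locality */
        total += a[i];              /* `total` reused: temporal locality   */
    return total;
}
```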

  4. Memory Hierarchy • From closest to the processor to farthest: Processor Registers, L1 Cache, L2 Cache, DRAM, Secondary Storage (Disks)

  5. Differences in Memory Levels

  6. Cache: A Definition • Webster's Dictionary: • cache (kash), a safe place for hiding or storing things • In CS (originally): • the first level of the memory hierarchy encountered after the CPU • In CS today (typically): • a term applied to any level of buffering employed to reuse commonly accessed items

  7. Cache Measures • Hit rate: fraction of accesses found in that level • Usually so high that we talk about the miss rate instead • Miss rate fallacy: miss rate can be as misleading a measure of memory performance as MIPS is of CPU performance; average memory access time is the better metric • Average memory access time = Hit time + Miss rate x Miss penalty • Miss penalty: time to replace a block from the lower level, plus the time to deliver the block to the CPU • Access time: time to reach the lower level = f(lower-level latency) • Transfer time: time to transfer the block = f(bandwidth between upper and lower levels, block size)
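
As a worked illustration of the formula above, a small helper (the function name is my own) computes AMAT in cycles:

```c
/* Sketch: average memory access time per the slide's formula,
 * AMAT = hit_time + miss_rate * miss_penalty (all times in cycles). */
double amat(double hit_time, double miss_rate, double miss_penalty) {
    return hit_time + miss_rate * miss_penalty;
}
/* Example: amat(1.0, 0.02, 50.0) == 2.0 cycles. */
```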

  8. Block Size vs. Cache Measures • Increasing block size generally increases the miss penalty and decreases the miss rate • [Figure: Miss Penalty x Miss Rate = Avg. Memory Access Time (schematically), each plotted against Block Size.]

  9. Implications for CPU • The hit check must be fast, since it occurs on every memory access • Hit is the common case • Memory access time becomes unpredictable • 10s of clock cycles: stall and wait • 1000s of clock cycles: interrupt, switch, and do something else • New style: multithreaded execution • How to handle a miss? (10s of cycles => HW, 1000s of cycles => SW)

  10. Four Questions for Memory Hierarchy Designers • Q1: Where can a block be placed in the upper level? (Block placement) • Q2: How is a block found if it is in the upper level? (Block identification) • Q3: Which block should be replaced on a miss? (Block replacement) • Q4: What happens on a write? (Write strategy)

  11. Q1: Where can a block be placed in the upper level? • Direct Mapped: Each block has only one place that it can appear in the cache. • Fully associative: Each block can be placed anywhere in the cache. • Set associative: Each block can be placed in a restricted set of places in the cache. • If there are n blocks in a set, the cache placement is called n-way set associative • What is the associativity of a direct mapped cache?

  12. Associativity Examples (Figure 5.2, pg. 376) • Fully associative: block 12 can go anywhere • Direct mapped: block no. = (block address) mod (no. of blocks in cache); block 12 can go only into block 4 (12 mod 8) • Set associative: set no. = (block address) mod (no. of sets in cache); block 12 can go anywhere in set 0 (12 mod 4)
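
The mod arithmetic from the figure can be checked directly; this sketch (an addition, not from the slides) reproduces the two placements of block 12:

```c
/* Sketch of the slide's placement rules for block address 12 in an
 * 8-block cache (numbers match Figure 5.2). */
#include <stdio.h>

int main(void) {
    int block_addr = 12;
    int num_blocks = 8;   /* direct mapped: 8 blocks            */
    int num_sets   = 4;   /* 2-way set associative: 4 sets      */
    printf("direct mapped  -> block %d\n", block_addr % num_blocks); /* 4 */
    printf("set associative -> set %d\n",  block_addr % num_sets);   /* 0 */
    return 0;
}
```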

  13. Q1: Where can a block be placed in the upper level? • [Figure: block 12 of a 32-block memory (blocks 0-31) placed in an 8-block cache under direct mapped, set associative, and fully associative placement.]

  14. Q2: How Is a Block Found If It Is in the Upper Level? • The address can be divided into two main parts • Block offset: selects the data within the block; offset size = log2(block size) • Block address: tag + index • Index: selects the set in the cache; index size = log2(#blocks / associativity) • Tag: compared to the tag in the cache to determine a hit; tag size = address size - index size - offset size
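
A sketch of this address split, assuming an illustrative geometry of 32-byte blocks in a 2-way, 256-block cache (these numbers are my assumptions, not from this slide):

```c
/* Sketch: splitting an address into tag, index, and block offset,
 * following the slide's log2 sizing. */
#include <stdint.h>

#define BLOCK_SIZE     32u   /* bytes -> 5 offset bits             */
#define NUM_BLOCKS    256u
#define ASSOCIATIVITY   2u   /* -> 128 sets -> 7 index bits        */

#define OFFSET_BITS 5u       /* log2(BLOCK_SIZE)                   */
#define INDEX_BITS  7u       /* log2(NUM_BLOCKS / ASSOCIATIVITY)   */

uint32_t block_offset(uint32_t addr) { return addr & (BLOCK_SIZE - 1); }
uint32_t set_index(uint32_t addr) {
    return (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
}
uint32_t tag(uint32_t addr) { return addr >> (OFFSET_BITS + INDEX_BITS); }
```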

  15. Q2: How Is a Block Found If It Is in the Upper Level? • There is a tag on each block • No need to check the index or block offset • Increasing associativity shrinks the index and expands the tag • [Figure: block address = Tag | Index, followed by the Block Offset; a fully associative cache has no index field, while a direct mapped cache has the largest index and smallest tag.]

  16. Address Organization for a Set Associative Cache • An n-bit CPU address is divided into an i-bit Tag, a j-bit Index, and a k-bit Block Offset, with n = i + j + k

  17. Cache Organization • N-way set associative means there are N blocks per set • The j-bit index is used to select one of the 2^j sets (SET 0 through SET 2^j - 1) • [Figure: each set holds N tag/block pairs.]

  18. Cache Organization (continued) • There must also be a mechanism to indicate invalid block data; this is commonly done by attaching a valid bit to the tag field (not shown in the figure) • The tag is associatively compared to all tags in the selected set; if there is a match, we have a cache hit

  19. Cache Organization (continued) • Each block is broken into 2^k elements, and the block offset is used to select which element • The tag and index identify the block, and the block offset identifies the element within it

  20. Q3: Which Block Should Be Replaced on a Miss? • Easy for direct mapped • Set associative or fully associative: • Random: easier to implement • Least recently used (LRU): harder to implement; may be approximated • Miss rates for caches with different sizes, associativities, and replacement algorithms:

                2-way            4-way            8-way
  Size          LRU    Random    LRU    Random    LRU    Random
  16 KB         5.18%  5.69%     4.67%  5.29%     4.39%  4.96%
  64 KB         1.88%  2.01%     1.54%  1.66%     1.39%  1.53%
  256 KB        1.15%  1.17%     1.13%  1.13%     1.12%  1.12%

  For caches with low miss rates, random is almost as good as LRU.
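
A minimal sketch of the two policies for a single set, assuming 4 ways; the names and timestamp scheme are illustrative, not from the slides:

```c
/* Sketch: LRU tracks a last-used timestamp per way; random just picks one. */
#include <stdlib.h>

#define WAYS 4
static unsigned long last_used[WAYS];  /* updated on every access */
static unsigned long now;

int victim_lru(void) {                 /* evict the least recently used way */
    int v = 0;
    for (int w = 1; w < WAYS; w++)
        if (last_used[w] < last_used[v]) v = w;
    return v;
}

int victim_random(void) {              /* cheaper in hardware: any way will do */
    return rand() % WAYS;
}

void touch(int way) { last_used[way] = ++now; }  /* call on each hit */
```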

  21. Q4: What Happens on a Write? • Write through: the information is written to both the block in the cache and the block in the lower-level memory • Write back: the information is written only to the block in the cache; the modified cache block is written to main memory only when it is replaced • Is the block clean or dirty? (add a dirty bit to each block) • Pros and cons of each: • Write through: read misses cannot result in writes to memory; easier to implement; always combine with write buffers to avoid waiting on memory latency • Write back: less memory traffic; performs writes at the speed of the cache

  22. Q4: What Happens on a Write? • Since data does not have to be brought into the cache on a write miss, there are two options: • Write allocate: the block is brought into the cache on a write miss; used with write-back caches; the hope is that subsequent writes to the block hit in the cache • No-write allocate: the block is modified in memory but not brought into the cache; used with write-through caches, since writes have to go to memory anyway, so why bring the block into the cache?
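
A compact sketch contrasting the two pairings described on these slides; `Line`, `memory_write`, and the fixed block size are illustrative assumptions, not from the slides:

```c
/* Sketch of the two write-policy pairings. */
typedef struct { int valid, dirty; unsigned tag; unsigned char data[32]; } Line;

static void memory_write(unsigned addr, unsigned char b) {
    (void)addr; (void)b;            /* stub standing in for lower-level memory */
}

/* Write through + no-write allocate: memory is updated on every store. */
void store_write_through(Line *l, unsigned off, unsigned addr, unsigned char b) {
    if (l->valid) l->data[off] = b; /* update the cache only on a hit  */
    memory_write(addr, b);          /* every store goes to memory      */
}

/* Write back + write allocate: only the cache is updated. */
void store_write_back(Line *l, unsigned off, unsigned char b) {
    l->data[off] = b;               /* block already allocated in cache */
    l->dirty = 1;                   /* written back to memory at eviction */
}
```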

  23. Example: Alpha 21064 Data Cache A cache read has 4 steps: (1) The address from the CPU is divided into the tag, index, and block offset (2) The index selects the block (3) The address tag is compared with the tag in the cache, the valid bit is checked, and the data to be loaded is selected (4) If the valid bit is set, the data is loaded into the processor. If there is a write, the data is also sent to the write buffer

  24. Writes in Alpha 21064 • No write merging vs. write merging in the write buffer • [Figure: without merging, four stores to sequential addresses (100, 104, 108, 112) fill four separate write-buffer entries; with merging, the sequential stores are combined into a single block-wide entry.]
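
A sketch of the merging check, assuming a 16-byte (4-word) buffer entry; the layout is an illustrative assumption, not the 21064's actual write-buffer format:

```c
/* Sketch of write merging: a store to a block already in the buffer
 * fills a word slot in the existing entry instead of taking a new one. */
#include <stdint.h>

typedef struct {
    int      valid;
    uint32_t block_addr;    /* block-aligned base address   */
    uint32_t word[4];       /* one entry = 4 words          */
    int      word_valid[4]; /* per-word valid bits          */
} WriteBufferEntry;

/* Merge a store into an existing entry if it targets the same block. */
int try_merge(WriteBufferEntry *e, uint32_t addr, uint32_t data) {
    if (!e->valid || (addr & ~15u) != e->block_addr)
        return 0;                       /* different block: no merge */
    e->word[(addr >> 2) & 3] = data;    /* fill this word's slot     */
    e->word_valid[(addr >> 2) & 3] = 1;
    return 1;
}
```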

  25. Split vs. Unified Cache • Unified cache (mixed cache): data and instructions are stored together (von Neumann architecture) • Split cache: data and instructions are stored separately (Harvard architecture) • Why do instruction caches have a lower miss ratio?

  Size      Instruction Cache   Data Cache   Unified Cache
  1 KB      3.06%               24.61%       13.34%
  2 KB      2.26%               20.57%       9.78%
  4 KB      1.78%               15.94%       7.24%
  8 KB      1.10%               10.19%       4.57%
  16 KB     0.64%               6.47%        2.87%
  32 KB     0.39%               4.82%        1.99%
  64 KB     0.15%               3.77%        1.35%
  128 KB    0.02%               2.88%        0.95%

  26. Example: Split vs. Unified Cache • Which has the lower average memory access time? • Split cache: 16 KB instructions + 16 KB data • Unified cache: 32 KB of instructions + data • Assumptions: • Use the miss rates from the previous chart • Miss penalty is 50 cycles • Hit time is 1 cycle • A load or store hit takes an extra cycle on the unified cache, since there is only one port for instructions and data

  27. Example: Split vs. Unified Cache Average memory access time = Hit time + Miss rate x Miss penalty AMAT = %instr x (instr hit time + instr miss rate x instr miss penalty) + %data x (data hit time + data miss rate x data miss penalty) For the split cache: AMAT = 75% x (1 + 0.64% x 50) + 25% x (1 + 6.47% x 50) = 2.05 For the unified cache: AMAT = 75% x (1 + 1.99% x 50) + 25% x (2 + 1.99% x 50) = 2.24 (the data term pays the extra port-conflict cycle) The unified cache has a longer AMAT, even though its miss rate is lower, because instruction and data accesses conflict for its single port (structural hazards). What are the advantages of a split cache? Of a unified cache?
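
The slide's arithmetic can be reproduced directly (miss rates taken from the table on slide 25):

```c
/* Sketch reproducing the split vs. unified AMAT comparison. */
#include <stdio.h>

int main(void) {
    double penalty = 50.0;
    double split   = 0.75 * (1 + 0.0064 * penalty)   /* instruction side */
                   + 0.25 * (1 + 0.0647 * penalty);  /* data side        */
    double unified = 0.75 * (1 + 0.0199 * penalty)   /* instruction side */
                   + 0.25 * (2 + 0.0199 * penalty);  /* +1 cycle for the
                                                        port conflict    */
    printf("split = %.2f, unified = %.2f\n", split, unified);
    /* prints roughly 2.05 and 2.24 cycles */
    return 0;
}
```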

  28. CPU Performance CPU time = (CPU execution clock cycles + Memory stall clock cycles) x Clock cycle time CPU execution clock cycles = number of cycles for the instructions assuming all hits (ignoring memory stalls) Memory stall clock cycles = (Reads x Read miss rate x Read miss penalty) + (Writes x Write miss rate x Write miss penalty) Combining reads and writes (an approximation of the above): Memory stall clock cycles = Memory accesses x Miss rate x Miss penalty

  29. CPU Performance CPUtime = IC x (CPIexecution + Mem accesses per instruction x Miss rate x Miss penalty) x Clock cycle time (Cache) Misses per instruction = Memory accesses per instruction x Miss rate CPUtime = IC x (CPIexecution + Misses per instruction x Miss penalty) x Clock cycle time

  30. Performance Example Compare the CPU time of a processor with and without a cache. • Hit rate: 98% • Miss penalty: 50 cycles • CPI: 2 cycles/instruction • 1.33 memory references/instruction With cache: CPU time = IC x [2 + 1.33 x (1 - 0.98) x 50] x clock cycle time = IC x 3.33 x clock cycle time Without cache: CPU time = IC x [2 + 1.33 x 50] x clock cycle time = IC x 68.5 x clock cycle time Speedup = 68.5 / 3.33 = 20.5
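
The same numbers, plugged into the CPU-time formula from slides 28-29:

```c
/* Sketch reproducing the slide's speedup computation. */
#include <stdio.h>

int main(void) {
    double cpi = 2.0, refs = 1.33, penalty = 50.0, hit_rate = 0.98;
    double with_cache = cpi + refs * (1 - hit_rate) * penalty; /* 3.33 CPI */
    double no_cache   = cpi + refs * penalty;                  /* 68.5 CPI */
    printf("speedup = %.2f\n", no_cache / with_cache);
    /* prints 20.57; the slide rounds to 20.5 */
    return 0;
}
```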

  31. 2-Way Set Associative Cache • Features of an 8 KB 2-way set associative cache: 5-bit block offset, 7-bit index, 22-bit tag • The set associative cache has extra hardware for 2 tag comparisons and a mux to select the data • Compared to the direct mapped cache, the set associative cache will tend to have a smaller miss rate but a larger hit time
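
These field widths follow from the cache geometry, assuming 32-byte blocks and a 34-bit physical address (my assumptions, consistent with 22 + 7 + 5 = 34):

```c
/* Sketch deriving the slide's field widths for the 8 KB, 2-way cache. */
#include <stdio.h>

int main(void) {
    int addr_bits   = 34;                                /* assumed        */
    int cache_bytes = 8 * 1024, block_bytes = 32, ways = 2;
    int offset_bits = 5;                                 /* log2(32)       */
    int sets        = cache_bytes / (block_bytes * ways);/* 128 sets       */
    int index_bits  = 7;                                 /* log2(128)      */
    int tag_bits    = addr_bits - index_bits - offset_bits; /* 22 bits     */
    printf("offset=%d index=%d tag=%d (sets=%d)\n",
           offset_bits, index_bits, tag_bits, sets);
    return 0;
}
```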

  32. Example: Two different cache organization • See example pp. 387

  33. Improving Cache Performance • AMAT = Hit time + Miss rate x Miss penalty • Improve performance by: 1. Reduce the miss rate (or increase hit rate), 2. Reduce the miss penalty, or 3. Reduce the time to hit in the cache.

  34. Summary • The CPU-memory gap is a major obstacle to performance, for both HW and SW • Take advantage of program behavior: locality • Program execution time is still the only reliable performance measure • The 4 Qs of the memory hierarchy: • Where can a block be placed? • How is a block found? • Which block should be replaced on a miss? • What happens on a write?
