
Lecture 11: Memory Hierarchy: Caches, Main Memory, & Virtual Memory


Presentation Transcript


  1. Lecture 11: Memory Hierarchy: Caches, Main Memory, & Virtual Memory • Michael B. Greenwald • Computer Architecture CIS 501, Fall 1999

  2. Administration • Homework #3: goals reduced • Prob 1: 20 points; Prob 2: 40 points • Being out of email contact should end after the weekend

  3. Improving Cache Performance • Average memory access time (AMAT) = Hit time + Miss rate × Miss penalty (ns or clocks) • Improve performance by: 1. Reduce the miss rate, 2. Reduce the miss penalty, or 3. Reduce the time to hit in the cache.
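As a quick sanity check of the formula, here is a minimal sketch in Python; the numbers are illustrative, not from the lecture:

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time, in the same units (ns or clocks) as the inputs."""
    return hit_time + miss_rate * miss_penalty

# Illustrative values: 1-clock hit, 5% miss rate, 50-clock miss penalty.
print(amat(hit_time=1, miss_rate=0.05, miss_penalty=50))  # 3.5 clocks
```

Each of the three improvement levers corresponds to shrinking one input of this function.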

  4. 5th Miss Penalty Reduction Technique: L2 Cache • L2 equations:
AMAT = Hit Time_L1 + Miss Rate_L1 × Miss Penalty_L1
Miss Penalty_L1 = Hit Time_L2 + Miss Rate_L2 × Miss Penalty_L2
AMAT = Hit Time_L1 + Miss Rate_L1 × (Hit Time_L2 + Miss Rate_L2 × Miss Penalty_L2)
• Definitions: • Local miss rate: misses in this cache divided by the total number of memory accesses to this cache (Miss Rate_L2) • Global miss rate: misses in this cache divided by the total number of memory accesses generated by the CPU (Miss Rate_L1 × Miss Rate_L2) • Global miss rate is what matters
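The same equations in a small Python sketch, with made-up rates, which also makes the local/global distinction concrete:

```python
def two_level_amat(hit_l1, miss_rate_l1, hit_l2, local_miss_l2, penalty_l2):
    """AMAT for an L1/L2 hierarchy; times in clocks, miss rates as fractions."""
    miss_penalty_l1 = hit_l2 + local_miss_l2 * penalty_l2
    return hit_l1 + miss_rate_l1 * miss_penalty_l1

miss_rate_l1, local_miss_l2 = 0.05, 0.40   # illustrative, not from the lecture
print(two_level_amat(1, miss_rate_l1, 10, local_miss_l2, 100))  # 1 + 0.05*(10 + 40) = 3.5
print(miss_rate_l1 * local_miss_l2)        # global L2 miss rate: 0.02 of all CPU accesses
```

Note how a local L2 miss rate of 40% sounds alarming while the global rate, 2%, is what actually enters the AMAT.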

  5. Reducing Misses: Which Apply to L2 Cache? • Reducing miss rate: 1. Reduce misses via larger block size 2. Reduce conflict misses via higher associativity 3. Reduce conflict misses via victim cache 4. Reduce conflict misses via pseudo-associativity 5. Reduce misses by HW prefetching of instructions & data 6. Reduce misses by SW prefetching of data 7. Reduce capacity/conflict misses by compiler optimizations (see the blocking sketch below) … But: must be careful about CPU pin bandwidth when you go off-chip (Perl & Sites [OSDI '96])
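As a concrete instance of technique 7, the following is a minimal sketch of loop blocking (tiling), one classic compiler transformation for capacity/conflict misses; the tile size B is an assumed tuning parameter, chosen so the tiles being reused fit in cache:

```python
def blocked_matmul(A, Bm, C, n, B=32):
    """C += A * Bm for n x n matrices (lists of lists), visited in B x B
    tiles so each tile is still cache-resident when it is reused."""
    for ii in range(0, n, B):
        for jj in range(0, n, B):
            for kk in range(0, n, B):
                for i in range(ii, min(ii + B, n)):
                    for j in range(jj, min(jj + B, n)):
                        s = C[i][j]
                        for k in range(kk, min(kk + B, n)):
                            s += A[i][k] * Bm[k][j]
                        C[i][j] = s

n = 4
I = [[float(i == j) for j in range(n)] for i in range(n)]   # identity matrix
M = [[float(i + j) for j in range(n)] for i in range(n)]
C = [[0.0] * n for _ in range(n)]
blocked_matmul(I, M, C, n, B=2)
assert C == M   # I * M == M: blocking changes the access order, not the result
```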

  6. L2 Cache Block Size & A.M.A.T. • [Figure: chart of average memory access time vs. L2 block size] • 32KB L1; 8-byte path to memory

  7. Reducing Miss Penalty: Summary • Five techniques: • Read priority over write on miss • Subblock placement • Early restart and critical word first on miss • Non-blocking caches (hit under miss, miss under miss) • Second-level cache • These can be applied recursively to build multilevel caches • The danger is that the time to DRAM grows with multiple levels in between (miss rates are higher at lower levels, and a larger fraction of the traffic is writes) • First attempts at L2 caches can make things worse, since the worst-case penalty increases

  8. What is the Impact of What You’ve Learned About Caches? • 1960-1985: Speed = ƒ(no. of operations) • 1990s: • Pipelined execution & fast clock rate • Out-of-order execution • Superscalar instruction issue • 1998: Speed = ƒ(non-cached memory accesses) • Superscalar, out-of-order machines hide an L1 data cache miss (≈5 clocks) but not an L2 cache miss (≈50 clocks)

  9. Cache Optimization Summary

Technique                            MR   MP   HT   Complexity
miss rate:
  Larger Block Size                  +    –         0
  Higher Associativity               +         –    1
  Victim Caches                      +              2
  Pseudo-Associative Caches          +              2
  HW Prefetching of Instr/Data       +              2
  Compiler Controlled Prefetching    +              3
  Compiler Reduce Misses             +              0
miss penalty:
  Priority to Read Misses                 +         1
  Subblock Placement                      +    +    1
  Early Restart & Critical Word 1st       +         2
  Non-Blocking Caches                     +         3
  Second Level Caches                     +         2

  10. Review: Improving Cache Performance 1. Reduce the miss rate, 2. Reduce the miss penalty, or 3. Reduce the time to hit in the cache.

  11. 1. Fast Hit Times via Small and Simple Caches • Hit time is coupled to the clock rate • Direct mapped is faster, and allows overlapping the tag check with the data load • Keep it on chip (why does the Alpha 21164 have 8KB instruction and 8KB data caches plus a 96KB second-level cache?)
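To see why a direct-mapped lookup is so simple (and hence fast), here is a minimal sketch of how an address splits into tag/index/offset; the 8KB size and 32-byte blocks echo the 21164's L1 caches, but the code itself is illustrative:

```python
def split_address(addr, cache_bytes=8192, block_bytes=32):
    """Split a byte address into (tag, index, offset) for a direct-mapped cache."""
    num_blocks = cache_bytes // block_bytes
    offset_bits = block_bytes.bit_length() - 1   # log2(block size)
    index_bits = num_blocks.bit_length() - 1     # log2(number of blocks)
    offset = addr & (block_bytes - 1)
    index = (addr >> offset_bits) & (num_blocks - 1)
    tag = addr >> (offset_bits + index_bits)
    return tag, index, offset

print(split_address(0x12345678))   # (37282, 179, 24)
```

The index directly selects one block, and a single tag comparison decides hit or miss, which is what lets the data load overlap the tag check.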

  12. 2. Fast Hits by Avoiding Address Translation • Every memory reference must be translated from a virtual address (a logical, per-process name) to a physical address (where the data currently resides in memory) • The lookup is in the page table (itself in memory, and paged to disk), indexed by the high-order address bits (the “page”) • Why not just send the virtual address to the cache? Such a design is called a virtually addressed cache, or just virtual cache, vs. a physical cache
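A minimal sketch of that translation step, with the page table as a Python dict (page size, addresses, and mappings are all illustrative):

```python
PAGE_SIZE = 4096   # assumed 4KB pages

page_table = {0x12345: 0x00042}   # hypothetical: virtual page -> physical frame

def translate(vaddr):
    """Translate a virtual byte address to a physical one (no TLB, no fault handling)."""
    vpn = vaddr // PAGE_SIZE        # high-order bits: the "page"
    offset = vaddr % PAGE_SIZE      # low-order bits pass through unchanged
    frame = page_table[vpn]         # the page-table lookup, itself a memory access
    return frame * PAGE_SIZE + offset

print(hex(translate(0x12345ABC)))  # 0x42abc
```

The point of the slide's question: this lookup sits on the path of every reference, so doing it before the cache access adds directly to the hit time.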

  13. Virtually Addressed Caches • [Figure: three cache organizations, left to right] • Conventional organization: CPU → VA → TB → PA → $ → PA → MEM (translate before every cache access) • Virtually addressed cache: CPU → VA → $ (VA tags) → TB → PA → MEM: translate only on a miss; synonym problem • Overlapped organization: CPU sends the VA to the $ and the TB in parallel, with a PA-tagged L2 $ behind them: overlapping $ access with VA translation requires the $ index to remain invariant across translation

  14. 2. Fast Hits by Avoiding Address Translation • Why aren’t all caches virtual? There are problems: • Every time the process is switched, the cache logically must be flushed; otherwise we get false hits • The cost is the time to flush + the “compulsory” misses from an empty cache • Dealing with aliases (sometimes called synonyms): two different virtual addresses that map to the same physical address • I/O must interact with the cache, so it needs a virtual address: what if it belongs to a process that is not currently running? • Solution to cache flushes: • Add a process-identifier tag that identifies the process as well as the address within the process: we can’t get a hit if the wrong process is running • Solution to aliases: • Force aliases to be identical in the last N bits of their virtual addresses (called page coloring); see the sketch below. If the cache is direct mapped and indexed by no more than the N low-order bits, this guarantees that no two aliases are in the cache simultaneously.
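A minimal sketch of the page-coloring invariant; N, the cache geometry, and the two addresses are all assumptions for illustration:

```python
N = 14   # assumed: the OS forces aliases to agree in their low 14 bits

def same_line(vaddr_a, vaddr_b, index_offset_bits=13):
    """In a direct-mapped cache indexed (incl. block offset) by the low 13 bits,
    addresses agreeing in those bits select the same cache line."""
    mask = (1 << index_offset_bits) - 1
    return (vaddr_a & mask) == (vaddr_b & mask)

# Two hypothetical virtual aliases of one physical page, colored alike:
a, b = 0x00401A2C, 0x7FF81A2C
assert (a & ((1 << N) - 1)) == (b & ((1 << N) - 1))   # the coloring invariant holds
print(same_line(a, b))   # True: they collide on one line, so only one can be resident
```

Because the two aliases always map to the same (single-way) line, the cache can never hold both copies at once, which is exactly what makes the synonym problem disappear.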

  15. 2. Fast Cache Hits by Avoiding Translation: Process-ID Impact • [Figure: miss rate vs. cache size] • Black: uniprocess • Light gray: multiprocess, flushing the cache on each switch • Dark gray: multiprocess, using a process-ID tag • Y axis: miss rate, up to 20% • X axis: cache size, from 2 KB to 1024 KB

  16. 2. Fast Cache Hits by Avoiding Translation: Index with the Physical Portion of the Address • If the index uses only the physical part of the address (the page offset), the tag access can start in parallel with translation, and the comparison is against the physical tag (which won’t change) • This limits a direct-mapped cache to the page size: what if we want bigger caches? • Higher associativity moves the barrier to the right (even > 8-way) • Page coloring (require the Paddr and Vaddr to match in the low N bits) • [Figure: address layout — “Page Address | Page Offset” aligned above “Address Tag | Index | Block Offset”, with the Index and Block Offset fields falling entirely within the Page Offset]
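The size limit can be stated as a one-line formula: with the index and block offset confined to the page offset, cache capacity ≤ page size × associativity. A tiny sketch (4KB pages are an assumption):

```python
PAGE_SIZE = 4096   # assumed 4KB pages

def max_physically_indexed_bytes(associativity):
    """Largest cache whose index + offset fit inside the page offset:
    each way can hold at most one page's worth of data."""
    return PAGE_SIZE * associativity

for ways in (1, 2, 8):
    print(f"{ways}-way: {max_physically_indexed_bytes(ways) // 1024} KB")
# 1-way: 4 KB, 2-way: 8 KB, 8-way: 32 KB
```

This is the sense in which higher associativity “moves the barrier to the right”.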

  17. 3. Fast Hit Times via Pipelined Writes • Pipeline the tag check and the cache update as separate stages; the current write’s tag check overlaps the previous write’s cache update • Only STOREs occupy the pipeline; it drains during a miss:
Store r2, (r1)   check r1
Add  --
Sub  --
Store r4, (r3)   M[r1] <- r2 & check r3
• In the shaded box (in the figure) is the “delayed write buffer”; it must be checked on reads: either complete the write first or read from the buffer
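A minimal behavioral sketch of the delayed write buffer, with hypothetical names; a store parks in a one-entry buffer until the next store's tag-check stage, and loads must check that buffer:

```python
class PipelinedWriteCache:
    """One-entry delayed write buffer: a store's data-array update is deferred
    to the next store, so tag check and cache update run as separate stages."""
    def __init__(self):
        self.data = {}        # stands in for the cache data array
        self.pending = None   # (addr, value) still waiting for its update stage

    def store(self, addr, value):
        if self.pending is not None:          # previous store's cache-update stage
            prev_addr, prev_val = self.pending
            self.data[prev_addr] = prev_val
        self.pending = (addr, value)          # this store's tag-check stage

    def load(self, addr):
        if self.pending and self.pending[0] == addr:
            return self.pending[1]            # reads must check the delayed buffer
        return self.data.get(addr)

c = PipelinedWriteCache()
c.store(0x100, 1)
print(c.load(0x100))   # 1, served from the delayed write buffer
c.store(0x200, 2)      # this store's tag check commits the write to 0x100
print(c.load(0x100))   # 1, now from the data array
```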

  18. 4. Fast Writes on Misses via Small Subblocks • If most writes are one word, the subblock size is one word, and the cache is write-through, then always write the subblock & tag immediately: • Tag match and valid bit already set: writing the block was proper, and nothing is lost by setting the valid bit again • Tag match and valid bit not set: the tag match means this is the proper block; writing the data into the subblock makes it appropriate to turn the valid bit on • Tag mismatch: this is a miss and will modify the data portion of the block. Since this is a write-through cache, no harm is done; memory still has an up-to-date copy of the old value. Only the tag (set to the address of the write) and the valid bits of the other subblocks need be changed, because the valid bit for this subblock has already been set • Doesn’t work with write-back, due to the last case
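The three cases collapse into surprisingly little logic. A minimal sketch for one line with word-sized subblocks (the structure and names are assumptions, not the lecture's):

```python
class SubblockLine:
    """One cache line with per-word valid bits; write-through, no fetch on write."""
    def __init__(self, words_per_line=4):
        self.tag = None
        self.valid = [False] * words_per_line
        self.data = [0] * words_per_line

    def write_word(self, tag, word_index, value, memory):
        memory[(tag, word_index)] = value           # write-through: memory stays current
        if self.tag != tag:                         # case 3, tag mismatch: claim the line
            self.tag = tag                          # and invalidate the other subblocks;
            self.valid = [False] * len(self.valid)  # safe only because of write-through
        self.data[word_index] = value               # all cases: write the subblock
        self.valid[word_index] = True               # and set (or re-set) its valid bit

line, mem = SubblockLine(), {}
line.write_word(tag=0xAB, word_index=1, value=42, memory=mem)
print(line.valid)   # [False, True, False, False]: the write "miss" needed no fetch
```

With write-back, case 3 would first have to write the victim block's dirty data back to memory, which is why the trick is write-through only.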

  19. Cache Optimization Summary

Technique                            MR   MP   HT   Complexity
miss rate:
  Larger Block Size                  +    –         0
  Higher Associativity               +         –    1
  Victim Caches                      +              2
  Pseudo-Associative Caches          +              2
  HW Prefetching of Instr/Data       +              2
  Compiler Controlled Prefetching    +              3
  Compiler Reduce Misses             +              0
miss penalty:
  Priority to Read Misses                 +         1
  Subblock Placement                      +    +    1
  Early Restart & Critical Word 1st       +         2
  Non-Blocking Caches                     +         3
  Second Level Caches                     +         2
hit time:
  Small & Simple Caches              –         +    0
  Avoiding Address Translation                 +    2
  Pipelining Writes                            +    1

  20. Main Memory Background • Performance of main memory: • Latency: cache miss penalty • Access time: time between the request and when the word arrives • Cycle time: minimum time between requests • Bandwidth: I/O & large-block miss penalty (L2) • Main memory is DRAM: dynamic random access memory • Dynamic since it must be refreshed periodically (every 8 ms; ≈1% of time) • Addresses divided into 2 halves (memory as a 2D matrix): • RAS, or row access strobe • CAS, or column access strobe • Caches use SRAM: static random access memory • No refresh (6 transistors/bit vs. 1 transistor/bit) • Size: DRAM/SRAM ≈ 4-8; cost & cycle time: SRAM/DRAM ≈ 8-16
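The “8 ms, 1% of time” claim can be sanity-checked with a back-of-the-envelope calculation; the row count and per-row refresh time below are assumed, era-typical values, not numbers from the lecture:

```python
rows = 512            # assumed rows to refresh per window
per_row_s = 150e-9    # assumed time to refresh one row
window_s = 8e-3       # each row refreshed once per 8 ms

busy = rows * per_row_s
print(f"refresh overhead: {busy / window_s:.2%}")   # ~0.96%, i.e. about 1% of time
```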

  21. Core Memory • “Out-of-core”, “in-core”, “core dump”? • Non-volatile, magnetic • Lost out to the 4 Kbit DRAM • Access time 750 ns, cycle time 1500-3000 ns

  22. DRAM Logical Organization (4 Mbit) • [Figure: a 2,048 × 2,048 memory array; the address buffer (A0…A10) feeds a row decoder driving the word lines and a column decoder; sense amps & I/O sit on the bit lines between the array and the Data In / Data Out pins; each cell is a single storage transistor] • Square root of the bits per RAS/CAS: each strobe selects among 2,048 = √(4 M) positions, so 11 address bits are sent twice over the same pins

  23. DRAM Physical Organization (4 Mbit) • [Figure: the 4 Mbit array is split into four blocks, Block 0 … Block 3, each with its own 9:512 row decoder; the row and column addresses are distributed to all blocks, and each block contributes I/O lines toward the chip’s 8 I/Os]
