
Review of Memory Hierarchy (Appendix C)


Presentation Transcript


  1. Review of Memory Hierarchy (Appendix C) CSCI 620

  2. Outline • Memory hierarchy • Locality • Cache design • Virtual address spaces • Page table layout • TLB design options • Conclusion

  3. Memory Hierarchy Review • So far, we have discussed only processors • CPU cost/performance • ISA • Pipelined execution • ILP • Now for memory systems

  4. Since 1980, CPU performance has outpaced DRAM; the gap has grown about 50% per year. Q. How do architects address this gap? A. Put smaller, faster "cache" memories between the CPU and DRAM, creating a "memory hierarchy". [Figure: performance (1/latency) vs. year, 1980-2000, log scale. CPU (Moore's Law): 60% per year, 2X in 1.5 years. DRAM: 9% per year, 2X in 10 years.]

  5. Caches. PRONUNCIATION: kash. NOUN: 1a. A hiding place used especially for storing provisions. b. A place for concealment and safekeeping, as of valuables. c. A store of goods or valuables concealed in a hiding place: maintained a cache of food in case of emergencies. 2. Computer Science: A fast storage buffer in the central processing unit of a computer. Also called cache memory.

  6. Advancement of cache memory • 1980: no cache in microprocessors • 1989: first Intel processors with on-chip caches • 1995: 2-level cache; occupies 60% of the transistors on the Alpha 21164 • 2002: IBM experimenting with main memory on die (on-chip)

  7. 1977: At one time, DRAM was faster than microprocessors. Apple ][ (1977): CPU cycle time 1000 ns, DRAM access time 400 ns. [Photo: Steve Wozniak and Steve Jobs.]

  8. Memory Hierarchy Design • Until now we have assumed an ideal memory: all accesses take 1 cycle, with unlimited size and very high speed • In reality, fast memory is very expensive, and large amounts of fast memory would be slow • Tradeoffs among speed, cost, and size • Solution: smaller, faster, expensive memory close to the core; larger, slower, cheaper memory farther away

  9. Levels of the Memory Hierarchy (upper level = faster, lower level = larger):
• CPU registers: 100s of bytes, <10s ns; staged by the program/compiler in 1-8 byte operands
• Cache: KBytes, 10-100 ns, 0.1-1 cents/bit; staged by the cache controller in 8-128 byte blocks
• Main memory: MBytes, 200-500 ns, 10^-5 to 10^-4 cents/bit; staged by the OS in 512 B-4 KB pages
• Disk: GBytes, 10 ms (10,000,000 ns), 10^-6 to 10^-5 cents/bit; staged by the user/operator in MByte files
• Tape: infinite capacity, sec-min access, 10^-8 cents/bit (lowest level)

  10. Memory Hierarchy: Apple iMac G5 (1.6 GHz). Goal: the illusion of large, fast, cheap memory. Let programs address a memory space that scales to the disk size, at a speed that is usually as fast as register access. [Figure: registers managed by the compiler; caches managed by hardware; main memory and disk managed by the OS, hardware, and application.]

  11. iMac's PowerPC 970: all caches on-chip. Registers (1K), L1 (64K instruction), L1 (32K data), 512K L2.

  12. What is a cache? • Small, fast storage used to improve average access time to slow memory • Holds a subset of the instructions and data used by current programs • Exploits spatial and temporal locality • Typical levels: registers (8-32 registers, immediate access, 0-1 clock cycles); L1 cache (32 KB to 128 KB, 3 clock cycles); L2 cache (128 KB to 12 MB, 10 clock cycles); main memory (256 MB to 4 GB, 100 clock cycles); disk (1 GB to 1 TB, 10,000,000 clock cycles)

  13. The Principle of Locality • Programs access a relatively small portion of the address space at any instant of time • Two different types of locality: • Temporal locality (locality in time): if an item is referenced, it will tend to be referenced again soon (e.g., loops, reuse) • Spatial locality (locality in space): if an item is referenced, items whose addresses are close by tend to be referenced soon (e.g., straight-line code, array access) • For the last 15 years, hardware has relied on locality for speed enhancements • Implication of locality: we can predict with reasonable accuracy which instructions and data a program will use in the near future based on its accesses in the recent past. Locality is a property of programs that is exploited in machine design.
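The effect of spatial locality can be made concrete with a toy simulation. The sketch below (not from the slides; `miss_rate` and its parameters are illustrative) models a direct-mapped cache with multi-word blocks and compares a sequential access pattern, which reuses each fetched block, against a large-stride pattern, which never does:

```python
def miss_rate(addresses, num_blocks, block_size):
    """Direct-mapped cache simulator: return the fraction of accesses that miss."""
    cache = [None] * num_blocks            # cache[i] = block number held in frame i
    misses = 0
    for addr in addresses:
        block = addr // block_size         # which memory block this word belongs to
        idx = block % num_blocks           # direct-mapped: one possible frame
        if cache[idx] != block:
            misses += 1
            cache[idx] = block             # fetch the whole block on a miss
    return misses / len(addresses)

seq = list(range(256))                         # sequential: neighbors share a block
strided = [i * 64 % 4096 for i in range(256)]  # large stride: no block reuse
print(miss_rate(seq, 16, 8))      # 0.125: one miss per 8-word block
print(miss_rate(strided, 16, 8))  # 1.0: every access misses
```

With good spatial locality only the first word of each block misses; with a stride larger than the block size, every access fetches a block that is never reused.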

  14. Memory System: Reality vs. Illusion. [Figure: in reality, the processor is backed by a chain of memories, each slower and larger than the one above; the illusion presented to the processor is a single large, fast memory.]

  15. Ubiquitous Cache • In computer architecture, almost everything is a cache! – Registers are "a cache" on variables (software managed) – First-level cache is "a cache" on the second-level cache – Second-level cache is "a cache" on memory – Memory is "a cache" on disk (virtual memory) – TLB (Translation Lookaside Buffer) is "a cache" on the page table – Branch prediction is "a cache" on prediction information

  16. Program Execution Model

  17. Programs with locality behavior. [Figure: memory address (one dot per access) vs. time; regions of temporal locality, spatial locality, and bad locality behavior are visible.] Donald J. Hatfield, Jeanette Gerald: Program Restructuring for Virtual Memory. IBM Systems Journal 10(3): 168-192 (1971)

  18. Principle of Locality of Reference (why caches work) • Locality • Temporal locality: referenced again soon • Spatial locality: nearby items referenced soon • Locality + smaller HW is faster => memory hierarchy • Levels: each smaller, faster, and more expensive per byte than the level below • Inclusive: data found in the top level is also found in the levels below • Definitions • The upper level is closer to the processor • Block: minimum, address-aligned unit that fits in the cache • Block size is always a power of 2: 1 word, 2 words, 4 words, ... • Address = block frame address + block offset address

  19. Memory Hierarchy: Terminology. [Figure: the upper-level memory holds Block X, the lower-level memory holds Block Y; data moves to and from the processor through the upper level.] • Hit: data appears in some block in the upper level (example: Block X) • Hit rate: the fraction of memory accesses found in the upper level • Hit time: time to access the upper level = RAM access time + time to determine hit/miss • Miss: data must be retrieved from a block in the lower level (Block Y) • Miss rate = 1 - hit rate • Miss penalty: time to replace a block in the upper level + time to deliver the block to the processor • Hit time << miss penalty (a miss can cost the equivalent of 500 instructions on the Alpha 21264!)

  20. Cache Measures • Hit rate: fraction of accesses found in that level • Usually so high that we talk about the miss rate instead • Miss rate fallacy: miss rate can be as misleading a proxy for average memory access time as MIPS is for CPU performance • Miss penalty: time to replace a block from the lower level, including the time to deliver it and restart the CPU • Access time: time to access the lower level = f(lower-level latency) • Transfer time: time to transfer a block = f(bandwidth between upper and lower levels, block size) • Average Memory Access Time (AMAT) = hit time + miss rate x miss penalty (ns or clocks) • Example: AMAT = 5 ns + 0.1 x 100 ns = 15 ns
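The AMAT formula on this slide is simple enough to check directly. A minimal sketch (the function name `amat` is mine, not the slides'):

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average Memory Access Time = hit time + miss rate * miss penalty.
    All times in the same unit (ns or clock cycles)."""
    return hit_time + miss_rate * miss_penalty

# The slide's example: 5 ns hit time, 10% miss rate, 100 ns miss penalty.
print(amat(5, 0.1, 100))   # 15.0 ns
# Halving the miss rate helps more than shaving a nanosecond off the hit time:
print(amat(5, 0.05, 100))  # 10.0 ns
```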

  21. Key Points of Memory Hierarchy • We need methods to give the illusion of large and fast memory. Is this feasible? • Most programs exhibit both temporal locality and spatial locality • Keep more recently accessed data closer to the processor • Keep multiple contiguous words together in memory blocks • Use smaller, faster memory close to the processor: hits are processed quickly; misses require access to larger, slower memory • If the hit rate is high, the memory hierarchy has an access time close to that of the highest (fastest) level and a size equal to that of the lowest (largest) level

  22. 4 Questions for Memory Hierarchy (to be considered in design) • Q1: Where can a block be placed in the upper level? (Block placement) • Q2: How is a block found if it is in the upper level? (Block identification) • Q3: Which block should be replaced on a miss? (Block replacement) • Q4: What happens on a write? (Write strategy)

  23. Q1: Where can a block be placed in the cache? • Example: block 12 placed in an 8-block cache, under fully associative, direct mapped, and 2-way set associative placement • Set-associative mapping: set = block number modulo number of sets • Direct mapped: block 12 can go only into block 4 (12 mod 8) • 2-way set associative: block 12 can go anywhere in set 0 (12 mod 4) • Fully associative: block 12 can go anywhere • Q: Where does block 23 go?
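The three placement policies on this slide differ only in the associativity parameter. A small sketch (my own helper, not from the slides) that lists the legal frames for a block under each policy:

```python
def placement(block_addr, num_blocks, assoc):
    """Return the cache frame indices where block_addr may be placed.
    assoc=1 is direct mapped; assoc=num_blocks is fully associative."""
    num_sets = num_blocks // assoc
    s = block_addr % num_sets              # set = block number mod number of sets
    return list(range(s * assoc, (s + 1) * assoc))

# The slide's example: block 12 in an 8-block cache.
print(placement(12, 8, 1))   # direct mapped: frame 4 only (12 mod 8)
print(placement(12, 8, 2))   # 2-way: set 0 (12 mod 4), frames 0 and 1
print(placement(12, 8, 8))   # fully associative: any frame
# The slide's question, block 23 with 2-way: set 3 (23 mod 4), frames 6 and 7.
print(placement(23, 8, 2))
```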

  24. Direct mapped cache with a block size of 1 word. [Figure: direct-mapped cache organization.]

  25. Set-associative (16-way) cache. Fully associative? [Figure: 16-way set-associative cache organization.]

  26. Q2: How is a block found if it is in the upper level? • Address = block address (tag + index) + block offset • Tag on each block • No need to check the index or block offset • Increasing associativity shrinks the index and expands the tag
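The tag/index/offset split can be sketched with a few shifts and masks (helper name and parameters are mine; sizes are assumed to be powers of 2):

```python
def split_address(addr, block_size, num_sets):
    """Split a byte address into (tag, index, offset) for a set-associative cache."""
    offset_bits = block_size.bit_length() - 1   # log2(block_size)
    index_bits = num_sets.bit_length() - 1      # log2(num_sets)
    offset = addr & (block_size - 1)            # position within the block
    index = (addr >> offset_bits) & (num_sets - 1)  # which set to search
    tag = addr >> (offset_bits + index_bits)    # compared against stored tags
    return tag, index, offset

# 16-byte blocks, 64 sets:
print(split_address(0x1234, 16, 64))  # (4, 35, 4)
# Doubling associativity at fixed capacity halves the sets:
# the index shrinks by one bit and the tag grows by one bit.
print(split_address(0x1234, 16, 32))  # (9, 3, 4)
```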

  27. Q3: Which block should be replaced on a miss? • Easy for direct mapped • Set associative or fully associative: • Random • LRU (least recently used) • Miss rates for LRU vs. random replacement:

Size    | 2-way LRU | 2-way Rand | 4-way LRU | 4-way Rand | 8-way LRU | 8-way Rand
16 KB   | 5.2%      | 5.7%       | 4.7%      | 5.3%       | 4.4%      | 5.0%
64 KB   | 1.9%      | 2.0%       | 1.5%      | 1.7%       | 1.4%      | 1.5%
256 KB  | 1.15%     | 1.17%      | 1.13%     | 1.13%      | 1.12%     | 1.12%
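LRU replacement is easy to model in software even though it is hard to build at high associativity. A sketch of a fully associative LRU cache (my own model, not from the slides), which also shows the ping-pong failure mode when the working set is one block larger than the cache:

```python
from collections import OrderedDict

def misses_lru(trace, capacity):
    """Count misses for a fully associative cache with LRU replacement."""
    cache = OrderedDict()                 # insertion order tracks recency
    misses = 0
    for block in trace:
        if block in cache:
            cache.move_to_end(block)      # hit: mark most recently used
        else:
            misses += 1
            if len(cache) >= capacity:
                cache.popitem(last=False) # evict the least recently used block
            cache[block] = True
    return misses

# A loop over 4 blocks fits a 4-block cache: only the 4 compulsory misses.
print(misses_lru([0, 1, 2, 3] * 3, 4))   # 4
# A loop over 5 blocks in a 4-block LRU cache misses on every access.
print(misses_lru([0, 1, 2, 3, 4] * 3, 4))  # 15
```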

  28. Q3 (continued): After a cache read miss, if there are no empty cache blocks, which block should be removed from the cache? • A randomly chosen block? Easy to implement; how well does it work? • The least recently used (LRU) block? Appealing, but hard to implement for high associativity. [Figure: miss rate for a 2-way set-associative cache.]

  29. Q4: What happens on a write?

  30. Write miss: the word to be written is not in the cache • On a write miss, we can write into the cache (make room and write: write allocate) or bypass it and go directly to main memory (write no-allocate) • Write allocate is usually associated with write-back caches • Write no-allocate corresponds to write-through

  31. Write Buffers for Write-Through Caches. [Figure: processor -> cache -> write buffer -> lower-level memory; the write buffer holds data awaiting write-through to lower-level memory.] • Q. Why a write buffer? A. So the CPU doesn't stall • Q. Why a buffer, why not just one register? A. Bursts of writes are common • Q. Are read-after-write (RAW) hazards an issue for the write buffer? A. Yes! Drain the buffer before the next read, or send the read only after checking the write buffer
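The RAW-hazard answer above, "send the read after checking the write buffer", can be sketched as a tiny model (class and method names are hypothetical, not a real hardware interface):

```python
from collections import deque

class WriteBuffer:
    """Toy model of a write-through cache's write buffer.
    Reads check the buffer first to avoid read-after-write hazards."""
    def __init__(self):
        self.pending = deque()            # (address, value) awaiting write-through
        self.memory = {}                  # lower-level memory

    def write(self, addr, value):
        self.pending.append((addr, value))  # CPU continues without stalling

    def drain_one(self):
        """Retire the oldest buffered write to lower-level memory."""
        if self.pending:
            addr, value = self.pending.popleft()
            self.memory[addr] = value

    def read(self, addr):
        # Check the write buffer first; the most recent entry wins.
        for a, v in reversed(self.pending):
            if a == addr:
                return v
        return self.memory.get(addr, 0)

wb = WriteBuffer()
wb.write(0x100, 42)
print(wb.read(0x100))  # 42: serviced from the buffer, avoiding the RAW hazard
```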

  32. Classifying Misses (the 3 C's) • Compulsory: the first reference to a block • Capacity: a miss because the value was evicted for lack of space (to make room) • Conflict: a miss because another block with the same mapping needed to be brought in
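The 3 C's can be made operational with the standard trick of comparing against a fully associative cache of the same size: conflict misses are exactly those the fully associative cache would have avoided. A sketch (my own classifier, for 1-word blocks in a direct-mapped cache):

```python
from collections import OrderedDict

def classify_misses(trace, num_blocks):
    """Label each miss of a direct-mapped cache as compulsory, capacity, or conflict.
    Compulsory: first reference ever. Capacity: would also miss in a fully
    associative LRU cache of the same size. Conflict: everything else."""
    direct = {}                  # index -> block currently held there
    lru = OrderedDict()          # shadow fully associative LRU cache
    seen = set()
    counts = {"compulsory": 0, "capacity": 0, "conflict": 0}
    for block in trace:
        idx = block % num_blocks
        hit_direct = direct.get(idx) == block
        hit_lru = block in lru
        if hit_lru:
            lru.move_to_end(block)
        else:
            if len(lru) >= num_blocks:
                lru.popitem(last=False)
            lru[block] = True
        if not hit_direct:
            if block not in seen:
                counts["compulsory"] += 1
            elif not hit_lru:
                counts["capacity"] += 1
            else:
                counts["conflict"] += 1
        direct[idx] = block
        seen.add(block)
    return counts

# Blocks 0 and 8 collide in an 8-block direct-mapped cache (ping-pong):
print(classify_misses([0, 8, 0, 8], 8))  # 2 compulsory, 2 conflict
```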

  33. 5 Basic Cache Optimizations • Reducing miss rate • Larger block size (reduces compulsory misses) • Larger cache size (reduces capacity misses) • Higher associativity (reduces conflict misses) • Reducing miss penalty • Multilevel caches • Giving reads priority over writes (e.g., let a read complete before earlier writes still in the write buffer)

  34. Outline • Memory hierarchy • Locality • Cache design • Virtual address spaces (virtual memory) • Page table layout • TLB design options • Conclusion

  35. The Limits of Physical Addressing. [Figure: CPU wired directly to memory via address lines A0-A31 and data lines D0-D31; every address issued is a "physical address" of a memory location.] • All programs share one address space: the physical address space • Machine language programs must be aware of the machine organization • There is no way to prevent a program from accessing any machine resource

  36. Solution: Add a Layer of Indirection. [Figure: the CPU issues "virtual addresses"; address translation hardware maps them to "physical addresses" before they reach memory.] • User programs run in a standardized virtual address space • Address translation hardware, managed by the operating system (OS), maps virtual addresses to physical memory • Hardware supports "modern" OS features: protection, translation, sharing

  37. Three Advantages of Virtual Memory • Translation: • A program can be given a consistent view of memory, even though physical memory is scrambled • Makes multithreading reasonable (now used a lot!) • Only the most important part of a program (the "working set") must be in physical memory • Contiguous structures (like stacks) use only as much physical memory as necessary, yet can still grow later • Protection: • Different threads (or processes) are protected from each other • Different pages can be given special behavior (read only, invisible to user programs, etc.) • Kernel data is protected from user programs • Very important for protection from malicious programs • Sharing: • The same physical page can be mapped for multiple users ("shared memory")

  38. Page tables encode virtual address spaces • A virtual address space is divided into blocks of memory called pages • A valid page table entry codes the physical memory "frame" address for the page • A machine usually supports pages of a few sizes; the MIPS R4000 implements variable page sizes on a per-page basis, varying from 4 KB to 16 MB. [Figure: pages of the virtual address space mapped through a page table to frames of the physical address space.]

  39. Page tables (continued) • A page table is indexed by the virtual address • The OS manages the page table • A valid page table entry codes the physical memory "frame" address for the page. [Figure: a virtual address indexing into the page table, whose entries point to frames in the physical memory space.]

  40. Details of the Page Table. [Figure: a virtual address splits into a virtual page number and a 12-bit offset; the page number indexes the page table, located in physical memory at the address in the page table base register. Each PTE holds a valid bit, access rights, and a physical frame number, which is combined with the offset to form the physical address.] • The page table maps virtual page numbers to physical frames ("PTE" = page table entry) • Virtual memory => treat main memory as a cache for disk
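The translation step this slide diagrams can be sketched in a few lines, assuming 4 KB pages (12-bit offset) and a flat page table of (valid, frame) entries; the names here are mine, not the slides':

```python
PAGE_SIZE = 4096  # 4 KB pages -> 12-bit page offset

def translate(page_table, vaddr):
    """Translate a virtual address via a flat page table of (valid, frame) PTEs."""
    vpn = vaddr // PAGE_SIZE          # virtual page number indexes the table
    offset = vaddr % PAGE_SIZE        # offset is carried through unchanged
    valid, frame = page_table[vpn]
    if not valid:
        raise LookupError("page fault: VPN %d not in physical memory" % vpn)
    return frame * PAGE_SIZE + offset

pt = {0: (True, 5), 1: (False, 0), 2: (True, 7)}
print(hex(translate(pt, 0x2ABC)))  # VPN 2 -> frame 7: 0x7abc
```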

  41. Two-level Page Tables • 32-bit virtual address: bits 31-22 are the P1 index, bits 21-12 the P2 index, bits 11-0 the page offset • The entire page table may not fit in memory: a table for 4 KB pages in a 32-bit address space has 1M entries • Each process needs its own address space! • The top-level table is wired in main memory • A subset of the 1024 second-level tables is in main memory; the rest are on disk or unallocated
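The 10/10/12 bit split above can be sketched as a two-level walk in which absent second-level tables stand in for "on disk or unallocated" (function and table names are illustrative):

```python
def translate2(top_table, vaddr):
    """Two-level translation of a 32-bit virtual address with 4 KB pages.
    top_table maps P1 index -> second-level table (P2 index -> frame number);
    a missing table or entry models a page fault."""
    p1 = (vaddr >> 22) & 0x3FF        # bits 31-22: index into top-level table
    p2 = (vaddr >> 12) & 0x3FF        # bits 21-12: index into second-level table
    offset = vaddr & 0xFFF            # bits 11-0: page offset
    second = top_table.get(p1)
    if second is None or p2 not in second:
        raise LookupError("page fault")
    return (second[p2] << 12) | offset

top = {1: {3: 0x42}}                  # only one second-level table is resident
vaddr = (1 << 22) | (3 << 12) | 0x5A
print(hex(translate2(top, vaddr)))    # frame 0x42 + offset 0x5A -> 0x4205a
```

Only the top-level table and the resident second-level tables consume memory, rather than all 1M flat entries.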

  42. VM and Disk: Page Replacement Policy • Each page table entry has a dirty bit (page written) and a used bit (set to 1 on any reference) • Tail pointer: clears the used bit in the page table • Head pointer: places pages on the free list if the used bit is still clear; schedules pages with the dirty bit set to be written to disk • The free list holds the free pages • Architect's role: support setting the dirty and used bits. [Figure: a clock-style sweep over the set of all pages in memory, with dirty/used bits per page.]
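A much-simplified, single-pass model of the sweep described above (collapsing the two pointers into one pass; all names are mine): referenced pages get a second chance by having their used bit cleared, while unreferenced pages are reclaimed, with dirty ones scheduled for write-back first.

```python
def clock_sweep(pages):
    """One simplified sweep of the used-bit/dirty-bit page replacement policy.
    Returns (pages reclaimed to the free list, pages scheduled for write-back)."""
    free, writeback = [], []
    for p in pages:
        if p["used"]:
            p["used"] = 0                  # tail pointer: clear the used bit
        else:
            if p["dirty"]:
                writeback.append(p["id"])  # schedule write to disk first
            free.append(p["id"])           # head pointer: reclaim the frame
    return free, writeback

pages = [{"id": 0, "used": 1, "dirty": 0},
         {"id": 1, "used": 0, "dirty": 1},
         {"id": 2, "used": 0, "dirty": 0}]
print(clock_sweep(pages))  # ([1, 2], [1]): page 0 survives, page 1 needs write-back
```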

  43. TLB Design Concepts

  44. MIPS Address Translation: How does it work? • Translation Look-Aside Buffer (TLB): a small fully associative cache of mappings from virtual to physical addresses. [Figure: the CPU issues virtual addresses; the TLB translates them to physical addresses on the way to memory.] • The TLB also contains protection bits for the virtual address • Fast common case: the virtual address is in the TLB, and the process has permission to read/write it

  45. The TLB caches page table entries. [Figure: a virtual address (page number + offset) is looked up in the TLB; on a hit, the cached PTE supplies the physical frame address. On a miss, the page table is consulted; entries with V=0 are pages that either reside on disk or have not yet been allocated, and the OS handles the V=0 "page fault".] • Physical and virtual pages must be the same size! • MIPS handles TLB misses in software (random replacement); other machines use hardware
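A sketch of the TLB-as-cache-of-PTEs idea (my own toy class): on a miss it "walks" the page table (a plain dict here) and installs the translation, evicting a random victim when full, loosely following the slide's random-replacement note.

```python
import random

class TLB:
    """Toy fully associative TLB with random replacement on eviction."""
    def __init__(self, size):
        self.size = size
        self.entries = {}                 # VPN -> physical frame number
        self.hits = self.misses = 0

    def lookup(self, vpn, page_table):
        if vpn in self.entries:
            self.hits += 1
        else:
            self.misses += 1              # miss: walk the page table
            if len(self.entries) >= self.size:
                victim = random.choice(list(self.entries))
                del self.entries[victim]  # random replacement
            self.entries[vpn] = page_table[vpn]
        return self.entries[vpn]

tlb = TLB(4)
pt = {v: v + 100 for v in range(8)}       # a trivial page table
for vpn in [0, 1, 0, 1, 2, 0]:            # locality: few distinct pages
    tlb.lookup(vpn, pt)
print(tlb.hits, tlb.misses)               # 3 3: repeats hit in the TLB
```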

  46. Typical TLB (http://en.wikipedia.org/wiki/Translation_lookaside_buffer) • Size: 8-4,096 entries • Hit time: 0.5-1 clock cycle • Miss penalty: 10-30 clock cycles • Miss rate: 0.01%-1%

  47. Summary #1/3: The Cache Design Space • Several interacting dimensions: cache size, block size, associativity, replacement policy, write-through vs. write-back, write allocation • The optimal choice is a compromise: it depends on access characteristics (workload; use as I-cache, D-cache, or TLB) and on technology/cost • Simplicity often wins. [Figure: the design space along the cache size, block size, and associativity axes; changing one factor for the better can make another worse.]

  48. Summary #2/3: Caches • The Principle of Locality: programs access a relatively small portion of the address space at any instant of time • Temporal locality: locality in time • Spatial locality: locality in space • Three major categories of cache misses: • Compulsory misses: sad facts of life (example: cold-start misses) • Capacity misses: increase cache size • Conflict misses: increase cache size and/or associativity (nightmare scenario: the ping-pong effect!) • Write policy: write-through vs. write-back • Today, CPU time is a function of (ops, cache misses), not just f(ops): this affects compilers, data structures, and algorithms

  49. Summary #3/3: TLB, Virtual Memory • Page tables map virtual addresses to physical addresses • TLBs are important for fast translation • TLB misses are significant in processor performance • Caches, TLBs, and virtual memory are all understood by examining how they deal with four questions: 1) Where can a block be placed? 2) How is a block found? 3) Which block is replaced on a miss? 4) How are writes handled?
