
Memory and cache





Presentation Transcript


  1. Memory and cache [Diagram: CPU ↔ Memory ↔ I/O]

  2. The Memory Hierarchy [Diagram: pyramid — faster and more expensive toward the top, larger toward the bottom] • Registers: ~2ns • Primary cache: ~4-5ns • Secondary cache: ~30ns • Main memory: ~220ns+ • Magnetic disk: >1ms (~6ms)

  3. Cache & Locality [Diagram: CPU ↔ Cache ↔ Main Memory] • Cache sits between the CPU and main memory • Invisible to the CPU • Only useful if recently used items are used again • Fortunately, this happens a lot. We call this property locality of reference.

  4. Locality of reference • Temporal locality • Recently accessed data/instructions are likely to be accessed again. • Most program time is spent in loops • Arrays are often scanned multiple times • Spatial locality • If I access memory address n, I am likely to then access another address close to n (usually n+1, n+2, or n+4) • Linear execution of code • Linear access of arrays

  5. How a cache exploits locality • Temporal – When an item is accessed from memory it is brought into the cache • If it is accessed again soon, it comes from the cache and not main memory • Spatial – When we access a memory word, we also fetch the next few words of memory into the cache • The number of words fetched is the cache line size, or the cache block size for the machine

  6. Fully-associative cache — Cache & memory architecture • 8-bit memory address: x00–xFF (256 words) • Cache size = 64 words, divided into 16-word cache lines • Memory is divided into 16-word blocks (Block0 = x00–x0F, Block1 = x10–x1F, …, BlockF = xF0–xFF) [Diagram: each cache line holds a tag plus words 0–F; tags shown: 0, 2, 7, F]

  7. Fully-associative cache • Note that the first four bits of the address are the block number… [Diagram: repeats the memory-block and cache-line layout from slide 6]

  8. Parsing the address • Address 01010110 splits into Tag | Word: the upper four bits (0101) are the tag (block number), the lower four bits (0110) the word offset [Diagram: address fields over the memory-block and cache-line layout]

  9. Direct-Mapped Cache • Address 01010110 splits into Tag | Line | Word: tag 01, line 01, word 0110 [Diagram: each cache line stores a 2-bit tag; lines 0–3 hold tags 01, 10, 11, 10]

  10. Direct-Mapped Addresses • Address split: Tag | Line | Word (e.g. 01 01 0110) • Do we need to store the entire 4-bit block number with each cache line?

  11. Set Associative Cache • Address 01010110 splits into Tag | Set | Word: tag 010, set 1, word 0110 [Diagram: 2 sets of 2 blocks each; set 0 holds tags 001 and 101, set 1 holds tags 110 and 101]

  12. Cache mapping • Direct mapped – each memory block can occupy one and only one cache block • Example: • Cache block size: 16 words • Memory = 64K (4K blocks) • Cache = 2K (128 blocks)

  13. Direct Mapped Cache • Memory block n occupies cache block (n mod 128) • Consider address $2EF4 = 0010 1110 1111 0100 — block: $2EF = 751, word: 4 • Cache view: 00101 1101111 0100 — tag: 5, cache block: 111, word: 4

  14. Fully Associative Cache • More efficient cache utilization • No wasted cache space • Slower cache search • Must check the tag of every entry in the cache to see if the block we want is there.

  15. Set-associative mapping • Blocks are grouped into sets • Each memory block can occupy any block in its set • This example is 2-way set-associative • Which of the two blocks do we replace?

  16. Cache write policies • As long as we are only doing READ operations, the cache is an exact copy of a small part of the main memory • When we write, should we write to cache or memory? • Write through cache – write to both cache and main memory. Cache and memory are always consistent • Write back cache – write only to cache and set a “dirty bit”. When the block gets replaced from the cache, write it out to memory. When might the write-back policy be dangerous?

  17. Replacement algorithms • Random • Oldest first • Least accesses • Least recently used (LRU): replace the block that has gone the longest time without being referenced. • This is the most commonly-used replacement algorithm • Easy to implement with a small counter…

  18. Implementing LRU replacement • Suppose we have a 4-way set associative cache… • Hit: • Increment lower counters • Reset the hit block's counter to 00 • Miss: • Replace the block at 11 • Set it to 00 • Increment all other counters [Diagram: Set 0 counters — Block 0: 11, Block 1: 10, Block 2: 01, Block 3: 00]

  19. Interactions with DMA • If we have a write-back cache, DMA must check the cache before acting. • Many systems simply flush the cache before DMA write operations • What if we have a memory word in the cache, and the DMA controller changes that word? • Stale data • We keep a valid bit for each cache line. When DMA changes memory, it must also set the valid bit to 0 for that cache line. • “Cache coherence”

  20. Typical Modern Cache Architecture • L0 cache • On chip • Split 16 KB data/16 KB instructions • L1 cache • On chip • 64 KB unified • L2 cache • Off chip • 128 KB to 16+ MB

  21. Memory Interleaving • Memory is organized into modules or banks • Within a bank, capacitors must be recharged after each read operation • Successive reads to the same bank are slow [Diagram: non-interleaved vs. interleaved address layout — non-interleaved assigns consecutive addresses (0000, 0002, 0004, …) to the same module; interleaved spreads consecutive addresses across the modules (module 0: 0000, 0006, 000C, …; module 1: 0002, 0008, 000E, …; module 2: 0004, 000A, 0010, …)]

  22. Measuring Cache Performance • No cache: Often about 10 cycles per memory access • Simple cache: • tave = hC + (1-h)M • C is often 1 clock cycle • Assume M is 17 cycles (to load an entire cache line) • Assume h is about 90% • tave = .9 (1) + (.1)17 = 2.6 cycles/access • What happens when h is 95%?

  23. Multi-level cache performance • tave = h1C1 + (1-h1) h2C2 + (1-h1) (1-h2) M • h1 = hit rate in primary cache • h2 = hit rate in secondary cache • C1 = time to access primary cache • C2 = time to access secondary cache • M = miss penalty (time to load an entire cache line from main memory)

  24. Virtual Memory

  25. Virtual Memory: Introduction • Motorola 68000 has 24-bit memory addressing and is byte addressable • Can address 2^24 bytes of memory – 16 MB • Intel Pentiums have 32-bit memory addressing and are byte addressable • Can address 2^32 bytes of memory – 4 GB • What good is all that address space if you only have 256MB of main memory (RAM)? • How do you run a program that is LARGER than 256MB? • Examples: • Unreal Tournament – 2.4 gigabytes • Neverwinter Nights – 2 gigabytes • In modern computers it is possible to use more memory than the amount physically available in the system • Memory not currently being used is temporarily stored on magnetic disk • Essentially, the main memory acts as a cache for the virtual memory on disk.

  26. Working Set • Not all of a program needs to be in memory while you are executing it: • Error handling routines are not called very often. • Character creation generally only happens at the start of the game… why keep that code easily available? • Working set is the memory that is consumed at any moment by a program while it is running. • Includes stack, allocated memory, active instructions, etc. • Examples: • Unreal Tournament – 100MB (requires 128MB) • Internet Explorer – 20MB

  27. Thrashing Diagram • Why does paging work? Locality model: • Process migrates from one locality to another. • Localities may overlap. • Why does thrashing occur? size of locality > total memory size • What should we do? • Suspend processes!

  28. Thrashing • If a process does not have enough frames to hold its current working set, the page-fault rate is very high • Thrashing • a process is thrashing when it spends more time paging than executing • w/ local replacement algorithms, a process may thrash even though memory is available • w/ global replacement algorithms, the entire system may thrash • Less thrashing in general, but is it fair?

  29. Program Structure • How should we arrange memory references to large arrays? • Is the array stored in row-major or column-major order? • Example: • Array A[1024, 1024] of type integer • Page size = 1K • Each row is stored in one page • System has one frame • Program 1: for i := 1 to 1024 do for j := 1 to 1024 do A[i,j] := 0; — 1024 page faults • Program 2: for j := 1 to 1024 do for i := 1 to 1024 do A[i,j] := 0; — 1024 × 1024 page faults

  30. CS Reality #3 • Memory Matters • Memory is not unbounded • It must be allocated and managed • Many applications are memory dominated • Memory referencing bugs especially pernicious • Effects are distant in both time and space • Memory performance is not uniform • Cache and virtual memory effects can greatly affect program performance • Adapting program to characteristics of memory system can lead to major speed improvements

  31. IA32 and GCC
  double recip(int denom) { return 1.0 / (double) denom; }
  void do_nothing() {}
  void test(int denom) {
      double r1, r2;
      int t1, t2;
      r1 = recip(denom);
      r2 = recip(denom);
      t1 = (r1 == r2);
      do_nothing();
      t2 = (r1 == r2);
      printf("t1: r1 %c= r2\n", t1 ? '=' : '!');
      printf("t2: r1 %c= r2\n", t2 ? '=' : '!');
  }
  Output: t1: r1 != r2, t2: r1 == r2
  • IA32 processors, like most other processors, have special memory elements (registers) for holding floating-point values as they are being computed and used. • IA32 uses special 80-bit extended precision floating-point registers • IA32/gcc stores doubles as 64 bits in memory. Numbers are converted as they are stored in memory • Whenever a function call is made, register values may be stored in memory (callee save)

  32. Memory Referencing Errors • C and C++ do not provide any memory protection • Out of bounds array references • Invalid pointer values • Abuses of malloc/free • Can lead to nasty bugs • Whether or not bug has any effect depends on system and compiler • Action at a distance • Corrupted object logically unrelated to one being accessed • Effect of bug may be first observed long after it is generated • How can I deal with this? • Program in Java, Lisp, or ML • Understand what possible interactions may occur • Use or develop tools to detect referencing errors

  33. Memory Performance Example • Implementations of Matrix Multiplication • Multiple ways to nest loops
  /* ijk */
  for (i = 0; i < n; i++) {
      for (j = 0; j < n; j++) {
          sum = 0.0;
          for (k = 0; k < n; k++)
              sum += a[i][k] * b[k][j];
          c[i][j] = sum;
      }
  }
  /* jik */
  for (j = 0; j < n; j++) {
      for (i = 0; i < n; i++) {
          sum = 0.0;
          for (k = 0; k < n; k++)
              sum += a[i][k] * b[k][j];
          c[i][j] = sum;
      }
  }

  34. [Chart: Matmult performance — multiplications per time unit (0–160) vs. matrix size n for the six loop orderings ijk, ikj, jik, jki, kij, kji; performance drops once the working set is too big for the L1 cache, and again when it is too big for the L2 cache]
