
  1. CES 524 May 6
  • Eleven Advanced Cache Optimizations (Ch 5)
  • Parallel Architectures (Ch 4)
  Slides adapted from Patterson, UC Berkeley

  2. Review: Basic Cache Optimizations
  Reducing hit time
  1. Giving Reads Priority over Writes
  • E.g., a read completes before earlier writes still in the write buffer
  2. Lower associativity
  Reducing Miss Penalty
  3. Multilevel Caches
  Reducing Miss Rate
  4. Larger Block size (fewer Compulsory misses)
  5. Larger Cache size (fewer Capacity misses)
  6. Higher Associativity (fewer Conflict misses)

  3. Eleven Advanced Cache Optimizations
  Reducing hit time
  • Small and simple caches
  • Way prediction
  • Trace caches
  Increasing cache bandwidth
  • Pipelined caches
  • Multibanked caches
  • Nonblocking caches
  Reducing Miss Penalty
  • Critical word first
  • Merging write buffers
  Reducing Miss Rate
  • Compiler optimizations
  Reducing miss penalty or miss rate via parallelism
  • Hardware prefetching
  • Compiler prefetching

  4. 1. Fast Hit Times via Small, Simple Caches
  • Indexing the tag memory and then comparing tags takes time
  • A small cache helps hit time, since a smaller memory takes less time to index to find the right set of block(s) in the cache
  • E.g., the fast L1 caches stayed the same small size for 3 generations of AMD microprocessors: K6, Athlon, and Opteron
  • Also, having an L2 cache small enough to fit on-chip with the processor avoids the time penalty of going off chip (~10X longer data latency off-chip)
  • Simple direct mapping
  • Overlap the tag check with data transmission, since there is no choice of block (kill the data if the tag is bad)
  • Access time estimate for 90 nm using the CACTI model 4.0
  • Median ratios of access time relative to direct-mapped caches are 1.32, 1.39, and 1.43 for 2-way, 4-way, and 8-way caches

  5. 2. Fast Hit Times via Way Prediction
  • How to combine the fast hit time of a Direct Mapped cache with the lower conflict misses of a 2-way Set Associative cache?
  • Way prediction: keep extra bits in the cache to predict the "way," or block within the set, of the next cache access
  • The multiplexor is set early to select the desired block; only 1 tag comparison is performed that clock cycle, in parallel with reading the cache data
  • Miss => check the other blocks for matches in the next clock cycle
  • Accuracy ~= 85%
  • Drawback: hard to tune the CPU pipeline if hit time varies between 1 and 2 cycles
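To make the mechanism concrete, here is a minimal C sketch of a way-predicted lookup in a 2-way set-associative cache. The cache geometry (256 sets, 64-byte blocks), structure layout, and names are illustrative assumptions, not taken from the slides or any particular processor.

    #include <stdint.h>
    #include <stdbool.h>

    #define NUM_SETS 256

    typedef struct {
        uint32_t tag[2];
        bool     valid[2];
        uint8_t  predicted_way;   /* the extra prediction bit kept per set */
    } cache_set_t;

    /* Returns 1 on a fast hit (predicted way correct), 2 on a slow hit
     * (the other way matched in the next cycle), 0 on a miss. */
    int lookup(cache_set_t *sets, uint32_t addr)
    {
        uint32_t set_index = (addr >> 6) & (NUM_SETS - 1);  /* 64-byte blocks assumed */
        uint32_t tag       = addr >> 14;
        cache_set_t *s     = &sets[set_index];

        uint8_t w = s->predicted_way;
        if (s->valid[w] && s->tag[w] == tag)
            return 1;                       /* 1-cycle hit: prediction was right */

        uint8_t other = 1 - w;
        if (s->valid[other] && s->tag[other] == tag) {
            s->predicted_way = other;       /* retrain the predictor */
            return 2;                       /* extra cycle to check the other way */
        }
        return 0;                           /* miss */
    }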

  6. 4. Increase Cache Bandwidth by Pipelining
  • Pipeline cache access to maintain bandwidth, even though the pipeline gives higher latency for each access
  • Number of instruction cache access pipeline stages:
  1 for Pentium
  2 for Pentium Pro through Pentium III
  4 for Pentium 4
  - greater penalty on mispredicted branches: must restart the pipelined stream of memory addresses at the new PC
  - more clock cycles between the issue of a load and the availability of the loaded data

  7. 5. Increase Cache Bandwidth: Non-Blocking Caches
  • A non-blocking cache or lockup-free cache allows the data cache to continue to supply cache hits during a miss
  • Helps if there are Full/Empty bits on all registers (to allow execution to go on until the missed datum is actually used) or out-of-order completion
  • Requires multi-bank memories for the non-blocking cache
  • "Hit under miss" reduces the effective miss penalty by working during a miss vs. ignoring CPU requests
  • "Hit under multiple miss" or "miss under miss" may further lower the effective miss penalty by overlapping multiple misses
  - Significantly increases the complexity of the cache controller, as there can be multiple outstanding memory accesses
  - Requires multiple main memory banks (otherwise multiple misses cannot be supported)
  - The Pentium Pro allows 4 outstanding memory misses

  8. Value of Hit Under Miss for SPEC (Integer and Floating Point)
  (Figure: "Hit under i Misses" for i = 0, 1, 2, 64; the base case is i = 64)
  • FP programs on average: AMAT = 0.68 -> 0.52 -> 0.34 -> 0.26 as i goes 0 -> 1 -> 2 -> 64
  • Int programs on average: AMAT = 0.24 -> 0.20 -> 0.19 -> 0.19
  • 8 KB Data Cache, Direct Mapped, 32B blocks, 16-cycle miss penalty, SPEC 92
  AMAT = average memory access time

  9. 6. Increase Cache Bandwidth via Multiple Banks
  • Rather than treat the cache as a single monolithic block, divide it into independent banks that can support simultaneous accesses
  • E.g., the T1 ("Niagara") L2 has 4 banks
  • Banking works best when accesses naturally spread themselves across banks => the mapping of addresses to banks affects the behavior of the memory system
  • A simple mapping that works well is "sequential interleaving": the next block of memory goes to the next bank of memory
  • Spread memory block indices sequentially across banks
  • E.g., if there are 4 banks, bank 0 has all blocks whose index modulo 4 is 0; bank 1 has all blocks whose index modulo 4 is 1; … (see the sketch below)
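A minimal C sketch of the sequential-interleaving mapping just described, assuming a flat block index and a fixed bank count; the function names are hypothetical.

    #define NUM_BANKS 4u

    /* Sequential interleaving: consecutive block indices go to consecutive banks. */
    unsigned bank_of(unsigned block_index)
    {
        return block_index % NUM_BANKS;     /* bank = block index mod number of banks */
    }

    unsigned index_within_bank(unsigned block_index)
    {
        return block_index / NUM_BANKS;     /* position of the block inside its bank */
    }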

  10. 7. Reduce Miss Penalty: Early Restart and Critical Word First
  • Do not wait for the full block before restarting the CPU
  • Early restart: As soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution
  • Spatial locality => the CPU tends to want the next sequential word, so the first access to a block is normally to the 1st word, but the next is to the 2nd word, which may stall again, and so on; the benefit from early restart alone is therefore not clear
  • Critical Word First: Request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue execution while filling the rest of the words in the block
  • Long blocks are more popular today => Critical Word First is widely used

  11. 8. Merge Multiple Adjacent New Words in the Write Buffer to Reduce Miss Penalty
  • A write buffer allows the processor to continue without waiting for a write to the next lower memory/cache to finish
  • If the buffer holds blocks of modified words, not just a single word per entry, addresses can be checked to see whether the address of a newly written datum matches an address in an existing write buffer entry
  • If so, the new datum is combined with that existing entry
  • For write-through caches, this widens writes to lower memory from individual words to several sequential words, which allows more efficient use of the memory system
  • The Sun T1 (Niagara) processor, among many others, uses write merging (a sketch follows below)
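The sketch below illustrates the merging check in C. The entry layout, buffer depth, and 8-byte word size are assumptions made for illustration; they are not taken from the slides or any particular CPU.

    #include <stdint.h>
    #include <stdbool.h>

    #define WB_ENTRIES      4
    #define WORDS_PER_ENTRY 4          /* one entry covers an aligned 4-word block */

    typedef struct {
        bool     valid;
        uint64_t block_addr;                   /* address of the aligned block */
        uint64_t data[WORDS_PER_ENTRY];
        bool     word_valid[WORDS_PER_ENTRY];
    } wb_entry_t;

    /* Try to merge a one-word write into an existing entry; otherwise allocate one. */
    bool write_buffer_insert(wb_entry_t *wb, uint64_t addr, uint64_t value)
    {
        uint64_t block  = addr / (8 * WORDS_PER_ENTRY);   /* 8-byte words assumed */
        unsigned offset = (addr / 8) % WORDS_PER_ENTRY;

        for (int i = 0; i < WB_ENTRIES; i++) {            /* check for an address match */
            if (wb[i].valid && wb[i].block_addr == block) {
                wb[i].data[offset]       = value;         /* merge into the entry */
                wb[i].word_valid[offset] = true;
                return true;
            }
        }
        for (int i = 0; i < WB_ENTRIES; i++) {            /* no match: take a free entry */
            if (!wb[i].valid) {
                wb[i].valid              = true;
                wb[i].block_addr         = block;
                wb[i].data[offset]       = value;
                wb[i].word_valid[offset] = true;
                return true;
            }
        }
        return false;                                     /* buffer full: the CPU stalls */
    }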

  12. 9. Reduce Misses by Compiler Optimizations
  • McFarling [1989] used software to reduce cache misses by 75% for an 8 KB direct-mapped cache with 4-byte blocks
  • Instructions
  • Reorder procedures in memory so as to reduce conflict misses
  • Profiling to look at conflicts (using tools they developed)
  • Data
  • Merging Arrays: improve spatial locality by using a single array of compound elements vs. 2 arrays
  • Loop Interchange: change the nesting of loops to access data in the order in which they are stored in memory
  • Loop Fusion: combine 2 non-dependent loops that have the same looping structure, so there are more accesses to all common variables in each iteration
  • Blocking: improve temporal locality by accessing "blocks" of data (sized to fit in the cache) repeatedly vs. going down whole columns or rows and moving from cache block to cache block rapidly

  13. Merging Arrays Example

    /* Before: 2 sequential arrays */
    int val[SIZE];
    int key[SIZE];

    /* After: 1 array of structures */
    struct merge {
        int val;
        int key;
    };
    struct merge merged_array[SIZE];

  Reduces conflicts between val & key and improves spatial locality

  14. Loop Interchange Example

    /* Before */
    for (k = 0; k < 100; k = k+1)
        for (j = 0; j < 100; j = j+1)
            for (i = 0; i < 5000; i = i+1)
                x[i][j] = 2 * x[i][j];

    /* After, since x[i][j+1] follows x[i][j] in memory */
    for (k = 0; k < 100; k = k+1)
        for (i = 0; i < 5000; i = i+1)
            for (j = 0; j < 100; j = j+1)
                x[i][j] = 2 * x[i][j];

  Sequential accesses instead of striding through memory every 100 words; improves spatial locality

  15. Loop Fusion Example

    /* Before */
    for (i = 0; i < N; i = i+1)
        for (j = 0; j < N; j = j+1)
            a[i][j] = 1/b[i][j] * c[i][j];
    for (i = 0; i < N; i = i+1)
        for (j = 0; j < N; j = j+1)
            d[i][j] = a[i][j] + c[i][j];

    /* After */
    for (i = 0; i < N; i = i+1)
        for (j = 0; j < N; j = j+1) {
            a[i][j] = 1/b[i][j] * c[i][j];
            d[i][j] = a[i][j] + c[i][j];
        }

  Two misses per access to a & c before vs. one miss per access after; improves temporal locality

  16. Blocking Example

    /* Before */
    for (i = 0; i < N; i = i+1)
        for (j = 0; j < N; j = j+1) {
            r = 0;
            for (k = 0; k < N; k = k+1)
                r = r + y[i][k]*z[k][j];
            x[i][j] = r;
        }

  • Two Inner Loops:
  • Read all NxN elements of z[]
  • Read N elements of 1 row of y[] repeatedly
  • Write N elements of 1 row of x[]
  • Capacity Misses are a function of N & Cache Size:
  • 2N^3 + N^2 words accessed (assuming no conflicts; otherwise more)
  • For large N, these long accesses repeatedly flush cache blocks that are needed again soon
  • Idea (the better way): compute on a B x B submatrix that fits in the cache

  17. Blocking Example

    /* After */
    for (jj = 0; jj < N; jj = jj+B)
        for (kk = 0; kk < N; kk = kk+B)
            for (i = 0; i < N; i = i+1)
                for (j = jj; j < min(jj+B, N); j = j+1) {
                    r = 0;
                    for (k = kk; k < min(kk+B, N); k = k+1)
                        r = r + y[i][k]*z[k][j];
                    x[i][j] = x[i][j] + r;
                }

  • B is called the Blocking Factor
  • Capacity Misses fall from 2N^3 + N^2 to 2N^3/B + N^2
  • Do Conflict Misses fall also?

  18. Reduce Conflict Misses by Blocking
  • Conflict misses in non-fully-associative caches vs. blocking size
  • Lam et al [1991] found that a blocking factor of 24 had a fifth the misses of a blocking factor of 48, despite both fitting in the cache (fully associative case)

  19. Summary of Compiler Optimizations to Reduce Cache Misses (by hand)

  20. Reduce Misses by Hardware Prefetching
  • Prefetching relies on having extra memory bandwidth that can be used without penalty, since some prefetched values go unused
  • Instruction Prefetching
  • Typically, the CPU fetches 2 blocks on a miss: the requested block and the next consecutive block
  • The requested block is placed in the instruction cache when it returns, and the prefetched block is placed into an instruction stream buffer
  • Data Prefetching
  • The Pentium 4 can prefetch data into the L2 cache from up to 8 streams from 8 different 4 KB pages
  • Prefetching is invoked whenever there are 2 successive L2 cache misses to a page, if the distance between those cache blocks is < 256 bytes

  21. Reduce Misses by Software Prefetching Data
  • Data Prefetch
  • Load data into a register (HP PA-RISC loads)
  • Cache Prefetch: load into the cache (MIPS IV, PowerPC, SPARC V9)
  • Special prefetching instructions cannot cause faults; a form of speculative execution
  • Issuing prefetch instructions takes time
  • Is the cost of issuing prefetch instructions < the savings from reduced misses?
  • A wider superscalar processor reduces the difficulty of finding issue bandwidth for the prefetches (a sketch follows below)
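As a concrete illustration of compiler-style data prefetching, the sketch below uses the GCC/Clang __builtin_prefetch intrinsic. The loop itself and the prefetch distance of 16 elements are assumptions for illustration; in practice the distance is tuned for the target machine, and the slides do not prescribe a specific value.

    /* Scale an array, prefetching a later element on each iteration. */
    void scale(double *a, long n, double s)
    {
        for (long i = 0; i < n; i++) {
            if (i + 16 < n)
                __builtin_prefetch(&a[i + 16], 1, 1);  /* rw = 1 (write), low temporal locality */
            a[i] = s * a[i];
        }
    }

Like the slide's "special prefetching instructions," the intrinsic is non-faulting: prefetching past the useful data merely wastes bandwidth rather than trapping.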

  22. Compiler Optimization vs. Memory Hierarchy Search
  • The compiler tries to figure out memory hierarchy optimizations
  • New approach: "Auto-tuners" first run variations of the program on the target computer to find the best combinations of optimizations (blocking, padding, …) and algorithms, then produce C code to be compiled for that computer
  • "Auto-tuners" are targeted at numerical methods
  • E.g., PHiPAC (BLAS), ATLAS (BLAS), Sparsity (sparse linear algebra), Spiral (DSP), FFTW (a minimal tuning sketch follows below)
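A minimal sketch, in the spirit of PHiPAC/ATLAS but not taken from them, of the "search" step: time a blocked kernel for several candidate block sizes and keep the fastest. The function blocked_matmul and the candidate list are hypothetical placeholders for whatever kernel and parameter space are being tuned.

    #include <stdio.h>
    #include <time.h>

    extern void blocked_matmul(int n, int b, const double *A, const double *B, double *C);

    int pick_best_block_size(int n, const double *A, const double *B, double *C)
    {
        int candidates[] = {16, 32, 64, 128};
        int num = (int)(sizeof candidates / sizeof candidates[0]);
        int best_b = candidates[0];
        double best_t = 1e30;

        for (int i = 0; i < num; i++) {
            clock_t start = clock();
            blocked_matmul(n, candidates[i], A, B, C);
            double t = (double)(clock() - start) / CLOCKS_PER_SEC;
            printf("b = %3d: %.3f s\n", candidates[i], t);
            if (t < best_t) { best_t = t; best_b = candidates[i]; }
        }
        return best_b;   /* the winning blocking factor for this machine */
    }

A real auto-tuner would reset C between trials, repeat each trial several times, and search a much larger space (register blocking, loop orders, padding), but the structure is the same: measure, compare, emit code specialized to the winner.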

  23. Sparse Matrix: Search for Blocking for a finite element problem [Im, Yelick, Vuduc, 2005]
  (Figure: Mflop/s of the reference code vs. the best blocking found, 4x2)

  24. Why Matrix Multiplication?
  • An important kernel in scientific problems
  • Appears in many linear algebra algorithms
  • Closely related to other algorithms, e.g., transitive closure on a graph using Floyd-Warshall
  • Optimization ideas can be used in other problems
  • The best case for optimization payoffs
  • The most-studied algorithm in high performance computing

  25. Matrix-multiply, optimized several ways
  (Figure: speed of n-by-n matrix multiply on a Sun Ultra-1/170, peak = 330 MFlops)

  26. Outline
  • Idealized and actual costs in modern processors
  • Memory hierarchies
  • Case Study: Matrix Multiplication
  • Simple cache model
  • Warm-up: Matrix-vector multiplication
  • Blocking algorithms
  • Other techniques
  • Automatic Performance Tuning

  27. Note on Matrix Storage
  • A matrix is a 2-D array of elements, but memory addresses are "1-D"
  • Conventions for matrix layout:
  • by column, or "column major" (Fortran default): A(i,j) at A + i + j*n
  • by row, or "row major" (C default): A(i,j) at A + i*n + j
  • recursive (later)
  • Column major for now
  (Figure: a column-major matrix in memory; a blue row of the matrix is spread across several red cachelines. Figure source: Larry Carter, UCSD)
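The two layouts translate into the following index arithmetic in C (a minimal sketch assuming 0-based indexing and A stored as a flat array of n*n doubles):

    /* Row major (C default): A(i,j) at A + i*n + j */
    double get_row_major(const double *A, int n, int i, int j)
    {
        return A[i * n + j];
    }

    /* Column major (Fortran default): A(i,j) at A + i + j*n */
    double get_col_major(const double *A, int n, int i, int j)
    {
        return A[i + j * n];
    }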

  28. Using a Simple Model of Memory to Optimize
  Computational Intensity: key to algorithm efficiency. Machine Balance: key to machine efficiency.
  • Assume just 2 levels in the hierarchy: fast (cache) and slow (DRAM)
  • All data initially in slow memory
  • m = number of memory elements (words) moved between fast and slow memory
  • tm = time per slow memory operation
  • f = number of arithmetic operations
  • tf = time per arithmetic operation, tf << tm
  • q = f / m, the average number of flops per slow memory access
  • Minimum possible time = f * tf when all data is in fast memory
  • Actual time = f * tf + m * tm = f * tf * (1 + (tm/tf) * (1/q))
  • Larger q means time closer to the minimum f * tf
  • q >= tm/tf is needed to get at least half of peak speed
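The model is small enough to write down directly. The sketch below computes the estimated running time and the computational intensity q from caller-supplied f, m, tf, and tm; the function and type names are illustrative.

    typedef struct { double time, q; } model_result_t;

    /* Two-level memory model: time = f*tf + m*tm = f*tf*(1 + (tm/tf)/q). */
    model_result_t model_time(double f, double m, double tf, double tm)
    {
        model_result_t r;
        r.q    = f / m;                             /* flops per slow-memory access */
        r.time = f * tf * (1.0 + (tm / tf) / r.q);  /* identical to f*tf + m*tm */
        return r;
    }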

  29. Warm up: Matrix-vector multiplication
  {implements y = y + A*x}
  for i = 1:n
    for j = 1:n
      y(i) = y(i) + A(i,j)*x(j)
  (Figure: y(i) = y(i) + A(i,:) * x(:))

  30. Warm up: Matrix-vector multiplication
  {read x(1:n) into fast memory}
  {read y(1:n) into fast memory}
  for i = 1:n
    {read row i of A into fast memory}
    for j = 1:n
      y(i) = y(i) + A(i,j)*x(j)
  {write y(1:n) back to slow memory}
  • m = number of slow memory refs = 3n + n^2
  • f = number of arithmetic operations = 2n^2
  • q = f / m ~ 2
  • Matrix-vector multiplication is limited by slow memory speed (a C version is sketched below)
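For reference, a C version of the loop analyzed above might look like this (a sketch assuming 0-based indexing and a row-major flat array for A):

    /* y = y + A*x for an n x n matrix A stored row-major as a flat array. */
    void matvec(int n, const double *A, const double *x, double *y)
    {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                y[i] = y[i] + A[i * n + j] * x[j];   /* 2 flops per element of A read */
    }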

  31. Modeling Matrix-Vector Multiplication
  • Compute time for an n x n = 1000x1000 matrix
  • Time = f * tf + m * tm = f * tf * (1 + (tm/tf) * (1/q)) = 2*n^2 * tf * (1 + (tm/tf) * 1/2)
  • The machine balance tm/tf is the q a program must have to reach at least half of peak speed on that machine

  32. Simplifying Assumptions
  • What simplifying assumptions did we make in this analysis?
  • Ignored parallelism between memory and arithmetic within the processor
  • Sometimes drop the arithmetic term in this type of analysis
  • Assumed fast memory was large enough to hold the three vectors
  • Reasonable if we are talking about any level of cache
  • Not if we are talking about registers (~32 words)
  • Assumed the cost of a fast memory access is 0
  • Reasonable if we are talking about registers
  • Not necessarily if we are talking about cache (1-2 cycles for L1)
  • Assumed memory latency is constant
  • Could simplify even further by ignoring memory operations on the x and y vectors
  • Mflop rate per element = 2 / (2*tf + tm)

  33. Validating the Model
  • How well does the model predict actual performance?
  • Actual DGEMV: the most highly optimized code for the platform
  • The model is sufficient to compare across machines
  • But it under-predicts on the most recent ones due to the latency estimate

  34. Naïve Matrix Multiply
  {implements C = C + A*B}
  for i = 1 to n
    for j = 1 to n
      for k = 1 to n
        C(i,j) = C(i,j) + A(i,k) * B(k,j)
  • The algorithm has 2*n^3 = O(n^3) flops and operates on 3*n^2 words of memory
  • q is potentially as large as 2*n^3 / 3*n^2 = O(n)
  (Figure: C(i,j) = C(i,j) + A(i,:) * B(:,j))

  35. Naïve Matrix Multiply
  {implements C = C + A*B}
  for i = 1 to n
    {read row i of A into fast memory}
    for j = 1 to n
      {read C(i,j) into fast memory}
      {read column j of B into fast memory}
      for k = 1 to n
        C(i,j) = C(i,j) + A(i,k) * B(k,j)
      {write C(i,j) back to slow memory}
  (Figure: C(i,j) = C(i,j) + A(i,:) * B(:,j); a C version is sketched below)
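The same unblocked triple loop, written as C code (a sketch assuming 0-based indexing and row-major flat arrays); this is the code whose memory traffic is counted on the next slide.

    /* C = C + A*B, naïve (unblocked) version for n x n matrices. */
    void naive_matmul(int n, const double *A, const double *B, double *C)
    {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k < n; k++)
                    C[i * n + j] += A[i * n + k] * B[k * n + j];
    }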

  36. Naïve Matrix Multiply
  • Number of slow memory references in the unblocked matrix multiply:
  m = n^3 (read each column of B n times)
    + n^2 (read each row of A once)
    + 2n^2 (read and write each element of C once)
    = n^3 + 3n^2
  • So q = f / m = 2n^3 / (n^3 + 3n^2) ~ 2 for large n; no improvement over matrix-vector multiply

  37. Matrix-multiply, optimized several ways
  (Figure, repeated: speed of n-by-n matrix multiply on a Sun Ultra-1/170, peak = 330 MFlops)

  38. Naïve Matrix Multiply on an RS/6000
  • Size 2000 took 5 days; size 12000 would take 1095 years
  • Measured performance looks like O(N^4.7), i.e., T grows like N^4.7
  • O(N^3) performance would show constant cycles/flop
  (Slide source: Larry Carter, UCSD)

  39. Naïve Matrix Multiply on an RS/6000
  (Figure annotations:)
  • Page miss every iteration
  • TLB miss every iteration
  • Cache miss every 16 iterations
  • Page miss every 512 iterations
  (Slide source: Larry Carter, UCSD)

  40. Blocked (Tiled) Matrix Multiply
  Consider A, B, C to be N-by-N matrices of b-by-b subblocks, where b = n/N is called the block size
  for i = 1 to N
    for j = 1 to N
      {read block C(i,j) into fast memory}
      for k = 1 to N
        {read block A(i,k) into fast memory}
        {read block B(k,j) into fast memory}
        C(i,j) = C(i,j) + A(i,k) * B(k,j)   {do a matrix multiply on blocks}
      {write block C(i,j) back to slow memory}
  (Figure: block C(i,j) = C(i,j) + A(i,k) * B(k,j); a C sketch follows below)
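A C sketch of the blocked algorithm above, using element indices rather than block indices; for simplicity it assumes the block size b divides n evenly (a real version would clamp the block edges as in the earlier blocking example).

    /* C = C + A*B, blocked (tiled) version; b is the block size, b divides n. */
    void blocked_matmul(int n, int b, const double *A, const double *B, double *C)
    {
        for (int ii = 0; ii < n; ii += b)
            for (int jj = 0; jj < n; jj += b)
                for (int kk = 0; kk < n; kk += b)
                    /* multiply the b-by-b blocks: C(ii,jj) += A(ii,kk) * B(kk,jj) */
                    for (int i = ii; i < ii + b; i++)
                        for (int j = jj; j < jj + b; j++) {
                            double r = C[i * n + j];
                            for (int k = kk; k < kk + b; k++)
                                r += A[i * n + k] * B[k * n + j];
                            C[i * n + j] = r;
                        }
    }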

  41. Blocked (Tiled) Matrix Multiply
  Recall:
  • m is the amount of memory traffic between slow and fast memory
  • the matrix has n x n elements, and N x N blocks, each of size b x b
  • f is the number of floating point operations, 2n^3 for this problem
  • q = f / m is our measure of algorithm efficiency in the memory system
  So:
  m = N*n^2 (read each block of B N^3 times; N^3 * b^2 = N^3 * (n/N)^2 = N*n^2)
    + N*n^2 (read each block of A N^3 times)
    + 2n^2 (read and write each block of C once)
    = (2N + 2) * n^2
  • So the computational intensity q = f / m = 2n^3 / ((2N + 2) * n^2) ~= n/N = b for large n
  • So we can improve performance by increasing the block size b
  • Blocked matrix multiply can be much faster than matrix-vector multiply (q = 2)

  42. Using Analysis to Understand Machines
  The blocked algorithm has computational intensity q ~= b
  • The larger the block size, the more efficient our algorithm will be
  • Limit: all three blocks from A, B, C must fit in fast memory (cache), so we cannot make these blocks arbitrarily large
  • Assume your fast memory has size Mfast:
  3b^2 <= Mfast, so q ~= b <= sqrt(Mfast/3)
  • To build a machine that runs matrix multiply at 1/2 the peak arithmetic speed of the machine, we need a fast memory of size
  Mfast >= 3b^2 ~= 3q^2 = 3(tm/tf)^2
  • This size is reasonable for an L1 cache, but not for a register set
  • Note: the analysis assumes it is possible to schedule the instructions perfectly
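As a worked example under an assumed ratio (not a figure from the slides): if tm/tf were about 50, then reaching half of peak needs q of about 50, so Mfast >= 3q^2 = 3 * 2500 = 7500 words, roughly 60 KB with 8-byte words. That is plausible for an on-chip cache but far larger than any register file, which is the point of the bullet above.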

  43. Limits to Optimizing Matrix Multiply
  • The blocked algorithm changes the order in which values are accumulated into each C[i,j] by applying associativity
  • It gets slightly different answers from the naïve code because of roundoff - OK
  • The previous analysis showed that the blocked algorithm has computational intensity q ~= b <= sqrt(Mfast/3)
  • There is a lower bound result that says we cannot do any better than this (using only associativity)
  Theorem (Hong & Kung, 1981): Any reorganization of this algorithm (that uses only associativity) is limited to q = O(sqrt(Mfast))
