
CS294: Communication-Avoiding Algorithms
www.cs.berkeley.edu/~odedsc/CS294
Jim Demmel, EECS & Math Departments. Oded Schwartz, EECS Department.


Presentation Transcript


  1. CS294: Communication-Avoiding Algorithms
www.cs.berkeley.edu/~odedsc/CS294
Jim Demmel, EECS & Math Departments
Oded Schwartz, EECS Department

  2. Why avoid communication? (1/2)
Algorithms have two costs (measured in time or energy):
• Arithmetic (FLOPS)
• Communication: moving data
  • between levels of a memory hierarchy (sequential case)
  • between processors over a network (parallel case)
[Figure: a CPU-cache-DRAM memory hierarchy, and a network of processors, each with its own DRAM.]

  3. Why avoid communication? (2/2)
• Running time of an algorithm is the sum of 3 terms:
  • #flops × time_per_flop
  • #words_moved / bandwidth
  • #messages × latency
  (the last two terms are communication)
• time_per_flop << 1/bandwidth << latency
  • The gaps are growing exponentially with time [FOSC]
• Goal: reorganize algorithms to avoid communication
  • Between all memory hierarchy levels: L1 ↔ L2 ↔ DRAM ↔ network, etc.
  • Very large speedups possible
  • Energy savings too!
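To make the three-term cost model on this slide concrete, here is a minimal sketch (my addition, not from the deck) that evaluates it; all hardware numbers and the example kernel are illustrative assumptions chosen only to show how the communication terms can dominate.

```python
# Minimal sketch of the three-term runtime model from slide 3.
# All hardware numbers below are illustrative assumptions, not measurements.

def runtime(flops, words, messages,
            time_per_flop=1e-11,   # seconds per flop (assumed)
            inv_bandwidth=1e-9,    # seconds per word moved (assumed)
            latency=1e-6):         # seconds per message (assumed)
    """Return (arithmetic_time, communication_time) in seconds."""
    arithmetic = flops * time_per_flop
    communication = words * inv_bandwidth + messages * latency
    return arithmetic, communication

# Example: a kernel doing 1e9 flops while moving 1e8 words in 1e4 messages.
arith, comm = runtime(flops=1e9, words=1e8, messages=1e4)
print(f"arithmetic: {arith:.3e} s, communication: {comm:.3e} s")
# With these assumed numbers communication dominates, which is the
# motivation for reorganizing algorithms to move less data.
```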

  4. President Obama cites Communication-Avoiding Algorithms in the FY 2012 Department of Energy Budget Request to Congress:
“New Algorithm Improves Performance and Accuracy on Extreme-Scale Computing Systems. On modern computer architectures, communication between processors takes longer than the performance of a floating point arithmetic operation by a given processor. ASCR researchers have developed a new method, derived from commonly used linear algebra methods, to minimize communications between processors and the memory hierarchy, by reformulating the communication patterns specified within the algorithm. This method has been implemented in the TRILINOS framework, a highly-regarded suite of software, which provides functionality for researchers around the world to solve large scale, complex multi-physics problems.”
FY 2010 Congressional Budget, Volume 4, FY2010 Accomplishments, Advanced Scientific Computing Research (ASCR), pages 65-67.
• CA-GMRES (Hoemmen, Mohiyuddin, Yelick, JD)
• “Tall-Skinny” QR (Grigori, Hoemmen, Langou, JD)

  5. Outline for today
• Jim Demmel
  • Motivation
  • Course scope, goals and organization
  • Reading list topics
  • Survey of CA “Direct” and “Iterative” Linear Algebra
• Oded Schwartz
  • Lower bounds for direct linear algebra

  6. Course Scope and Goals
• Survey prior and ongoing work
• Identify useful techniques, successes, open problems
• Explore new problem classes in which minimizing communication has not been attempted
• Scope:
  • Theory: lower bounds, designing algorithms that attain them (or greatly reduce communication). Ex: both computer science and applied math (“13 motifs”)
  • Software: automating construction of new algorithms. Ex: autotuning, synthesis
  • Hardware: better understanding of communication and energy costs, and how architectures/trends impact algorithms. Ex: heterogeneity
  • Applications: applying new algorithms to important applications. Ex: stroke simulation, AMR, quantum chemistry
• Have fun!

  7. 7 Dwarfs of High Performance Computing (HPC)
[Figure: chart of the 7 dwarfs; Monte Carlo is the only label preserved in the transcript.]

  8. 7 Dwarfs – Are they enough?
[Figure: the same chart as slide 7, with Monte Carlo highlighted.]

  9. 13 Motifs (née “Dwarfs”) of Parallel Computing
[Figure: chart of the 13 motifs, colored by popularity (red hot / blue cool); Monte Carlo is among them.]

  10. Motifs in ParLab Applications (Red Hot / Blue Cool)
[Figure: motifs-by-application chart.]
• What happened to Monte Carlo?

  11. Course Organization
• Lectures on prior techniques
  • Lower bounds, algorithms, SW tools, HW models
• Reading list on web page
• Weekly reading assignments
  • Students choose papers to read/present
• Possible class projects
  • Lower bounds
  • Algorithms (invention/analysis/implementation)
  • Software tools (build/use)
  • Hardware (modeling/analysis)
  • Presentation (eventually publication!)

  12. Reading List Topics (so far)
• Communication lower bounds for linear algebra
• Communication-avoiding algorithms for linear algebra
• Fast Fourier Transform
• Graph Algorithms
• Sorting
• Software Generation
• Data Structures
• Lower bounds for Searching
• Dynamic Programming
• Work stealing

  13. Outline for Linear Algebra
• “Direct” Linear Algebra
  • Lower bounds on communication for linear algebra problems like Ax=b, least squares, Ax = λx, SVD, etc.
  • New algorithms that attain these lower bounds
    • Not in libraries like Sca/LAPACK (yet!)
    • Large speedups possible
  • Autotuning to find the optimal implementation
  • Implications for architectural scaling
    • How flop rate, bandwidths, latencies, and memory sizes need to scale to maintain balance
• Ditto for “Iterative” Linear Algebra

  14. Lower bound for all “direct” linear algebra
• Let M = “fast” memory size (per processor)
• #words_moved (per processor) = Ω( #flops (per processor) / M^(1/2) )
• #messages_sent (per processor) = Ω( #flops (per processor) / M^(3/2) )
• Parallel case: assume either load or memory balanced
• Holds for:
  • Matmul, BLAS, LU, QR, eig, SVD, tensor contractions, …
  • Some whole programs (sequences of these operations, no matter how individual ops are interleaved, e.g., A^k)
  • Dense and sparse matrices (where #flops << n^3)
  • Sequential and parallel algorithms
  • Some graph-theoretic algorithms (e.g., Floyd-Warshall)

  15. Lower bound for all “direct” linear algebra
• Let M = “fast” memory size (per processor)
• #words_moved (per processor) = Ω( #flops (per processor) / M^(1/2) )
• #messages_sent ≥ #words_moved / largest_message_size
• Parallel case: assume either load or memory balanced
• Holds for:
  • Matmul, BLAS, LU, QR, eig, SVD, tensor contractions, …
  • Some whole programs (sequences of these operations, no matter how individual ops are interleaved, e.g., A^k)
  • Dense and sparse matrices (where #flops << n^3)
  • Sequential and parallel algorithms
  • Some graph-theoretic algorithms (e.g., Floyd-Warshall)

  16. Lower bound for all “direct” linear algebra (content identical to slide 14)
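As a quick numerical illustration of the bounds on slides 14-16, here is a minimal sketch (my addition, not from the deck) that evaluates the Ω(...) expressions with their hidden constants dropped; the matmul flop count and the choice of M are assumptions used only for the example.

```python
import math

# Minimal sketch: evaluate the direct-linear-algebra communication lower
# bounds from slides 14-16, ignoring the constant hidden inside Omega(.).

def comm_lower_bounds(flops_per_proc, M):
    """Per-processor lower bounds on words moved and messages sent,
    given #flops per processor and fast memory size M (in words)."""
    words_lb = flops_per_proc / math.sqrt(M)   # Omega(#flops / M^(1/2))
    messages_lb = flops_per_proc / M**1.5      # Omega(#flops / M^(3/2))
    return words_lb, messages_lb

# Example: dense n x n matmul (~2n^3 flops) split over P processors,
# with M taken as one matrix's share per processor (an assumption).
n, P = 10_000, 100
M = n * n // P
words, msgs = comm_lower_bounds(2 * n**3 / P, M)
print(f"words >= ~{words:.3e}, messages >= ~{msgs:.3e} per processor")
```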

  17. TSQR: QR of a Tall, Skinny matrix
W = [ W0 ; W1 ; W2 ; W3 ]
  = [ Q00·R00 ; Q10·R10 ; Q20·R20 ; Q30·R30 ]        (local QR of each row block)
  = diag(Q00, Q10, Q20, Q30) · [ R00 ; R10 ; R20 ; R30 ]
[ R00 ; R10 ] = Q01·R01,   [ R20 ; R30 ] = Q11·R11    (QR of stacked pairs of R factors)
[ R01 ; R11 ] = Q02·R02                               (final QR at the root of the tree)

  18. TSQR: QR of a Tall, Skinny matrix
[Same factorization tree as slide 17.]
Output = { Q00, Q10, Q20, Q30, Q01, Q11, Q02, R02 }

  19. TSQR: An Architecture-Dependent Algorithm
• Parallel: binary reduction tree over W0..W3 (R00..R30 → R01, R11 → R02)
• Sequential: flat “chain” tree, folding in one block at a time (R00 → R01 → R02 → R03)
• Dual core: hybrid of the two
• Multicore / Multisocket / Multirack / Multisite / Out-of-core: ?
• Can choose the reduction tree dynamically
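Below is a minimal, self-contained sketch of the binary-tree (parallel-style) TSQR reduction from slides 17-19, written in plain NumPy rather than with MPI or threads; the block count and matrix sizes are illustrative assumptions, and only the R factor is formed explicitly.

```python
import numpy as np

def tsqr_r(W, num_blocks=4):
    """Binary-tree TSQR sketch: return the R factor of tall-skinny W.
    Each leaf does a local QR of its row block; pairs of R factors are
    stacked and re-factored up the tree, as on slides 17-18."""
    blocks = np.array_split(W, num_blocks, axis=0)
    Rs = [np.linalg.qr(B, mode="r") for B in blocks]   # leaf QRs (communication not modeled)
    while len(Rs) > 1:                                  # combine pairs up the tree
        Rs = [np.linalg.qr(np.vstack(Rs[i:i + 2]), mode="r")
              for i in range(0, len(Rs), 2)]
    return Rs[0]

# Example: a 10,000 x 50 tall-skinny matrix (sizes are illustrative).
rng = np.random.default_rng(0)
W = rng.standard_normal((10_000, 50))
R_tree = tsqr_r(W)
R_lapack = np.linalg.qr(W, mode="r")
# R is unique only up to the signs of its rows, so compare |R|.
print(np.allclose(np.abs(R_tree), np.abs(R_lapack)))
```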

  20. TSQR Performance Results
• Parallel
  • Intel Clovertown: up to 8x speedup (8 cores, dual socket, 10M x 10)
  • Pentium III cluster, Dolphin Interconnect, MPICH: up to 6.7x speedup (16 procs, 100K x 200)
  • BlueGene/L: up to 4x speedup (32 procs, 1M x 50)
  • Tesla C2050 / Fermi: up to 13x (110,592 x 100)
  • Grid: 4x on 4 cities (Dongarra et al.)
  • Cloud: early result, up and running
• Sequential
  • “Infinite speedup” for out-of-core on a PowerPC laptop
    • As little as 2x slowdown vs. (predicted) infinite DRAM
    • LAPACK with virtual memory never finished
Data from Grey Ballard, Mark Hoemmen, Laura Grigori, Julien Langou, Jack Dongarra, Michael Anderson

  21. Exascale Machine Parameters (source: DOE Exascale Workshop)
• 2^20 ≈ 1,000,000 nodes
• 1024 cores/node (a billion cores!)
• 100 GB/sec interconnect bandwidth
• 400 GB/sec DRAM bandwidth
• 1 microsecond interconnect latency
• 50 nanosecond memory latency
• 32 Petabytes of memory
• 1/2 GB total L1 on a node
• 20 Megawatts!?

  22. Exascale predicted speedups for Gaussian Elimination: CA-LU vs. ScaLAPACK-LU
[Figure: modeled-speedup heat map; x-axis log2(p), y-axis log2(n^2/p) = log2(memory per processor).]

  23. Are we using all the hardware resources?
• Assume n x n dense matrices on P processors
• Usual approach
  • 1 copy of data ⇒ memory per processor M ≈ n^2 / P
  • Recall lower bounds:
    #words_moved = Ω( (n^3 / P) / M^(1/2) ) = Ω( n^2 / P^(1/2) )
    #messages = Ω( (n^3 / P) / M^(3/2) ) = Ω( P^(1/2) )
  • Attained by 2D algorithms (many examples)
    • P processors connected in a P^(1/2) x P^(1/2) mesh
    • Each processor owns and computes on a square submatrix
• New approach
  • Use all available memory
  • c > 1 copies of data ⇒ memory per processor M ≈ c·n^2 / P
  • Lower bounds get smaller
  • New 2.5D algorithms can attain the new lower bounds
    • P processors in a (P/c)^(1/2) x (P/c)^(1/2) x c mesh

  24. Perfect Strong Scaling: 2.5D Matmul versus ScaLAPACK
• 2D algorithms use a P^(1/2) x P^(1/2) mesh and minimal memory
• 2.5D algorithms use a (P/c)^(1/2) x (P/c)^(1/2) x c mesh and c-fold memory
  • Matmul sends c^(1/2) times fewer words, matching the lower bound
  • Matmul sends c^(3/2) times fewer messages, matching the lower bound
[Figure: strong-scaling performance plot of 2.5D Matmul versus ScaLAPACK.]
• Critical to use all links of BG/P’s 3D torus interconnect
• Distinguished Paper Award, EuroPar’11
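To see the c-dependence quantitatively, here is a minimal sketch (my addition) that evaluates the per-processor bandwidth and latency lower bounds for matmul with c replicas of the data, following slides 23-24; the constants hidden by Ω(...) are dropped and the inputs are illustrative.

```python
import math

# Sketch: per-processor lower bounds for n x n matmul with c data copies,
# i.e. M ~ c*n^2/P, following slides 23-24 (Omega constants dropped).

def matmul_bounds(n, P, c=1):
    flops_per_proc = 2 * n**3 / P           # ~2n^3 flops total, load balanced
    M = c * n * n / P                       # memory per processor with c copies
    words = flops_per_proc / math.sqrt(M)   # ~ n^2 / sqrt(c*P)
    messages = flops_per_proc / M**1.5      # ~ sqrt(P) / c^(3/2), up to constants
    return words, messages

n, P = 10_000, 4096
w1, m1 = matmul_bounds(n, P, c=1)   # 2D: one copy of the data
w4, m4 = matmul_bounds(n, P, c=4)   # 2.5D: four copies of the data
print(f"words reduced by ~{w1 / w4:.2f}x (expect c^0.5 = {4**0.5:.2f})")
print(f"messages reduced by ~{m1 / m4:.2f}x (expect c^1.5 = {4**1.5:.2f})")
```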

  25. Implications for Architectural Scaling
• Machine parameters:
  • γ = seconds per flop (multiply or add)
  • β = reciprocal bandwidth (seconds per word)
  • α = latency (seconds per message)
  • M = local (fast) memory size
  • P = number of processors
• Goal: relationships among these parameters that guarantee communication is not the bottleneck for direct linear algebra
• Time = γ · #flops + β · #flops / M^(1/2) + α · #flops / M^(3/2)
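Here is a minimal sketch (my addition) that plugs exascale-flavored parameters, loosely inspired by slide 21, into this time model; the unit conversions and the flop count are rough assumptions made only to show which term dominates for a given M.

```python
import math

# Sketch: Time = gamma*F + beta*F/sqrt(M) + alpha*F/M^1.5  (slide 25).
# All parameter values below are assumptions for illustration.

gamma = 1e-12        # seconds per flop (assumed)
beta = 8 / 100e9     # seconds per 8-byte word at 100 GB/s interconnect (assumed)
alpha = 1e-6         # 1 microsecond interconnect latency (assumed)
F = 1e15             # flops assigned to one processor (assumed)

for M in (1e4, 1e6, 1e8):          # fast memory sizes, in words
    t_flop = gamma * F
    t_bw = beta * F / math.sqrt(M)
    t_lat = alpha * F / M**1.5
    print(f"M={M:.0e}: flops {t_flop:.2e}s, bandwidth {t_bw:.2e}s, latency {t_lat:.2e}s")
# Larger M shrinks the communication terms, which is why the balance
# conditions on the next slides involve powers of M.
```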

  26. Implications for Architectural Scaling, Sequential Case:
• Requirements so that “most” time is spent doing arithmetic on n x n dense matrices, n^2 > M:
  • γ · M^(1/2) ≥ β
    • In other words, the time to add two rows of the largest locally storable square matrix exceeds the reciprocal bandwidth
  • γ · M^(3/2) ≥ α
    • In other words, the time to multiply the two largest locally storable square matrices exceeds the latency
• Applies to every level of the memory hierarchy
• γ · M^(1/3) ≥ β for old algorithms
• γ · M ≥ α for old algorithms
• Stricter requirements on the architecture for old algorithms

  27. Implications for Architectural Scaling, Parallel Case:
• Requirements so that “most” time is spent doing arithmetic on n x n dense matrices:
  • γ · (n/P^(1/2)) ≥ β
    • In other words, the time to add two rows of the locally stored square matrix exceeds the reciprocal bandwidth
  • γ · (n/P^(1/2))^3 ≥ α
    • In other words, the time to multiply two locally stored square matrices exceeds the latency
• Equivalently, with M = (n/P^(1/2))^2 words per processor:
  • γ · M^(1/2) ≥ β
  • γ · M^(3/2) ≥ α
• Stricter requirements on the architecture for old algorithms
• Looser requirements on the architecture for 2.5D algorithms
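The sketch below (my addition) turns the balance conditions of slides 26-27 into a small check; the parameter values are illustrative assumptions.

```python
# Sketch: check the architectural balance conditions of slides 26-27,
# i.e. whether arithmetic time can dominate communication for CA algorithms.
# Parameter values are illustrative assumptions.

def balanced_for_ca(gamma, beta, alpha, M):
    """True if gamma*M^(1/2) >= beta and gamma*M^(3/2) >= alpha,
    the conditions under which communication need not be the bottleneck."""
    return gamma * M**0.5 >= beta and gamma * M**1.5 >= alpha

gamma, beta, alpha = 1e-12, 8e-11, 1e-6   # assumed sec/flop, sec/word, sec/message
for M in (1e2, 1e4, 1e6):                 # fast memory sizes, in words
    print(f"M={M:.0e}: balanced for CA algorithms? {balanced_for_ca(gamma, beta, alpha, M)}")
# With these assumed numbers, the smallest fast memory fails both conditions;
# growing M is what brings the machine back into balance.
```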

  28. Summary of Direct Linear Algebra
• New lower bounds, optimal algorithms, big speedups in theory and practice
• Lots of other topics, open problems
  • Heterogeneous architectures
    • Extends to the case where each processor and link has a different speed (SPAA’11)
  • Lots more dense and sparse algorithms
    • Some designed, a few implemented, the rest to be invented
  • Extensions to Strassen-like algorithms
    • Best Paper Award, SPAA’11
  • Need autotuning, synthesis

  29. Avoiding Communication in Iterative Linear Algebra
• k steps of an iterative solver for sparse Ax=b or Ax=λx
  • Does k SpMVs with A and the starting vector
  • Many such “Krylov Subspace Methods”
• Goal: minimize communication
  • Assume the matrix is “well-partitioned”
  • Serial implementation
    • Conventional: O(k) moves of data from slow to fast memory
    • New: O(1) moves of data (optimal)
  • Parallel implementation on p processors
    • Conventional: O(k log p) messages (k SpMV calls, dot products)
    • New: O(log p) messages (optimal)
• Lots of speedup possible (modeled and measured)
  • Price: some redundant computation

  30. Communication-Avoiding Kernels: The Matrix Powers Kernel [Ax, A^2x, …, A^k x]
• Replace k iterations of y = A·x with [Ax, A^2x, …, A^k x]
• Example: A tridiagonal, n = 32, k = 3
• Works for any “well-partitioned” A
[Figure: dependency diagram with rows x, A·x, A^2·x, A^3·x over columns 1 … 32.]

  31.-35. [Slides 31-35 repeat the text of slide 30; only the figure animation changes, stepping through the dependency pattern of the tridiagonal example.]

  36.-39. Communication-Avoiding Kernels: The Matrix Powers Kernel [Ax, A^2x, …, A^k x]
• Replace k iterations of y = A·x with [Ax, A^2x, …, A^k x]
• Sequential algorithm
• Example: A tridiagonal, n = 32, k = 3
[Figure, built up over slides 36-39: Steps 1-4 sweep left to right across the 32 columns, computing the entries of A·x, A^2·x, A^3·x one block at a time.]

  40. Communication-Avoiding Kernels: The Matrix Powers Kernel [Ax, A^2x, …, A^k x]
• Replace k iterations of y = A·x with [Ax, A^2x, …, A^k x]
• Parallel algorithm
• Example: A tridiagonal, n = 32, k = 3
• Each processor communicates once with its neighbors
[Figure: the 32 columns split among Proc 1 through Proc 4.]

  41. Communication-Avoiding Kernels: The Matrix Powers Kernel [Ax, A^2x, …, A^k x]
• Replace k iterations of y = A·x with [Ax, A^2x, …, A^k x]
• Parallel algorithm
• Example: A tridiagonal, n = 32, k = 3
• Each processor works on an (overlapping) trapezoid of the dependency diagram
[Figure: same diagram, with each processor’s trapezoid marked.]

  42. Communication-Avoiding Kernels: The Matrix Powers Kernel [Ax, A^2x, …, A^k x]
• The same idea works for general sparse matrices
• Simple block-row partitioning → (hyper)graph partitioning
• Top-to-bottom processing → Traveling Salesman Problem
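As a concrete toy version of the tridiagonal example on slides 30-41, here is a minimal sketch (my addition) of the parallel matrix powers idea: each block fetches k ghost entries from each neighbor once, then computes its pieces of Ax, A^2x, A^3x with no further communication. The block layout, sizes, and stencil are illustrative, and communication is only simulated by array slicing.

```python
import numpy as np

# Toy matrix powers kernel for a tridiagonal A (stencil x_{i-1}+2x_i+x_{i+1}),
# following the n=32, k=3 example of slides 30-41. Each "processor" owns a
# contiguous block of indices and grabs k ghost values per side up front.

n, k, nblocks = 32, 3, 4

def apply_tridiag(v):
    """One SpMV with the tridiagonal stencil (zero boundary conditions)."""
    out = 2 * v
    out[1:] += v[:-1]
    out[:-1] += v[1:]
    return out

def block_powers(x, lo, hi):
    """Compute the rows of [Ax, A^2 x, ..., A^k x] restricted to [lo, hi),
    using only x[lo-k : hi+k] (the once-communicated ghost zone)."""
    glo, ghi = max(lo - k, 0), min(hi + k, n)
    local = x[glo:ghi].copy()
    rows = []
    for _ in range(k):
        local = apply_tridiag(local)          # redundant work near the edges
        rows.append(local[lo - glo: hi - glo].copy())
    return rows

x = np.random.default_rng(0).standard_normal(n)
bounds = np.linspace(0, n, nblocks + 1, dtype=int)
pieces = [block_powers(x, bounds[b], bounds[b + 1]) for b in range(nblocks)]
result = [np.concatenate([pieces[b][j] for b in range(nblocks)]) for j in range(k)]

# Check against k plain SpMV sweeps over the whole vector.
ref, v = [], x
for _ in range(k):
    v = apply_tridiag(v)
    ref.append(v)
print(all(np.allclose(r, e) for r, e in zip(result, ref)))
```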

  43. Minimizing Communication of GMRES to solve Ax=b
• GMRES: find x in span{b, Ab, …, A^k b} minimizing ||Ax-b||_2
• Standard GMRES:
    for i = 1 to k
      w = A · v(i-1)                … SpMV
      MGS(w, v(0), …, v(i-1))
      update v(i), H
    endfor
    solve LSQ problem with H
• Communication-avoiding GMRES:
    W = [v, Av, A^2v, …, A^k v]
    [Q, R] = TSQR(W)                … “Tall Skinny QR”
    build H from R
    solve LSQ problem with H
• Sequential case: #words moved decreases by a factor of k
• Parallel case: #messages decreases by a factor of k
• Oops: W comes from the power method, so precision is lost!
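Below is a minimal, single-cycle sketch (my addition) of the communication-avoiding structure on this slide: build the Krylov block W, factor it with one QR (np.linalg.qr standing in for TSQR), form the Hessenberg matrix H from R and the basis’s change-of-basis matrix, and solve the small least-squares problem. It uses the plain monomial basis, so it inherits exactly the precision problem noted above; all sizes and the test matrix are illustrative.

```python
import numpy as np

def ca_gmres_cycle(A, b, x0, k):
    """One CA-GMRES(k)-style cycle with the monomial basis [r, Ar, ..., A^k r].
    np.linalg.qr stands in for TSQR; H is built from R as on slide 43."""
    r0 = b - A @ x0
    # Matrix powers kernel (here just k plain SpMVs): W is n x (k+1).
    W = np.empty((A.shape[0], k + 1))
    W[:, 0] = r0
    for j in range(k):
        W[:, j + 1] = A @ W[:, j]
    Q, R = np.linalg.qr(W)                    # one communication-avoiding QR
    # Monomial basis satisfies A @ W[:, :k] == W @ B with B the shift matrix.
    B = np.diag(np.ones(k), -1)[:, :k]        # (k+1) x k, ones on the subdiagonal
    H = R @ B @ np.linalg.inv(R[:k, :k])      # (k+1) x k upper Hessenberg
    # GMRES least-squares problem in the Q basis: r0 = Q[:, 0] * R[0, 0].
    rhs = np.zeros(k + 1)
    rhs[0] = R[0, 0]
    y, *_ = np.linalg.lstsq(H, rhs, rcond=None)
    return x0 + Q[:, :k] @ y

# Illustrative test: a well-conditioned matrix near the identity and small k.
rng = np.random.default_rng(1)
n, k = 200, 8
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
x = ca_gmres_cycle(A, b, np.zeros(n), k)
print("residual norm:", np.linalg.norm(b - A @ x))
```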

  44. The “monomial” basis [Ax, …, A^k x] fails to converge; a different polynomial basis [p1(A)x, …, pk(A)x] does converge
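To see why the monomial basis causes trouble, here is a small sketch (my addition) comparing the condition number of the monomial Krylov basis with a Newton-style basis built from shifts spread across the spectrum; the diagonal test matrix, the shifts, and k are illustrative, and the Newton-style basis merely stands in for the “different polynomial basis” on this slide.

```python
import numpy as np

# Compare basis conditioning for k+1 Krylov vectors: monomial [x, Ax, ..., A^k x]
# versus a Newton-style basis p_j(A)x = (A - theta_j I) p_{j-1}(A)x.
# The diagonal test matrix and the evenly spaced shifts are illustrative.

rng = np.random.default_rng(0)
n, k = 500, 10
A = np.diag(np.linspace(1.0, 100.0, n))
x = rng.standard_normal(n)

def basis_cond(shifts):
    """Condition number of [p_0(A)x, ..., p_k(A)x] with the given shifts
    (all-zero shifts give the monomial basis); columns are normalized."""
    V = np.empty((n, k + 1))
    V[:, 0] = x
    for j in range(k):
        V[:, j + 1] = A @ V[:, j] - shifts[j] * V[:, j]
    V /= np.linalg.norm(V, axis=0)   # column scaling alone does not fix it
    return np.linalg.cond(V)

print("monomial basis cond:    ", f"{basis_cond(np.zeros(k)):.2e}")
print("Newton-style basis cond:", f"{basis_cond(np.linspace(1.0, 100.0, k)):.2e}")
```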

  45. Speedups of GMRES on an 8-core Intel Clovertown
[Figure: performance results.]
• Requires co-tuning the kernels [MHDY09]

  46. CA-BiCGStab

  47. Tuning space for Krylov Methods
• Classification of sparse operators for avoiding communication
  • Explicit indices or nonzero entries cause most communication, along with the vectors
  • Ex: with stencils (all implicit), all communication is for vectors
  [Table: operator classes by whether indices and nonzero entries are stored explicitly or implicitly.]
• Operations
  • [x, Ax, A^2x, …, A^k x] or [x, p1(A)x, p2(A)x, …, pk(A)x]
  • Number of columns in x
  • [x, Ax, A^2x, …, A^k x] and [y, A^T y, (A^T)^2 y, …, (A^T)^k y], or [y, A^T A y, (A^T A)^2 y, …, (A^T A)^k y]
  • Return all vectors or just the last one
• Cotuning and/or interleaving
  • W = [x, Ax, A^2x, …, A^k x] and {TSQR(W) or W^T W or …}
  • Ditto, but throw away W
• Preconditioned versions

  48. Summary of Iterative Linear Algebra
• New lower bounds, optimal algorithms, big speedups in theory and practice
• Lots of other progress, open problems
  • Many different algorithms reorganized
    • More underway, more to be done
  • Architectural scaling rules (as for the direct case)
    • Sparse matrices ⇒ stricter conditions for scaling
  • Need to recognize stable variants more easily
  • Preconditioning
  • Autotuning and synthesis
  • Different kinds of “sparse matrices”

  49. For further information
• www.cs.berkeley.edu/~demmel
• Papers
  • bebop.cs.berkeley.edu
  • www.netlib.org/lapack/lawns
• 1-week short course: slides and video
  • www.ba.cnr.it/ISSNLA2010
• Google “parallel computing course”

  50. Summary
• Time to redesign all linear algebra algorithms and software
• And eventually the rest of the 13 motifs
• Don’t Communic…
