
Computation on meshes, sparse matrices, and graphs








  1. Computation on meshes, sparse matrices, and graphs Some slides are from David Culler, Jim Demmel, Bob Lucas, Horst Simon, Kathy Yelick, et al., UCB CS267

  2. Parallelizing Stencil Computations • Parallelism is simple • Grid is a regular data structure • Even decomposition across processors gives load balance • Spatial locality limits communication cost • Communicate only boundary values from neighboring patches • Communication volume • v = total # of boundary cells between patches
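
A minimal serial sketch of one sweep of the 5-point stencil in Python/NumPy (the grid size and boundary condition are made up for illustration; the slides do not prescribe any particular code):

```python
import numpy as np

def stencil_sweep(u):
    """One Jacobi-style sweep of the 5-point averaging stencil on a 2-D grid.
    Each interior point is replaced by the average of its four neighbors;
    boundary values are held fixed."""
    new = u.copy()
    new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                              u[1:-1, :-2] + u[1:-1, 2:])
    return new

u = np.zeros((8, 8))
u[0, :] = 1.0            # fixed boundary condition on one edge
for _ in range(100):     # repeatedly apply the stencil
    u = stencil_sweep(u)
```

Parallelizing this amounts to splitting the grid into patches and exchanging only the patch boundaries, which the next slides quantify.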

  3. Two-dimensional block decomposition • n mesh cells, p processors • Each processor has a patch of n/p cells • Block row (or block col) layout: v = 2 * p * sqrt(n) • 2-dimensional block layout: v = 4 * sqrt(p) * sqrt(n)
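
A quick back-of-the-envelope comparison of the two formulas (the mesh and processor counts are made up):

```python
from math import sqrt

n, p = 1_000_000, 64          # mesh cells, processors

v_1d = 2 * p * sqrt(n)        # block row (or block column) layout
v_2d = 4 * sqrt(p) * sqrt(n)  # 2-dimensional block layout

print(v_1d)   # 128000.0 boundary cells communicated
print(v_2d)   # 32000.0  -- smaller by a factor of sqrt(p)/2
```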

  4. Ghost Nodes in Stencil Computations Comm cost = α * (#messages) + β * (total size of messages) • Keep a ghost copy of neighbors’ boundary nodes • Communicate every second iteration, not every iteration • Reduces #messages, not total size of messages • Costs extra memory and computation • Can also use more than one layer of ghost nodes [Figure legend: green = my interior nodes; blue = my boundary nodes; yellow = neighbors’ boundary nodes = my “ghost nodes”]
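
A sketch of the ghost-row exchange for a 1-D block-row decomposition, written with mpi4py purely for illustration (the slides do not prescribe a library, and the patch sizes are made up):

```python
# Run with e.g.:  mpiexec -n 4 python ghost_exchange.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nloc, ncols = 100, 256
u = np.zeros((nloc + 2, ncols))   # local patch plus one ghost row above and below

up   = rank - 1 if rank > 0 else MPI.PROC_NULL
down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Send my first real row up; receive the lower neighbor's boundary into my bottom ghost row.
comm.Sendrecv(u[1, :],    dest=up,   recvbuf=u[nloc + 1, :], source=down)
# Send my last real row down; receive the upper neighbor's boundary into my top ghost row.
comm.Sendrecv(u[nloc, :], dest=down, recvbuf=u[0, :],        source=up)
```

With more than one ghost layer, extra rows would be exchanged and the exchange could be done every second iteration, trading extra memory and redundant computation for fewer messages.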

  5. Parallelism in Regular Meshes • Computing a stencil on a regular mesh • Need to communicate mesh points near the boundary to neighboring processors • Often done with ghost regions • Surface-to-volume ratio keeps communication down, but • Still may be problematic in practice [Figure annotation: implemented using “ghost” regions; adds memory overhead]

  6. Irregular mesh: NASA Airfoil in 2D

  7. Composite Mesh from a Mechanical Structure

  8. Adaptive Mesh Refinement (AMR) • Adaptive mesh around an explosion • Refinement done by calculating errors

  9. Adaptive Mesh Shock waves in gas dynamics, simulated using AMR (Adaptive Mesh Refinement). [Figure: fluid density] See: http://www.llnl.gov/CASC/SAMRAI/

  10. Irregular mesh: Tapered Tube (Multigrid)

  11. Challenges of Irregular Meshes for PDEs • How to generate them in the first place • For example, Triangle, a 2D mesh generator (Shewchuk) • 3D mesh generation is harder! For example, QMD (Vavasis) • How to partition them into patches • For example, ParMetis, a parallel graph partitioner (Karypis) • How to design iterative solvers • For example, PETSc, Aztec, Hypre (all from national labs) • … Prometheus, a multigrid solver for finite elements on irregular meshes • How to design direct solvers • For example, SuperLU, parallel sparse Gaussian elimination • These are challenging even to do sequentially, and more so in parallel

  12. Converting the Mesh to a Matrix

  13. Sparse matrix data structure (stored by rows) • Full: • 2-dimensional array of real or complex numbers • (nrows*ncols) memory • Sparse: • compressed row storage • about (2*nzs + nrows) memory
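
A small illustration of compressed row storage using scipy.sparse (the matrix values are made up):

```python
import numpy as np
from scipy.sparse import csr_matrix

A = np.array([[4.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])

S = csr_matrix(A)
print(S.data)     # [4. 1. 3. 2. 5.]  nonzero values, stored row by row
print(S.indices)  # [0 2 1 0 2]       column index of each nonzero
print(S.indptr)   # [0 2 3 5]         row i occupies data[indptr[i]:indptr[i+1]]

# Memory: data + indices take about 2*nzs entries and indptr takes nrows+1,
# versus nrows*ncols for the full 2-D array.
```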

  14. Distributed row sparse matrix data structure • Each processor stores: • # of local nonzeros • range of local rows • nonzeros in CSR form [Figure: block rows of the matrix distributed across processors P0, P1, P2, …, Pn]
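
A minimal sketch of what one processor might hold in this layout (the field names are illustrative, not from the slides):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class LocalRowBlock:
    """One processor's share of a row-distributed sparse matrix."""
    row_start: int        # first global row owned by this processor
    row_end: int          # one past the last global row owned
    nnz_local: int        # number of nonzeros in the local rows
    # the local rows themselves, in CSR form:
    data: np.ndarray      # nonzero values
    indices: np.ndarray   # global column index of each nonzero
    indptr: np.ndarray    # local row pointers, length (row_end - row_start) + 1
```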

  15. Conjugate gradient iteration to solve A*x = b
      x_0 = 0,  r_0 = b,  d_0 = r_0
      for k = 1, 2, 3, . . .
          α_k = (r_{k-1}^T r_{k-1}) / (d_{k-1}^T A d_{k-1})     step length
          x_k = x_{k-1} + α_k d_{k-1}                            approx solution
          r_k = r_{k-1} – α_k A d_{k-1}                          residual
          β_k = (r_k^T r_k) / (r_{k-1}^T r_{k-1})                improvement
          d_k = r_k + β_k d_{k-1}                                search direction
      • One matrix-vector multiplication per iteration
      • Two vector dot products per iteration
      • Four n-vectors of working storage
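
The same iteration written as runnable Python/NumPy (a direct transcription of the pseudocode above, with a made-up 2 x 2 test problem; no preconditioning):

```python
import numpy as np

def conjugate_gradient(A, b, iters=50):
    """Unpreconditioned CG for a symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b.copy()                # r_0 = b - A*x_0 = b since x_0 = 0
    d = r.copy()                # initial search direction
    rr = r @ r
    for _ in range(iters):
        Ad = A @ d              # the one matrix-vector product per iteration
        alpha = rr / (d @ Ad)   # step length
        x += alpha * d          # approximate solution
        r -= alpha * Ad         # residual
        rr_new = r @ r
        beta = rr_new / rr      # improvement this step
        d = r + beta * d        # new search direction
        rr = rr_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small SPD test matrix
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))          # ~[0.0909, 0.6364], i.e. A\b
```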

  16. Parallel Vector Operations • Suppose x, y, and z are column vectors • Data is partitioned among all processors • Vector add: z = x + y • Embarrassingly parallel • Scaled vector add (DAXPY): z = α*x + y • Broadcast the scalar α, then independent * and + • Vector dot product (DDOT): β = x^T y = Σ_j x[j] * y[j] • Independent *, then + reduction
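
A sketch of the distributed versions, using mpi4py purely as an illustrative assumption: each processor holds its own slice of x and y, DAXPY is communication-free once α is known, and DDOT needs one sum-reduction:

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD

def daxpy(alpha, x_local, y_local):
    # alpha is replicated (or has been broadcast); the rest is purely local
    return alpha * x_local + y_local

def ddot(x_local, y_local):
    partial = float(x_local @ y_local)          # independent local * and +
    return comm.allreduce(partial, op=MPI.SUM)  # + reduction across processors
```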

  17. Broadcast and reduction • Broadcast of 1 value to p processors in log p time • Reduction of p values to 1 in log p time • Takes advantage of associativity in +, *, min, max, etc. [Figure: broadcast of a scalar α to 8 processors; add-reduction of the values 1, 3, 1, 0, 4, -6, 3, 2 to the sum 8]
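
A toy sequential model of the pairwise reduction tree, just to show where the log p comes from (this is not MPI code; each pass of the loop stands for one parallel step):

```python
def tree_reduce(values, op=lambda a, b: a + b):
    """Combine p values pairwise; with real processors every pass of the
    while-loop happens simultaneously, so p values take ceil(log2 p) steps."""
    vals = list(values)
    steps = 0
    while len(vals) > 1:
        vals = [op(vals[i], vals[i + 1]) if i + 1 < len(vals) else vals[i]
                for i in range(0, len(vals), 2)]
        steps += 1
    return vals[0], steps

print(tree_reduce([1, 3, 1, 0, 4, -6, 3, 2]))   # (8, 3): sum 8 after log2(8) = 3 steps
```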

  18. Parallel Dense Matrix-Vector Product (Review) • y = A*x, where A is a dense matrix • Layout: 1D by rows • Algorithm: for each processor j: broadcast x(j); compute its contribution A(p)*x(j) • A(i) is the n/p by n block row that processor Pi owns • Algorithm uses the formula Y(i) = A(i)*X = Σ_j A(i)*X(j) [Figure: block rows of A and segments of x, y owned by processors P0–P3]
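
A serial NumPy illustration of the formula Y(i) = Σ_j A(i)*X(j): each "processor" owns one block row of A and the matching segment of x (data layout only; no actual message passing here):

```python
import numpy as np

n, p = 8, 4                              # matrix dimension, number of "processors"
A = np.random.rand(n, n)
x = np.random.rand(n)

rows = np.array_split(np.arange(n), p)   # block-row ownership

y = np.zeros(n)
for i in range(p):                       # what processor i would compute
    y_i = np.zeros(len(rows[i]))
    for j in range(p):                   # after x(j) has been broadcast
        y_i += A[np.ix_(rows[i], rows[j])] @ x[rows[j]]
    y[rows[i]] = y_i

assert np.allclose(y, A @ x)             # matches the serial product
```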

  19. Parallel sparse matrix-vector product • Lay out matrix and vectors by rows • y(i) = sum(A(i,j)*x(j)) • Only compute terms with A(i,j) ≠ 0 • Algorithm: each processor i: broadcast x(i); compute y(i) = A(i,:)*x • Optimizations • Only send each proc the parts of x it needs, to reduce comm • Reorder matrix for better locality by graph partitioning • Worry about balancing number of nonzeros / processor, if rows have very different nonzero counts [Figure: block rows of A and segments of x, y owned by processors P0–P3]
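
A sketch of the row-distributed sparse matvec with mpi4py and scipy, both used here as illustrative assumptions. For simplicity every processor gathers all of x; sending each processor only the parts of x it needs is exactly the first optimization listed above:

```python
from mpi4py import MPI
import numpy as np
from scipy.sparse import random as sparse_random

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nloc = 100                     # rows owned by this processor (made-up sizes)
n = nloc * size                # global dimension
A_local = sparse_random(nloc, n, density=0.01, format='csr')  # my block of rows
x_local = np.random.rand(nloc)                                # my segment of x

# "Broadcast x(i)": gather all segments so every processor has the full x.
x_full = np.concatenate(comm.allgather(x_local))

# Local CSR times full vector: only terms with A(i,j) != 0 are computed.
y_local = A_local @ x_full
```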

  20. Other memory layouts for matrix-vector product • Column layout of the matrix eliminates the broadcast • But adds a reduction to update the destination – same total comm • Blocked layout uses a broadcast and reduction, both on only sqrt(p) processors – less total comm • Blocked layout has advantages in multicore / Cilk++ too [Figure: column layout on 4 processors (P0–P3) and a 2-D blocked layout on 16 processors (P0–P15)]

  21. Graphs and Sparse Matrices • Sparse matrix is a representation of a (sparse) graph [Figure: a 6 × 6 sparse matrix and the corresponding 6-vertex graph] • Matrix entries are edge weights • Number of nonzeros per row is the vertex degree • Edges represent data dependencies in matrix-vector multiplication
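
A small example of the matrix-graph correspondence with scipy (the edge set is made up): the nonzeros per row give the vertex degrees, and the nonzero pattern of row i tells us which x(j) the matvec needs for y(i):

```python
import numpy as np
from scipy.sparse import csr_matrix

# Undirected graph on 4 vertices with edges (0,1), (0,2), (1,2), (2,3)
rows = [0, 1, 0, 2, 1, 2, 2, 3]
cols = [1, 0, 2, 0, 2, 1, 3, 2]
A = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(4, 4))

degrees = np.diff(A.indptr)    # nonzeros per row = vertex degree
print(degrees)                 # [2 2 3 1]
```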

  22. Graph partitioning • Assigns subgraphs to processors • Determines parallelism and locality • Tries to make subgraphs all the same size (load balance) • Tries to minimize edge crossings (communication) • Exact minimization is NP-complete [Figure: two partitions of the same graph, one with edge crossings = 6 and one with edge crossings = 10]
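
Counting edge crossings for a given vertex-to-processor assignment is straightforward; partitioners such as ParMetis try to minimize this quantity heuristically while keeping the parts balanced. A small sketch on the toy graph from the previous snippet:

```python
import numpy as np
from scipy.sparse import csr_matrix

def edge_crossings(A, part):
    """Count undirected edges whose two endpoints are assigned to different parts."""
    C = A.tocoo()
    return sum(1 for i, j in zip(C.row, C.col) if i < j and part[i] != part[j])

# Same toy graph as above: edges (0,1), (0,2), (1,2), (2,3)
rows = [0, 1, 0, 2, 1, 2, 2, 3]
cols = [1, 0, 2, 0, 2, 1, 3, 2]
A = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(4, 4))

part = np.array([0, 0, 1, 1])   # vertices 0,1 on P0; vertices 2,3 on P1
print(edge_crossings(A, part))  # 2: edges (0,2) and (1,2) cross the cut
```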

  23. Link analysis of the web • Web page = vertex • Link = directed edge • Link matrix: A(i,j) = 1 if page i links to page j [Figure: a 7-page web graph and its 7 × 7 link matrix]

  24. Web graph: PageRank (Google) [Brin, Page] An important page is one that many important pages point to. • Markov process: follow a random link most of the time; otherwise, go to any page at random. • Importance = stationary distribution of the Markov process. • Transition matrix is p*A + (1-p)*ones(size(A)), scaled so each column sums to 1. • Importance of page i is the i-th entry in the principal eigenvector of the transition matrix. • But the matrix is 1,000,000,000,000 by 1,000,000,000,000.
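
A tiny power-method sketch of the slide's construction on a made-up 4-page web (in practice the dense ones(size(A)) term is never formed explicitly; also note that with the convention A(i,j) = 1 when page i links to page j, the column-stochastic transition matrix is built from the transpose of A):

```python
import numpy as np

# Made-up 4-page web: A[i, j] = 1 if page i links to page j
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

n = A.shape[0]
p = 0.85                           # probability of following a link

# Random-surfer transition matrix, scaled so each column sums to 1.
G = p * A.T + (1 - p) * np.ones((n, n))
G = G / G.sum(axis=0)

# Power method: repeated matvecs converge to the principal eigenvector,
# i.e. the stationary distribution of the Markov process.
x = np.full(n, 1.0 / n)
for _ in range(100):
    x = G @ x
    x /= x.sum()
print(x)                           # importance (PageRank) of each page
```

At web scale the same iteration is run with a sparse link matrix and the rank-one correction applied implicitly, which is why it reduces to matvec and vector arithmetic, as the next slide notes.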

  25. A Page Rank Matrix • Importance ranking of web pages • Stationary distribution of a Markov chain • Power method: matvec and vector arithmetic • Matlab*P page ranking demo (from SC’03) on a web crawl of mit.edu (170,000 pages)

  26. Social Network Analysis in Matlab: 1993 Co-author graph from the 1993 Householder symposium

  27. Social Network Analysis in Matlab: 1993 Which author has the most collaborators?
      >> [count,author] = max(sum(A))
      count = 32
      author = 1
      >> name(author,:)
      ans = Golub
      [Figure: sparse adjacency matrix]

  28. Social Network Analysis in Matlab: 1993 Have Gene Golub and Cleve Moler ever been coauthors?
      >> A(Golub,Moler)
      ans = 0
      No. But how many coauthors do they have in common?
      >> AA = A^2;
      >> AA(Golub,Moler)
      ans = 2
      And who are those common coauthors?
      >> name( find ( A(:,Golub) .* A(:,Moler) ), : )
      ans =
      Wilkinson
      VanLoan
