
Parallel Graph Algorithms

This lecture explores applications of parallel graph algorithms, with case studies of graph traversal-based problems: breadth-first search, single-source shortest paths, betweenness centrality, and community identification.


Presentation Transcript


  1. Parallel Graph Algorithms Kamesh Madduri (KMadduri@lbl.gov), Lawrence Berkeley National Laboratory. CS267/EngC233, Spring 2010. April 15, 2010

  2. Lecture Outline • Applications • Review of key results • Case studies: Graph traversal-based problems, parallel algorithms • Breadth-First Search • Single-source Shortest paths • Betweenness Centrality • Community Identification

  3. Routing in transportation networks Point-to-point shortest paths in road networks: 15 seconds (naïve) → 10 microseconds. H. Bast et al., “Fast Routing in Road Networks with Transit Nodes”, Science 316, 27 April 2007.

  4. Internet and the WWW • The world-wide web can be represented as a directed graph • Web search and crawl: traversal • Link analysis, ranking: Page rank and HITS • Document classification and clustering • Internet topologies (router networks) are naturally modeled as graphs

  5. Scientific Computing • Reorderings for sparse solvers • Fill reducing orderings • Partitioning, eigenvectors • Heavy diagonal to reduce pivoting (matching) • Data structures for efficient exploitation of sparsity • Derivative computations for optimization • Matroids, graph colorings, spanning trees • Preconditioning • Incomplete factorizations • Partitioning for domain decomposition • Graph techniques in algebraic multigrid • Independent sets, matchings, etc. • Support theory • Spanning trees & graph embedding techniques. Image source: Yifan Hu, “A gallery of large graphs”. Image source: Tim Davis, UF Sparse Matrix Collection. B. Hendrickson, “Graphs and HPC: Lessons for Future Architectures”, http://www.er.doe.gov/ascr/ascac/Meetings/Oct08/Hendrickson%20ASCAC.pdf

  6. Large-scale data analysis • Graph abstractions are very useful to analyze complex data sets. • Sources of data: petascale simulations, experimental devices, the Internet, sensor networks • Challenges: data size, heterogeneity, uncertainty, data quality Social Informatics:new analytics challenges, data uncertainty Astrophysics: massive datasets, temporal variations Bioinformatics: data quality, heterogeneity Image sources: (1) http://physics.nmt.edu/images/astro/hst_starfield.jpg (2,3) www.visualComplexity.com

  7. Data Analysis and Graph Algorithms in Systems Biology • Study of the interactions between various components in a biological system • Graph-theoretic formulations are pervasive: • Predicting new interactions: modeling • Functional annotation of novel proteins: matching, clustering • Identifying metabolic pathways: paths, clustering • Identifying new protein complexes: clustering, centrality Image Source: Giot et al., “A Protein Interaction Map of Drosophila melanogaster”, Science 302, 1722-1736, 2003.

  8. Graph-theoretic problems in social networks • Community identification: clustering • Targeted advertising: centrality • Information spreading: modeling. Image Source: Nexus (Facebook application)

  9. Network Analysis for Intelligence and Surveillance • [Krebs ’04] Post-9/11 terrorist network analysis from public-domain information • Plot masterminds correctly identified from interaction patterns: centrality • A global view of entities is often more insightful • Detect anomalous activities by exact/approximate subgraph isomorphism. Image Source: http://www.orgnet.com/hijackers.html Image Source: T. Coffman, S. Greenblatt, S. Marcus, Graph-based technologies for intelligence analysis, CACM, 47(3), March 2004, pp. 45-47

  10. Research in Parallel Graph Algorithms [Figure: the research space spans three axes. Application areas: social network analysis (find central entities, community detection, network dynamics, marketing, social search), WWW, computational biology (gene regulation, metabolic pathways, genomics), scientific computing (graph partitioning, matching, coloring), engineering (VLSI CAD, route planning). Methods/problems: traversal, shortest paths, connectivity, max flow, …, scaling with data size and problem complexity. Architectures: GPUs, FPGAs, x86 multicore servers, massively multithreaded architectures, multicore clusters, clouds.]

  11. Characterizing graph-theoretic computations (2/2) Factors that influence the choice of algorithm. Input data: • graph sparsity (m/n ratio) • static/dynamic nature • weighted/unweighted, weight distribution • vertex degree distribution • directed/undirected • simple/multi/hyper graph • problem size • granularity of computation at nodes/edges • domain-specific characteristics. Graph kernel: • traversal • shortest path algorithms • flow algorithms • spanning tree algorithms • topological sort • … Problem: find • paths • clusters • partitions • matchings • patterns • orderings. Graph problems are often recast as sparse linear algebra (e.g., partitioning) or linear programming (e.g., matching) computations.

  12. Lecture Outline • Applications • Review of key results • Graph traversal-based parallel algorithms, case studies • Breadth-First Search • Single-source Shortest paths • Betweenness Centrality • Community Identification

  13. History • 1735: “Seven Bridges of Königsberg” problem, resolved by Euler, first result in graph theory. • … • 1966: Flynn’s Taxonomy. • 1968: Batcher’s “sorting networks” • 1969: Harary’s “Graph Theory” • … • 1972: Tarjan’s “Depth-first search and linear graph algorithms” • 1975: Reghbati and Corneil, Parallel Connected Components • 1982: Misra and Chandy, distributed graph algorithms. • 1984: Quinn and Deo’s survey paper on “parallel graph algorithms” • …

  14. The PRAM model • Idealized parallel shared-memory system model • Unbounded number of synchronous processors; no synchronization or communication cost; no parallel overhead • Variants: EREW (exclusive read, exclusive write), CREW (concurrent read, exclusive write) • Measuring performance: space and time complexity; total number of operations (work)

  15. The Helman-JaJa model • Extension of the PRAM model for shared-memory algorithm design and analysis. • T(n, p) is measured by the triplet (TM(n, p), TC(n, p), B(n, p)) • TM(n, p): maximum number of non-contiguous main memory accesses required by any processor • TC(n, p): upper bound on the maximum local computational complexity of any of the processors • B(n, p): number of barrier synchronizations.

  16. PRAM Pros and Cons • Pros • Simple and clean semantics. • The majority of theoretical parallel algorithms are specified in the PRAM model. • Independent of the communication network topology. • Cons • Not realistic: the communication model is too powerful. • The algorithm designer is misled into using inter-processor communication without hesitation. • Synchronized processors. • No local memory. • Big-O notation is often misleading.

  17. Building blocks of classical PRAM graph algorithms • Prefix sums • List ranking • Euler tours, Pointer jumping, Symmetry breaking • Vertex collapse • Tree contraction

  18. Prefix Sums • Input: A, an array of n elements, and an associative binary operation ⊕ • Output: the prefix sums A(1) ⊕ A(2) ⊕ … ⊕ A(i), for 1 ≤ i ≤ n • O(n) work, O(log n) time, n processors. [Figure: balanced binary tree over eight leaves, with upward-sweep partial sums B(h, j) and downward-sweep prefix values C(h, j).]
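
The slide's B/C tree diagram does not survive the transcript, so here is an illustrative sketch (mine, not the presenter's code) of the standard work-efficient up-sweep/down-sweep prefix-sums scheme that tree depicts. The serial inner loops mark the steps that would each run on a separate processor; the names `prefix_sums`, `op`, and `identity` are my own.

```python
def prefix_sums(a, op=lambda x, y: x + y, identity=0):
    """Inclusive prefix sums of a, padded internally to a power of two."""
    n = 1
    while n < len(a):
        n *= 2
    t = list(a) + [identity] * (n - len(a))

    # Up-sweep: log n rounds; in round d, elements d apart combine pairwise.
    d = 1
    while d < n:
        for i in range(2 * d - 1, n, 2 * d):   # independent -> parallel
            t[i] = op(t[i - d], t[i])
        d *= 2

    # Down-sweep: push partial sums back down the implicit tree,
    # turning t into the exclusive scan.
    t[n - 1] = identity
    d = n // 2
    while d >= 1:
        for i in range(2 * d - 1, n, 2 * d):   # independent -> parallel
            t[i - d], t[i] = t[i], op(t[i - d], t[i])
        d //= 2

    return [op(t[i], a[i]) for i in range(len(a))]  # shift to inclusive

print(prefix_sums([3, 1, 4, 1, 5]))  # -> [3, 4, 8, 9, 14]
```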

  19. Parallel Prefix • X: array of n elements stored in arbitrary order. • For each element i, let X(i).value be its value and X(i).next be the index of its successor. • For a binary associative operator Θ, compute X(i).prefix such that • X(head).prefix = X(head).value, and • X(i).prefix = X(i).value Θ X(predecessor).prefix, where head is the first element, i is not equal to head, and predecessor is the node preceding i. • List ranking: special case of parallel prefix with all values initially set to 1 and addition as the associative operator.
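
To make the definition concrete, here is a hedged sketch of Wyllie's pointer-jumping approach, the textbook PRAM realization of parallel prefix on a list. It accumulates, for each node, the sum of values from that node to the tail; with unit values this is the distance to the tail, which is easily converted to the head-relative prefix defined above. All names (`list_rank`, `nxt`) are my own.

```python
# Wyllie's pointer jumping: O(n log n) work, O(log n) time -- not
# work-optimal, which is what motivates the Helman-JaJa variant below.

def list_rank(values, nxt):
    """nxt[i]: index of i's successor, -1 at the tail."""
    rank = list(values)       # rank[i]: running sum from i toward the tail
    nxt = list(nxt)
    while any(j != -1 for j in nxt):
        # One synchronous round: read all, then write (PRAM style).
        new_rank, new_nxt = list(rank), list(nxt)
        for i in range(len(nxt)):            # independent -> parallel
            if nxt[i] != -1:
                new_rank[i] = rank[i] + rank[nxt[i]]
                new_nxt[i] = nxt[nxt[i]]     # jump over the successor
        rank, nxt = new_rank, new_nxt
    return rank

# List 0 -> 2 -> 1 with unit values: rank = distance to tail, inclusive.
print(list_rank([1, 1, 1], [2, -1, 1]))  # -> [3, 1, 2]
```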

  20. List ranking illustration [Figure: X.next arrays for two 8-element lists, one stored in memory in list order and one stored in random order, showing that successor pointers may be arbitrarily scattered.]

  21. List Ranking key idea 1. Chop X randomly into s pieces 2. Traverse each piece using a serial algorithm. 3. Compute the global rank of each element using the result computed from the second step. • In the Helman-JaJa model, TM(n,p) = O(n/p).
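
A minimal sketch of these three steps, assuming the same successor-array representation as above; the splitter sampling, per-piece serial walks, and final offset pass correspond to steps 1-3, and each per-splitter walk is the work one processor would do. All helper names are my own.

```python
import random

def helman_jaja_rank(nxt, head, num_pieces=4):
    """1-based rank from the head; nxt[i] = successor index, -1 at tail."""
    n = len(nxt)
    splitters = {head} | set(random.sample(range(n), min(num_pieces, n)))
    local = [0] * n      # rank within the owning sublist
    owner = [0] * n      # splitter whose sublist contains the node
    sub_len, sub_next = {}, {}

    for s in splitters:                      # step 2: one walk per processor
        i, r = s, 1
        while True:
            local[i], owner[i] = r, s
            j = nxt[i]
            if j == -1 or j in splitters:    # sublist ends at next splitter
                sub_len[s] = r
                sub_next[s] = j if j != -1 else None
                break
            i, r = j, r + 1

    # Step 3: serially rank the (short) chain of sublists, then offset.
    offset, s, start = {}, head, 0
    while s is not None:
        offset[s] = start
        start += sub_len[s]
        s = sub_next[s]
    return [offset[owner[i]] + local[i] for i in range(n)]

print(helman_jaja_rank([2, -1, 1], head=0))  # list 0 -> 2 -> 1: [1, 3, 2]
```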

  22. Connected Components • Building block for many graph algorithms • Minimum spanning tree, biconnected components, planarity testing, etc. • Representative of the “graft-and-shortcut” approach • CRCW PRAM algorithms • [Shiloach & Vishkin ’82]: O(log n) time, O((m+n) log n) work • [Gazit ’91]: randomized, optimal, O(log n) time. • CREW algorithms • [Han & Wagner ’90]: O(log² n) time, O((m + n log n) log n) work.

  23. Shiloach-Vishkin algorithm • Input: n isolated vertices and m PRAM processors. • Each processor Pi grafts a tree rooted at vertex vi onto the tree that contains one of its neighbors u, under the constraint u < vi. • Grafting creates k ≥ 1 connected subgraphs; each subgraph is then shortcut so that the depth of its tree is reduced by at least half. • Repeat graft and shortcut until no more grafting is possible. • Runs on an arbitrary CRCW PRAM in O(log n) time with O(m) processors. • Helman-JaJa model: TM = (3m/p + 2) log n, TB = 4 log n. [Figure: one graft step followed by one shortcut step on a four-vertex example.]
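
Below is a compact sketch of the graft-and-shortcut idea; it is a simplified variant (my own illustration, not the exact Shiloach-Vishkin procedure). Roots are repeatedly grafted toward smaller-labeled roots across edges, and pointer-jumping shortcuts keep trees shallow, until each component is a single star.

```python
def connected_components(n, edges):
    """Returns parent[], one star (root = min label) per component."""
    parent = list(range(n))
    changed = True
    while changed:
        changed = False
        # Graft: hook a tree root onto the smaller-labeled root across
        # an edge (the "u < v" constraint on the slide).
        for u, v in edges:                    # independent -> parallel
            ru, rv = parent[u], parent[v]
            if parent[ru] == ru and rv < ru:
                parent[ru] = rv
                changed = True
            elif parent[rv] == rv and ru < rv:
                parent[rv] = ru
                changed = True
        # Shortcut: one pointer-jumping step halves every tree's depth.
        for i in range(n):                    # independent -> parallel
            if parent[parent[i]] != parent[i]:
                parent[i] = parent[parent[i]]
                changed = True
    return parent

print(connected_components(6, [(0, 1), (1, 2), (4, 5)]))
# -> [0, 0, 0, 3, 4, 4]: components {0,1,2}, {3}, {4,5}
```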

  24. An example higher-level algorithm Typically composed of several low-level efficient algorithms. Tarjan-Vishkin's biconnected components algorithm: O(log n) time, O(m+n) work. • Compute a spanning tree T for the input graph G. • Compute an Eulerian circuit for T. • Root the tree at an arbitrary vertex. • Preorder numbering of all the vertices. • Label edges using the vertex numbering. • Connected components using the Shiloach-Vishkin algorithm.

  25. Data structures: graph representation Static case • Dense graphs (m = O(n²)): adjacency matrix commonly used. • Sparse graphs: adjacency lists. Dynamic case • Representation depends on the common-case query • Edge insertions or deletions? Vertex insertions or deletions? Edge weight updates? • Graph update rate • Queries: connectivity, paths, flow, etc. • Optimizing for locality is a key design consideration.
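
Since the BFS discussion that follows assumes an array-based static representation, here is a minimal sketch of compressed sparse row (CSR) adjacency arrays, the usual concrete form of "adjacency lists" in parallel graph codes: two flat arrays of n+1 and m words, which matters for the locality concerns above. `build_csr` is an illustrative name of my own.

```python
def build_csr(n, edge_list):
    """edge_list: directed (u, v) pairs; returns (idx, adj) arrays.
    Neighbors of v occupy adj[idx[v] : idx[v + 1]]."""
    idx = [0] * (n + 1)
    for u, _ in edge_list:           # count out-degree of each vertex
        idx[u + 1] += 1
    for v in range(n):               # prefix sums give the row offsets
        idx[v + 1] += idx[v]
    adj = [0] * len(edge_list)
    cursor = list(idx)
    for u, v in edge_list:           # scatter edges into their rows
        adj[cursor[u]] = v
        cursor[u] += 1
    return idx, adj

idx, adj = build_csr(4, [(0, 1), (0, 2), (2, 3), (1, 2)])
print(adj[idx[2]:idx[3]])  # neighbors of vertex 2 -> [3]
```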

  26. Data structures in (parallel) graph algorithms • A wide range seen in graph algorithms: array, list, queue, stack, set, multiset, tree • Implementations are typically array-based for performance considerations. • Key data structure considerations in parallel graph algorithm design • Practical parallel priority queues • Space-efficiency • Parallel set/multiset operations, e.g., union, intersection, etc.

  27. Lecture Outline • Applications • Review of key results • Case studies: Graph traversal-based problems, parallel algorithms • Breadth-First Search • Single-source Shortest paths • Betweenness Centrality • Community Identification

  28. Graph Algorithms on today’s systems • Concurrency • Simulating PRAM algorithms: hardware limits of memory bandwidth, number of outstanding memory references; synchronization • Locality • Work-efficiency • Try to improve cache locality, but avoid “too much” superfluous computation

  29. The locality challenge “Large memory footprint, low spatial and temporal locality impede performance” [Figure: serial performance of “approximate betweenness centrality” on a 2.67 GHz Intel Xeon 5560 (12 GB RAM, 8 MB L3 cache); input: synthetic R-MAT graphs (# of edges m = 8n). Performance drops ~5X between the regime with no last-level cache (LLC) misses and the regime with O(m) LLC misses.]

  30. The parallel scaling challenge “Classical parallel graph algorithms perform poorly on current parallel systems” • Graph topology assumptions in classical algorithms do not match real-world datasets • Parallelization strategies at loggerheads with techniques for enhancing memory locality • Classical “work-efficient” graph algorithms may not fully exploit new architectural features • Increasing complexity of memory hierarchy (x86), DMA support (Cell), wide SIMD, floating point-centric cores (GPUs). • Tuning implementation to minimize parallel overhead is non-trivial • Shared memory: minimizing overhead of locks, barriers. • Distributed memory: bounding message buffer sizes, bundling messages, overlapping communication w/ computation.

  31. Optimizing BFS on cache-based multicore platforms, for networks with “power-law” degree distributions [Table: test data — real social networks (Mislove et al., IMC 2007) and synthetic R-MAT networks.]

  32. Graph traversal (BFS) problem definition • Input: a graph and a source vertex. • Output: distance of each vertex from the source. [Figure: ten-vertex example graph labeled with BFS distances from the source.] • Memory requirements (# of machine words): • Sparse graph representation: m + n • Stack of visited vertices: n • Distance array: n
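
A reference serial BFS over the CSR arrays sketched earlier, producing the distance array the slide describes; its memory use matches the slide's word count (idx+adj: m+n, queue: n, dist: n). This is my illustrative code, not the lecture's.

```python
from collections import deque

def bfs(idx, adj, n, source):
    """Distance from source to every vertex; -1 marks unreachable."""
    dist = [-1] * n
    dist[source] = 0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[idx[u]:idx[u + 1]]:
            if dist[v] == -1:          # first visit fixes the distance
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist
```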

  33. Parallel BFS Strategies 1. Expand the current frontier (level-synchronous approach, suited for low-diameter graphs): • O(D) parallel steps • Adjacencies of all vertices in the current frontier are visited in parallel (see the sketch below). 2. Stitch together multiple concurrent traversals (Ullman-Yannakakis approach, suited for high-diameter graphs): • path-limited searches from “super vertices” • APSP between “super vertices”. [Figures: both strategies illustrated on the example graph.]
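
A sketch of strategy 1, the level-synchronous approach: one frontier expansion per level, O(D) steps in all. The loop over frontier adjacencies is the part a real implementation runs in parallel, with the `dist[v] == -1` test-and-set done atomically (e.g., via compare-and-swap).

```python
def bfs_level_sync(idx, adj, n, source):
    """Level-synchronous BFS over CSR arrays (idx, adj)."""
    dist = [-1] * n
    dist[source] = 0
    frontier = [source]
    level = 0
    while frontier:                     # O(D) parallel steps, D = diameter
        next_frontier = []
        for u in frontier:              # frontier adjacencies: parallel
            for v in adj[idx[u]:idx[u + 1]]:
                if dist[v] == -1:       # atomic CAS in a parallel version
                    dist[v] = level + 1
                    next_frontier.append(v)
        frontier = next_frontier        # barrier between levels
        level += 1
    return dist
```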

  34. A deeper dive into the “level-synchronous” strategy Locality (where are the random accesses originating from?) 1. Ordering of vertices in the “current frontier” array, i.e., accesses to the adjacency indexing array: cumulative accesses O(n). 2. Ordering of the adjacency list of each vertex: cumulative O(m). 3. Sifting through adjacencies to check whether each has been visited: cumulative accesses O(m). [Figure: example frontier with vertex labels 84 53 93 31 0 44 74 63 26 11. Access pattern into the idx array (step 1): 53, 31, 74, 26. Access pattern into the d array (steps 2-3): 0, 84, 0, 84, 93, 44, 63, 0, 0, 11.]

  35. Performance Observations [Figures: measurements on the Flickr and YouTube social networks — edge filtering vs. graph expansion.]

  36. Improving locality: Vertex relabeling [Figure: sparse matrix nonzero patterns before and after relabeling.] • Well-studied problem, slight differences in problem formulations • Linear algebra: sparse matrix column reordering to reduce bandwidth, reveal dense blocks. • Databases/data mining: reordering bitmap indices for better compression; permuting vertices of WWW snapshots, online social networks for compression • NP-hard problem, several known heuristics • We require fast, linear-work approaches • Existing ones: BFS- or DFS-based, Cuthill-McKee, Reverse Cuthill-McKee, exploiting overlap in adjacency lists, dimensionality reduction
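
As one concrete linear-work heuristic from the list above, here is a hedged sketch of (Reverse) Cuthill-McKee relabeling: visit vertices in BFS order with lower-degree neighbors first, then reverse the order. It reuses the CSR arrays from the earlier sketches; `rcm_order` is my own name.

```python
from collections import deque

def rcm_order(idx, adj, n):
    """Returns new_label[old_vertex] = new id, in RCM order."""
    degree = [idx[v + 1] - idx[v] for v in range(n)]
    visited = [False] * n
    order = []
    # Start each component from a low-degree vertex.
    for s in sorted(range(n), key=degree.__getitem__):
        if visited[s]:
            continue
        visited[s] = True
        queue = deque([s])
        while queue:                   # plain BFS, neighbors by degree
            u = queue.popleft()
            order.append(u)
            nbrs = sorted((v for v in adj[idx[u]:idx[u + 1]]
                           if not visited[v]), key=degree.__getitem__)
            for v in nbrs:
                visited[v] = True
            queue.extend(nbrs)
    order.reverse()                    # the "reverse" in RCM
    new_label = [0] * n
    for new, old in enumerate(order):
        new_label[old] = new
    return new_label
```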

  37. Improving locality: Optimizations • Recall: potentially O(m) non-contiguous memory references in edge traversal (to check whether a vertex has been visited). • e.g., access order: 53, 31, 31, 26, 74, 84, 0, … • Objective: reduce TLB misses and private cache misses, exploit the shared cache. • Optimizations: • Sort the adjacency list of each vertex: helps order memory accesses, reduces TLB misses. • Permute vertex labels: enhances spatial locality. • Cache-blocked edge visits: exploit temporal locality.
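
A small sketch combining the first two optimizations: rebuild the CSR graph under a new labeling (e.g., the `rcm_order` output above) and sort each adjacency list. `relabel_csr` and its reuse of the earlier `build_csr` sketch are my own illustration, not the lecture's code.

```python
def relabel_csr(idx, adj, n, new_label):
    """Rebuild (idx, adj) under new_label and sort each adjacency list."""
    edges = [(new_label[u], new_label[adj[e]])
             for u in range(n)
             for e in range(idx[u], idx[u + 1])]
    new_idx, new_adj = build_csr(n, edges)   # build_csr: sketch on slide 25
    for v in range(n):                       # sorted rows -> ordered accesses
        row = sorted(new_adj[new_idx[v]:new_idx[v + 1]])
        new_adj[new_idx[v]:new_idx[v + 1]] = row
    return new_idx, new_adj
```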

  38. Improving locality: Cache blocking • Instead of processing the adjacencies of each vertex serially, exploit the sorted adjacency list structure with blocked accesses to the d array (block size tuned to the L2 cache). • Requires multiple passes through the frontier array and tuning for the optimal block size. • High-degree vertices are processed separately, with metadata denoting the blocking pattern. • Note: the frontier array size may be O(n). [Figure: linear processing vs. the new cache-blocked approach over the frontier and adjacency (d) arrays.]

  39. Vertex relabeling heuristic Similar to older heuristics, but tuned for small-world networks: • A high percentage of vertices in social and information networks have (out-)degree 0, 1, or 2, so store their adjacencies explicitly (in the indexing data structure). • Augment the adjacency indexing data structure (with two additional words) and the frontier array (with one bit). • Process “high-degree vertex” adjacencies in linear order, but other vertices with d-array cache blocking. • Form dense blocks around high-degree vertices • Reverse Cuthill-McKee, removing degree-1 and degree-2 vertices

  40. Architecture-specific Optimizations 1. Software prefetching on the Intel Core i7 (supports 32 loads and 20 stores in flight) • Speculative loads of the index array and adjacencies of frontier vertices reduce compulsory cache misses. 2. Aligning adjacency lists to optimize memory accesses • 16-byte aligned loads and stores are faster. • Alignment helps reduce cache misses due to fragmentation. • 16-byte aligned non-temporal stores (during creation of the new frontier) are fast. 3. SIMD SSE integer intrinsics to process “high-degree vertex” adjacencies. 4. Fast atomics (BFS is lock-free with low contention, and CAS-based intrinsics have very low overhead). 5. Huge page support (significant TLB miss reduction). 6. NUMA-aware memory allocation exploiting the first-touch policy.

  41. Experimental Setup • Intel Xeon 5560 (Core i7, “Nehalem”) • 2 sockets × 4 cores × 2-way SMT • 12 GB DRAM, 8 MB shared L3 • 51.2 GB/s peak bandwidth • 2.66 GHz processors. Performance averaged over 10 different source vertices, 3 runs each.

  42. Impact of optimization strategies * Optimization speedup (performance on 4 cores) w.r.t. the baseline parallel approach, on a synthetic R-MAT graph (n = 2²³, m = 2²⁶)

  43. Cache locality improvement Performance count: # of non-contiguous memory accesses (assuming cache line size of 16 words) Theoretical count of the number of non-contiguous memory accesses: m+3n

  44. Parallel performance (Orkut graph) Execution time: 0.28 seconds (8 threads) Parallel speedup: 4.9 Speedup over baseline: 2.9 Graph: 3.07 million vertices, 220 million edges Single socket of Intel Xeon 5560 (Core i7)

  45. Lecture Outline • Applications • Review of key results • Case studies: Graph traversal-based problems, parallel algorithms • Breadth-First Search • Single-source Shortest paths • Betweenness Centrality • Community Identification

  46. Parallel Single-source Shortest Paths (SSSP) algorithms • Edge weights: concurrency is the primary challenge! • No known PRAM algorithm runs in sub-linear time and O(m + n log n) work • Parallel priority queues: relaxed heaps [DGST88], [BTZ98] • Ullman-Yannakakis randomized approach [UY90] • Meyer and Sanders, ∆-stepping algorithm [MS03] • Distributed memory implementations based on graph partitioning • Heuristics for load balancing and termination detection. K. Madduri, D.A. Bader, J.W. Berry, and J.R. Crobak, “An Experimental Study of a Parallel Shortest Path Algorithm for Solving Large-Scale Graph Instances,” Workshop on Algorithm Engineering and Experiments (ALENEX), New Orleans, LA, January 6, 2007.

  47. ∆-stepping algorithm [MS03] • Label-correcting algorithm: can relax edges from unsettled vertices as well • ∆-stepping: an “approximate bucket implementation of Dijkstra's algorithm” • ∆: bucket width • Vertices are ordered using buckets, each representing a priority range of size ∆ • Each bucket may be processed in parallel (see the sketch below)
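
Here is a compact sequential sketch of ∆-stepping (my own illustration of the [MS03] scheme, not the paper's parallel code). Light edges (weight ≤ ∆) generate the "requests" relaxed while the current bucket drains and refills; heavy edges are relaxed once per settled vertex; in a parallel version each frontier's relaxations proceed concurrently.

```python
import math

def delta_stepping(n, graph, source, delta):
    """graph[u]: list of (v, w) pairs; returns the distance array."""
    dist = [math.inf] * n
    buckets = {}                              # bucket index -> set of vertices

    def relax(v, d):                          # process one request (v, d)
        if d < dist[v]:
            if dist[v] < math.inf:            # move v out of its old bucket
                buckets.get(int(dist[v] / delta), set()).discard(v)
            dist[v] = d
            buckets.setdefault(int(d / delta), set()).add(v)

    relax(source, 0.0)
    while buckets:
        i = min(buckets)                      # lowest non-empty bucket
        settled = set()                       # the deleted-vertex set S
        while buckets.get(i):                 # light edges may refill bucket i
            frontier = buckets.pop(i)
            settled |= frontier
            for u in frontier:                # parallel in the real algorithm
                for v, w in graph[u]:
                    if w <= delta:            # light-edge requests
                        relax(v, dist[u] + w)
        for u in settled:                     # heavy edges, once per vertex
            for v, w in graph[u]:
                if w > delta:
                    relax(v, dist[u] + w)
        buckets = {k: s for k, s in buckets.items() if s}
    return dist

g = {0: [(1, 0.3), (2, 1.2)], 1: [(2, 0.4)], 2: []}
print(delta_stepping(3, g, 0, delta=0.5))  # -> [0, 0.3, 0.7] (float rounding aside)
```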

  48. ∆-stepping algorithm: illustration (∆ = 0.1, say) One parallel phase: while (bucket is non-empty) • Inspect light edges • Construct a set of “requests” (R) • Clear the current bucket • Remember deleted vertices (S) • Relax request pairs in R • Relax heavy request pairs (from S) • Go on to the next bucket. [Figure: seven-vertex example graph with edge weights between 0.01 and 0.56; the d array for vertices 0-6 is all ∞; all buckets empty.]

  49. ∆-stepping algorithm: illustration (continued) One parallel phase: (steps as on the previous slide). Initialization: insert s into bucket 0, d(s) = 0. [Figure: the d array is now 0 for the source vertex 0 and ∞ elsewhere; vertex 0 sits in bucket 0.]

  50. ∆-stepping algorithm: illustration (continued) One parallel phase: (steps as above). [Figure: inspecting the light edges of bucket 0 generates the request set R = {(2, 0.01)}; the deleted-vertex set is S = {0}.]
