Presentation Transcript


  1. PowerGraph: Distributed Graph-Parallel Computation on Natural Graphs. Joseph Gonzalez. The Team: Yucheng Low, Aapo Kyrola, Danny Bickson, Haijie Gu, Carlos Guestrin, Alex Smola, Guy Blelloch.

  2. Big graphs are ubiquitous.

  3. Social Media, Web, Advertising, Science. • Graphs encode relationships between: people, products, ideas, facts, and interests. • Big: billions of vertices and edges, and rich metadata.

  4. Graphs are Essential to Data-Mining and Machine Learning • Identify influential people and information • Find communities • Target ads and products • Model complex data dependencies

  5. Natural Graphs: graphs derived from natural phenomena.

  6. Problem: Existing distributed graph computation systems perform poorly on natural graphs.

  7. PageRank on the Twitter Follower Graph: a natural graph with 40M users and 1.4 billion links. PowerGraph gains an order of magnitude over Hadoop and Twister by exploiting properties of natural graphs. (Hadoop results from [Kang et al. '11]; Twister is an in-memory MapReduce [Ekanayake et al. '10].)

  8. Properties of Natural Graphs: unlike a regular mesh, a natural graph has a power-law degree distribution.

  9. Power-Law Degree Distribution (AltaVista WebGraph: 1.4B vertices, 6.6B edges). On a log-log plot of the number of vertices versus degree, the distribution falls on a line of slope -α with α ≈ 2. High-degree vertices dominate: the top 1% of vertices are adjacent to 50% of the edges, while more than 10^8 vertices have only one neighbor.
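
A quick way to get a feel for this property is to sample synthetic degrees from a Zipf (power-law) distribution with exponent α ≈ 2 and measure the share of edge endpoints held by the top 1% of vertices. This is an illustrative sketch, not data from the talk; the sample size and seed are arbitrary, and the exact share varies with both.

    import numpy as np

    # Sample vertex degrees from a power law with exponent alpha ~ 2.
    rng = np.random.default_rng(0)
    degrees = np.sort(rng.zipf(a=2.0, size=1_000_000))[::-1]  # descending

    # Share of edge endpoints adjacent to the top 1% highest-degree vertices.
    top1_percent = degrees[: len(degrees) // 100]
    share = top1_percent.sum() / degrees.sum()
    print(f"top 1% of vertices touch {share:.0%} of edge endpoints")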

  10. The power-law degree distribution yields a "star-like" motif: one very high-degree vertex (e.g., President Obama) surrounded by its followers.

  11. Power-Law Graphs are Difficult to Partition. • Power-law graphs do not have low-cost balanced cuts [Leskovec et al. 08, Lang 04]. • Traditional graph-partitioning algorithms perform poorly on power-law graphs [Abou-Rjeili et al. 06].

  12. Properties of Natural Graphs: the power-law degree distribution produces high-degree vertices, which in turn lead to low-quality partitions.

  13. PowerGraph: program for the original graph, run on the split graph. • Split high-degree vertices across machines. • A new abstraction defines equivalence on split vertices.

  14. How do we program graph computation? "Think like a Vertex." -Malewicz et al. [SIGMOD'10]

  15. The Graph-Parallel Abstraction. • A user-defined vertex-program runs on each vertex. • The graph constrains interaction along edges: using messages (e.g., Pregel [PODC'09, SIGMOD'10]) or through shared state (e.g., GraphLab [UAI'10, VLDB'12]). • Parallelism: run multiple vertex-programs simultaneously.

  16. Data-Parallel vs. Graph-Parallel Tasks. Data-parallel (Map-Reduce): feature extraction, summary statistics, graph construction. Graph-parallel (GraphLab & Pregel): PageRank, community detection, shortest path, interest prediction.

  17. Example: What's the popularity of this user? It depends on the popularity of her followers, which in turn depends on the popularity of their followers.

  18. PageRank Algorithm. • Update ranks in parallel. • Iterate until convergence. The update rule is R[i] = 0.15 + Σ_j w_ji R[j]: the rank of user i is a weighted sum of its neighbors' ranks, plus a constant reset term.
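
A minimal Python sketch of this iteration (the adjacency and weight structures are assumptions for illustration; the talk's own pseudocode appears on the next slides):

    # Iterative PageRank: recompute every rank from the previous ranks
    # ("update in parallel"), and repeat for a fixed number of sweeps.
    def pagerank(in_neighbors, weights, num_iters=30):
        ranks = {i: 1.0 for i in in_neighbors}
        for _ in range(num_iters):
            # The comprehension reads the old ranks, so all updates
            # within a sweep see a consistent snapshot.
            ranks = {
                i: 0.15 + sum(weights[(j, i)] * ranks[j] for j in nbrs)
                for i, nbrs in in_neighbors.items()
            }
        return ranks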

  19. Graph-Parallel Abstractions: Pregel, built on messaging [PODC'09, SIGMOD'10], and GraphLab, built on shared state [UAI'10, VLDB'12]. Many others: Giraph, Golden Orbs, Stanford GPS, Dryad, BoostPGL, Signal-Collect, …

  20. The Pregel Abstraction: vertex-programs interact by sending messages. Malewicz et al. [PODC'09, SIGMOD'10]

      Pregel_PageRank(i, messages):
          // Receive all the messages
          total = 0
          foreach (msg in messages):
              total = total + msg
          // Update the rank of this vertex
          R[i] = 0.15 + total
          // Send new messages to neighbors
          foreach (j in out_neighbors[i]):
              Send msg(R[i] * w_ij) to vertex j

  21. The Pregel Abstraction executes in bulk-synchronous supersteps: compute, communicate, barrier.
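
A minimal sketch of that superstep loop as an in-memory toy engine (the names and the fixed superstep count are assumptions, not Pregel's API):

    # Bulk-synchronous execution: every vertex-program computes on its
    # inbox, messages are exchanged, and a barrier separates supersteps.
    def run_supersteps(vertices, vertex_program, num_steps=10):
        inbox = {v: [] for v in vertices}
        for _ in range(num_steps):
            outbox = {v: [] for v in vertices}
            for v in vertices:                        # compute
                for target, msg in vertex_program(v, inbox[v]):
                    outbox[target].append(msg)
            inbox = outbox                            # communicate + barrier
        return inbox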

  22. The GraphLab Abstraction: vertex-programs directly read the neighbors' state. Low et al. [UAI'10, VLDB'12]

      GraphLab_PageRank(i):
          // Compute sum over neighbors
          total = 0
          foreach (j in in_neighbors(i)):
              total = total + R[j] * w_ji
          // Update the PageRank
          R[i] = 0.15 + total
          // Trigger neighbors to run again
          if R[i] not converged then
              foreach (j in out_neighbors(i)):
                  signal vertex-program on j

  23. GraphLab Execution: the scheduler determines the order in which vertices are executed. CPUs repeatedly pull the next vertex from the scheduler and run its vertex-program, and the process repeats until the scheduler is empty.
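
A minimal single-threaded sketch of that loop (illustrative names, not GraphLab's API); here the vertex-program returns the neighbors it signals:

    from collections import deque

    # Run vertex-programs in scheduler order until the queue drains.
    def run_until_empty(initial_vertices, vertex_program):
        queue = deque(initial_vertices)
        pending = set(queue)                  # avoid duplicate entries
        while queue:                          # "until the scheduler is empty"
            v = queue.popleft()
            pending.discard(v)
            for signaled in vertex_program(v):
                if signaled not in pending:
                    pending.add(signaled)
                    queue.append(signaled)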

  24. Preventing Overlapping Computation. • GraphLab ensures serializable executions: vertex-programs joined by a conflict edge do not run at the same time.

  25. Serializable Execution: for each parallel execution, there exists a sequential execution of vertex-programs that produces the same result.

  26. Graph Computation: Synchronous vs. Asynchronous.

  27. Analyzing Belief Propagation [Gonzalez, Low, G. '09]. Smart scheduling with a priority queue focuses computation on the vertices with important influence. An asynchronous parallel model (rather than BSP) is fundamental for efficiency.

  28. Asynchronous Belief Propagation. Challenge = boundaries. On the graphical model of a synthetic noisy image, the cumulative vertex-update counts show many updates along boundaries and few elsewhere: the algorithm identifies and focuses on the hidden sequential structure.

  29. GraphLab vs. Pregel (BSP): multicore PageRank (25M vertices, 355M edges). 51% of the vertices are updated only once.

  30. Graph-Parallel Abstractions: Pregel is messaging and synchronous; GraphLab is shared state and asynchronous, which is better for ML.

  31. Challenges of High-Degree Vertices. • Edges are processed sequentially. • Sends many messages (Pregel). • Touches a large fraction of the graph (GraphLab). • Edge metadata too large for a single machine. • Asynchronous execution requires heavy locking (GraphLab). • Synchronous execution is prone to stragglers (Pregel).

  32. Communication Overhead for High-Degree Vertices: Fan-In vs. Fan-Out.

  33. Pregel: Message Combiners on Fan-In. • A user-defined commutative, associative (+) message operation lets each machine sum the messages bound for the same vertex locally, so only the combined sum crosses the network.
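
A minimal sketch of such a combiner (illustrative names, not Pregel's actual API):

    # Fan-in combining: messages for the same target are merged locally
    # with a commutative, associative operation, so each machine sends at
    # most one message per target vertex.
    def combine_outgoing(messages, combine=lambda a, b: a + b):
        combined = {}
        for target, value in messages:     # messages: [(target, value), ...]
            if target in combined:
                combined[target] = combine(combined[target], value)
            else:
                combined[target] = value
        return combined

    # Three PageRank contributions to vertex D leave as one message:
    print(combine_outgoing([("D", 0.25), ("D", 0.5), ("D", 0.125)]))  # {'D': 0.875}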

  34. Pregel Struggles with Fan-Out. • Broadcast sends many copies of the same message to the same machine!

  35. Fan-In and Fan-Out Performance: PageRank on synthetic power-law graphs (Piccolo was used to simulate Pregel with combiners). The charts compare high fan-in and high fan-out graphs as high-degree vertices become more common.

  36. GraphLab Ghosting: each machine keeps ghost copies of remote neighbors, and changes to the master are synced to its ghosts.

  37. GraphLab Ghosting: changes to the neighbors of high-degree vertices create substantial network traffic.

  38. Fan-In and Fan-Out Performance: PageRank on synthetic power-law graphs. GraphLab's graph is undirected, so its fan-in and fan-out behavior coincide; the chart compares Pregel fan-in, Pregel fan-out, and GraphLab fan-in/out as the number of high-degree vertices grows.

  39. Graph Partitioning. • Graph-parallel abstractions rely on partitioning to minimize communication and to balance computation and storage. • The data transmitted across the network is O(# edges cut).

  40. Random Partitioning. • Both GraphLab and Pregel resort to random (hashed) partitioning on natural graphs. • With p machines, an edge keeps both endpoints on one machine with probability 1/p, so the expected fraction of edges cut is 1 - 1/p: 10 machines → 90% of edges cut; 100 machines → 99% of edges cut!
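
A two-line check of that expectation:

    # Expected fraction of cut edges under random vertex placement:
    # both endpoints land on the same machine with probability 1/p.
    for p in (10, 100):
        print(f"{p} machines -> {1 - 1 / p:.0%} of edges cut")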

  41. In Summary: GraphLab and Pregel are not well suited for natural graphs. • Challenges of high-degree vertices. • Low-quality partitioning.

  42. PowerGraph. • GAS Decomposition: distribute vertex-programs, move computation to data, and parallelize high-degree vertices. • Vertex Partitioning: effectively distribute large power-law graphs.

  43. A Common Pattern for Vertex-Programs:

      GraphLab_PageRank(i):
          // Gather information about neighborhood:
          // compute sum over neighbors
          total = 0
          foreach (j in in_neighbors(i)):
              total = total + R[j] * w_ji
          // Update vertex: update the PageRank
          R[i] = 0.15 + total
          // Signal neighbors & modify edge data:
          // trigger neighbors to run again
          if R[i] not converged then
              foreach (j in out_neighbors(i)):
                  signal vertex-program on j

  44. GAS Decomposition. • Gather (Reduce), user-defined: accumulate information about the neighborhood, Gather(...) → Σ, combined with a parallel sum Σ1 + Σ2 + Σ3 + … • Apply, user-defined: apply the accumulated value to the center vertex, Apply(Y, Σ) → Y'. • Scatter, user-defined: update adjacent edges and vertices, and activate neighbors.
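
A minimal single-machine sketch of this decomposition (illustrative names; in the real system the gather reduction is distributed across the machines holding a vertex's edges):

    # One GAS step for vertex v: gather over in-edges with an associative
    # sum, apply the accumulator to v, scatter along out-edges, and return
    # the neighbors that were activated.
    def gas_step(v, in_nbrs, out_nbrs, gather, total, apply, scatter):
        acc = None
        for j in in_nbrs[v]:                              # Gather (Reduce)
            g = gather(j, v)
            acc = g if acc is None else total(acc, g)
        apply(v, acc)                                     # Apply
        return [j for j in out_nbrs[v] if scatter(v, j)]  # Scatter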

  45. PageRank in PowerGraph:

      PowerGraph_PageRank(i):
          Gather(j → i):   return w_ji * R[j]
          sum(a, b):       return a + b
          Apply(i, Σ):     R[i] = 0.15 + Σ
          Scatter(i → j):  if R[i] changed then trigger j to be recomputed
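
The same vertex-program written as plain functions that plug into the sketch after slide 44 (R, w, and the tolerance EPS are illustrative assumptions; R is presumed initialized for every vertex):

    # PageRank as gather/total/apply/scatter functions (illustrative).
    R, w, last_delta, EPS = {}, {}, {}, 1e-3

    def gather(j, i):
        return w[(j, i)] * R[j]            # neighbor's weighted rank

    def total(a, b):
        return a + b                       # commutative, associative sum

    def apply(i, acc):
        new_rank = 0.15 + (acc or 0.0)
        last_delta[i] = abs(new_rank - R[i])
        R[i] = new_rank

    def scatter(i, j):
        return last_delta[i] > EPS         # re-run j only if R[i] changed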

  46. Distributed Execution of a PowerGraph Vertex-Program: a vertex is split into one master and several mirrors, one per machine that holds its edges. Each machine runs Gather locally to produce a partial sum Σk; the partial sums (Σ1 + Σ2 + Σ3 + Σ4) are combined at the master, which runs Apply and sends the updated value Y' to the mirrors, and Scatter then runs on each machine.

  47. Dynamically Maintain the Cache. • Repeated calls to gather waste computation when few neighbor values change. • Solution: cache the previous gather result Σ and update it incrementally with a posted delta, Σ' = Σ + Δ, rather than re-summing every neighbor.
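
A minimal sketch of that cached, incremental gather (illustrative, assuming an additive accumulator; not PowerGraph's actual cache code):

    # Cached gather: posted deltas are folded into the cached accumulator,
    # so sigma' = sigma + delta replaces a full re-gather.
    gather_cache = {}                         # vertex -> cached sigma

    def post_delta(i, delta):
        if i in gather_cache:
            gather_cache[i] += delta          # incremental O(1) update

    def cached_gather(i, full_gather):
        if i not in gather_cache:
            gather_cache[i] = full_gather(i)  # rebuild only when missing
        return gather_cache[i]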

  48. PageRank in PowerGraph with delta caching:

      PowerGraph_PageRank(i):
          Gather(j → i):   return w_ji * R[j]
          sum(a, b):       return a + b
          Apply(i, Σ):     R[i] = 0.15 + Σ
          Scatter(i → j):  if R[i] changed then
                               trigger j to be recomputed
                               Post Δj = w_ij * (R[i]_new - R[i]_old)

      Reduces the runtime of PageRank by 50%!

  49. Caching to Accelerate Gather: when a mirror's cached partial sum Σk is still valid, its gather invocation is skipped and the cached value is reused in the combined sum at the master.

  50. Minimizing Communication in PowerGraph. Communication is linear in the number of machines each vertex spans, and a vertex-cut minimizes the number of machines each vertex spans. • Percolation theory suggests that power-law graphs have good vertex cuts [Albert et al. 2000].
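
A minimal sketch of measuring that communication cost for a given edge placement: the average number of machines each vertex spans, often called the replication factor. The edge-assignment format here is an assumption for illustration.

    # Replication factor: average number of machines each vertex spans.
    def replication_factor(edge_assignment):
        # edge_assignment: [((u, v), machine), ...]
        spans = {}
        for (u, v), machine in edge_assignment:
            spans.setdefault(u, set()).add(machine)
            spans.setdefault(v, set()).add(machine)
        return sum(len(ms) for ms in spans.values()) / len(spans)

    # Vertex "a" spans machines 0 and 1; "b" and "c" span one machine each.
    print(replication_factor([(("a", "b"), 0), (("a", "c"), 1)]))  # ~1.33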
