
AutomaDeD: Scalable Root Cause Analysis



Presentation Transcript


  1. AutomaDeD: Scalable Root Cause Analysis • ParaDyn/DynInst Week • March 26, 2012 • Todd Gamblin • Work with Ignacio Laguna and Saurabh Bagchi at Purdue University

  2. AutomaDeD (DSN ’10): Fault detection & diagnosis in MPI applications • Collect traces using MPI wrappers, before and after each call • Collect call-stack info • Collect time measurements • [Figure: each of Task1…TaskN in the MPI application is intercepted by an MPI profiler wrapper, e.g. MPI_Send(…) { tracesBeforeCall(); PMPI_Send(…); tracesAfterCall(); … }, and its trace becomes a per-task Semi-Markov Model (SMM); the anomalous phase, task, and code region are found offline] (see the wrapper sketch below)
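A minimal sketch of the PMPI interposition shown in the figure. The helpers tracesBeforeCall() and tracesAfterCall() are stand-ins taken from the slide; in the real tool they would capture the call stack and a timestamp for the per-task model, and the actual instrumentation is more elaborate.

    /* PMPI interposition: the tool library defines MPI_Send itself and
       forwards to the real MPI through PMPI_Send, recording state
       immediately before and after the call.                          */
    #include <mpi.h>

    /* hypothetical helpers: a real tool would record the call stack and
       a timestamp here to feed the per-task Semi-Markov Model          */
    static void tracesBeforeCall(void) { /* record state + time */ }
    static void tracesAfterCall(void)  { /* record state + time */ }

    int MPI_Send(const void *buf, int count, MPI_Datatype type,
                 int dest, int tag, MPI_Comm comm)
    {
        tracesBeforeCall();                             /* state: before Send */
        int rc = PMPI_Send(buf, count, type, dest, tag, comm);
        tracesAfterCall();                              /* state: after Send  */
        return rc;
    }

The same pattern applies to every wrapped MPI call: the application links against the wrapper library, and unmodified calls still reach the MPI implementation through the PMPI entry points.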

  3. Modeling Timing and Control-Flow Structure • States: (a) code of an MPI call, or (b) code between MPI calls • Edges: transition probability and time distribution • [Figure: per-task SMMs built by the MPI profiler; an edge such as Send() → After_Send() is labeled with its transition probability (e.g. [0.6]) and a time distribution; the anomalous phase, task, and code region are found offline] (a rough data-structure sketch follows)
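As a rough illustration only, not AutomaDeD's actual data layout, the model on this slide could be represented with one state per code region and edges that carry a transition probability plus time-distribution parameters (the mean/variance fields are an assumed parameterization):

    /* Sketch of a Semi-Markov Model: states are code regions,
       edges carry a transition probability and a time distribution. */
    typedef struct smm_edge {
        int    dst_state;     /* index of the destination state             */
        double probability;   /* P(next state = dst | current state)        */
        double mean_time;     /* assumed parameters of the edge's           */
        double var_time;      /*   time distribution                        */
    } smm_edge_t;

    typedef struct smm_state {
        const char *code_region;  /* e.g. "MPI_Send" or "After_Send"        */
        int         num_edges;
        smm_edge_t *edges;
    } smm_state_t;

    typedef struct smm {
        int          num_states;
        smm_state_t *states;
    } smm_t;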

  4. Detection of Anomalous Phase / Task / Code Region • Dissimilarity between models: Diss(SMM1, SMM2) ≥ 0 • Cluster the models • Find unusual cluster(s) • Use a known ‘normal’ clustering configuration • [Figure: in a master-slave program, the per-task models M1–M12 are clustered; the unusual cluster identifies the faulty tasks, phase, and code region] (a hedged dissimilarity sketch follows)
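A hedged sketch of a pairwise dissimilarity over transition probabilities, assuming both models share the same state numbering and using the smm_t sketch from slide 3. The Diss() actually used by AutomaDeD also accounts for the edge time distributions and is defined differently; this only illustrates the "Diss(SMM1, SMM2) ≥ 0" idea.

    #include <math.h>

    /* Sum of absolute differences in transition probabilities over the
       edges of model a; edges that exist only in b are ignored here for
       brevity.  The result is always >= 0.                              */
    double smm_dissimilarity(const smm_t *a, const smm_t *b)
    {
        double diss = 0.0;
        int states = (a->num_states < b->num_states) ? a->num_states
                                                     : b->num_states;
        for (int s = 0; s < states; s++) {
            const smm_state_t *sa = &a->states[s];
            const smm_state_t *sb = &b->states[s];
            for (int i = 0; i < sa->num_edges; i++) {
                double pb = 0.0;   /* missing edge in b counts as probability 0 */
                for (int j = 0; j < sb->num_edges; j++)
                    if (sb->edges[j].dst_state == sa->edges[i].dst_state)
                        pb = sb->edges[j].probability;
                diss += fabs(sa->edges[i].probability - pb);
            }
        }
        return diss;
    }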

  5. Making AutomaDeD scalable: What was necessary? • Replace offline clustering with fast online clustering • Offline clustering is slow • Requires data to be aggregated on a single node

  6. AutomaDeD uses CAPEK for scalable clustering • [Figure: CAPEK pipeline across tasks P0–P15: random sample → gather samples → cluster samples → broadcast representatives → find nearest representatives → compute “best” clustering → classify objects into clusters] • The CAPEK algorithm scales logarithmically with the number of processors in the system • Runs in less than half a second at 131k cores • Feasible to cluster on trans-petascale and exascale systems • CAPEK uses aggressive sampling techniques • Sampling induces some error in the results, but the error is only 4-7% (see the skeleton below)
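A skeletal view of the gather/cluster/broadcast/classify pattern in the figure. The object size, number of representatives, and the cluster_sample()/nearest_rep() routines are placeholders, not the real CAPEK code, and for brevity this gathers one object from every task, whereas CAPEK gathers only a random sample to keep the cost logarithmic.

    #include <mpi.h>
    #include <stdlib.h>

    #define OBJ_SIZE 64   /* assumed size of a serialized per-task model */
    #define K        4    /* assumed number of representatives (medoids) */

    /* Placeholders, not the real CAPEK routines. */
    void cluster_sample(const char *sample, int n, char reps[K][OBJ_SIZE]);
    int  nearest_rep(const char obj[OBJ_SIZE], char reps[K][OBJ_SIZE]);

    int capek_skeleton(const char my_obj[OBJ_SIZE], MPI_Comm comm)
    {
        int rank, size;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);

        /* Gather the (sampled) objects on a root task. */
        char *sample = (rank == 0) ? malloc((size_t)size * OBJ_SIZE) : NULL;
        MPI_Gather((void *)my_obj, OBJ_SIZE, MPI_CHAR,
                   sample, OBJ_SIZE, MPI_CHAR, 0, comm);

        /* Root clusters the sample and picks K representatives. */
        char reps[K][OBJ_SIZE];
        if (rank == 0) cluster_sample(sample, size, reps);

        /* Broadcast representatives; each task classifies itself. */
        MPI_Bcast(reps, K * OBJ_SIZE, MPI_CHAR, 0, comm);
        if (rank == 0) free(sample);
        return nearest_rep(my_obj, reps);   /* this task's cluster id */
    }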

  7. Making AutomaDeD scalable: What was necessary? • Replace offline clustering with fast online clustering • Scalable outlier detection • By nature, sampling is unlikely to include outliers in the results • Need something to compensate for this • Can’t rely on the “small” cluster anymore

  8. We have developed scalable outlier detection techniques using CAPEK • The algorithm is fully distributed • Doesn’t require a central component to perform the analysis • Complexity: O(log #tasks) • Perform clustering using CAPEK • Find the distance from each task to its cluster medoid • Normalize distances using the within-cluster standard deviation • Find the top-k outliers by sorting tasks on the largest distances • [Figure: an outlier task lies far from its cluster medoid] (a simplified sketch follows)
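A hedged, fully distributed sketch of the normalized-distance step, assuming each task already knows its distance to its cluster medoid and the within-cluster standard deviation. Finding the full top-k set requires a distributed sort, so for brevity this returns only the single strongest candidate via MPI_MAXLOC.

    #include <mpi.h>

    /* Each task turns its distance to the cluster medoid into a z-score;
       MAXLOC then picks the (score, rank) pair with the largest score.  */
    typedef struct { double score; int rank; } scored_t;

    int most_anomalous_task(double dist_to_medoid, double cluster_stddev,
                            MPI_Comm comm)
    {
        scored_t mine, best;
        MPI_Comm_rank(comm, &mine.rank);
        mine.score = (cluster_stddev > 0.0) ? dist_to_medoid / cluster_stddev
                                            : 0.0;   /* normalized distance */

        MPI_Allreduce(&mine, &best, 1, MPI_DOUBLE_INT, MPI_MAXLOC, comm);
        return best.rank;   /* rank of the strongest outlier candidate */
    }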

  9. Making AutomaDeD scalable: What was necessary? • Replace offline clustering with fast online clustering • Scalable outlier detection • Replace glibc backtrace with the LLNL callpath library • glibc backtrace functions are slow and require parsing • The callpath library uses a module-offset representation • Can transfer canonical callpaths at runtime • Can do fast callpath comparison (pointer compare) (see the sketch below)
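A conceptual sketch of the module-offset and pointer-compare ideas, not the LLNL callpath library's actual API: frames are stored as (module, offset) pairs so they stay meaningful across processes, and if every distinct callpath is interned in a shared table, equality reduces to a single pointer comparison.

    #include <stdint.h>
    #include <stdbool.h>

    /* One frame: which shared object/executable, plus the offset inside
       it.  Unlike a raw return address, this is comparable across
       processes and address-space layouts.                             */
    typedef struct {
        const char *module;   /* e.g. "libmpi.so" or the executable name */
        uint64_t    offset;   /* return address minus module load base   */
    } frame_t;

    typedef struct {
        int      depth;
        frame_t *frames;
    } callpath_t;

    /* If every distinct callpath is interned (stored once and reused),
       two callpaths are equal iff their pointers are equal.            */
    static inline bool callpath_equal(const callpath_t *a, const callpath_t *b)
    {
        return a == b;   /* fast pointer compare on canonical callpaths */
    }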

  10. Making AutomaDeD scalable: What was necessary? • Replace offline clustering with fast online clustering • Scalable outlier detection • Replace glibc backtrace with the LLNL callpath library • glibc backtrace functions are slow and require parsing • The callpath library uses a module-offset representation • Can transfer canonical callpaths at runtime • Can do fast callpath comparison (pointer compare) • Graph compression for Markov Models • MMs contain a lot of noise • Noisy models can slow comparison and obscure problems

  11. Too Many Graph Edges: The Curse of Dimensionality • [Figure: sample SMM graph of a NAS BT run, ~200 edges in total] • Too many edges = too many dimensions • Distances between points tend to become almost equal • Poor performance of the clustering analysis

  12. Example of Graph Compression • [Figure: a Semi-Markov Model built from sample code alongside its equivalent compressed Semi-Markov Model] (a minimal compression sketch follows)
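A minimal sketch of one simple form of graph compression: identifying states that sit on a linear chain (exactly one predecessor and one successor) and therefore add no branching information, so a chain A → B → C can be collapsed into a single edge A → C. AutomaDeD's actual compression and noise filtering may differ, and the edge rewiring that performs the merge is omitted here. Uses the smm_t sketch from slide 3.

    #include <stdlib.h>

    /* Mark states with in-degree 1 and out-degree 1 as collapsible and
       return how many states a chain-collapse would remove.            */
    int find_collapsible_states(const smm_t *m, int *collapsible)
    {
        int *in_degree = calloc((size_t)m->num_states, sizeof(int));
        int  count = 0;

        for (int s = 0; s < m->num_states; s++)
            for (int e = 0; e < m->states[s].num_edges; e++)
                in_degree[m->states[s].edges[e].dst_state]++;

        for (int s = 0; s < m->num_states; s++) {
            collapsible[s] = (in_degree[s] == 1 && m->states[s].num_edges == 1);
            count += collapsible[s];
        }
        free(in_degree);
        return count;
    }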

  13. We are developing AutomaDeD into a framework for many types of distributed analysis • [Figure: pipeline for an abnormal run: PMPI measurement creates an on-node control-flow model → state compression/noise reduction yields a concise model of control flow → scalable outlier detection (KNN, CAPEK)] • Our graph compression and scalable outlier detection enable automatic bug isolation in ~6 seconds with 6,000 tasks on Intel hardware and ~18 seconds at 103,000 cores on BG/P • Logarithmic scaling implies that billions of tasks would still take less than 10 seconds • We are developing new on-node performance models to target resilience problems as well as debugging

  14. We have added probabilistic root cause diagnosis to AutomaDeD • Scalable AutomaDeD can tell us: • Anomalous processes • Anomalous transitions in the MM • We want to know what code is likely to have caused the problem • Need more sophisticated analysis for this • Need distributed dependence information to understand distributed hangs

  15. Progress Dependence • A very simple notion of dependence through MPI operations • Supports collectives and point-to-point • For MPI_Isend/MPI_Irecv, the dependence runs through the completion operation (MPI_Wait, MPI_Waitall, etc.) • Similar to postdominance, but there is no requirement that there be an exit node • Exit nodes may not be present in the dynamic call tree, especially if there is a fault (see the example below)
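A tiny hypothetical example (not taken from any real application) of the dependence this captures: task 1 blocks in MPI_Wait for a message that task 0 never sends because task 0 is stuck in a buggy loop, so task 1's progress depends on task 0, and task 0 is the least-progress task the analysis should point at.

    #include <mpi.h>

    /* Task 1 posts a receive and waits; task 0 should send but is stuck
       in an unrelated buggy loop.  Task 1's progress depends on task 0
       through the MPI_Wait completion, so task 0 has least progress.   */
    void progress_dependence_example(MPI_Comm comm)
    {
        int rank, buf = 0;
        MPI_Comm_rank(comm, &rank);

        if (rank == 1) {
            MPI_Request req;
            MPI_Status  st;
            MPI_Irecv(&buf, 1, MPI_INT, 0, 0, comm, &req);
            MPI_Wait(&req, &st);        /* blocks: progress depends on task 0 */
        } else if (rank == 0) {
            while (buf == 0) { /* deliberately buggy spin, never terminates */ }
            MPI_Send(&buf, 1, MPI_INT, 1, 0, comm);   /* never reached */
        }
    }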

  16. Building the Progress Dependence Graph (PDG) • We build the PDG using a parallel reduction • We estimate dependence edges between tasks using control-flow statistics from our Markov Models • The PDG allows us to find the least-progress (LP) process (a simplified reduction sketch follows)
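A drastically simplified sketch of the parallel-reduction flavor only: if each task could summarize its progress as a single ordinal number (a big simplification of the Markov-Model-based estimate on this slide), the least-progress task would fall out of one MPI_MINLOC reduction. The real PDG construction also estimates the dependence edges between tasks, which this does not show.

    #include <mpi.h>

    /* Each task contributes (progress, rank); MINLOC returns the pair
       with the smallest progress, i.e. the least-progress (LP) task.   */
    typedef struct { int progress; int rank; } progress_t;

    int find_least_progress_task(int my_progress, MPI_Comm comm)
    {
        progress_t mine, lp;
        MPI_Comm_rank(comm, &mine.rank);
        mine.progress = my_progress;

        MPI_Allreduce(&mine, &lp, 1, MPI_2INT, MPI_MINLOC, comm);
        return lp.rank;   /* candidate LP task to examine for the root cause */
    }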

  17. Our distributed pipeline enables fast root cause analysis • Full process has O(log(P)) complexity • Distributed analysis requires < 0.5 sec on 32,768 processes • Gives programmers insight into the exact lines that could have caused a hang. • We use DynInst’s backward slicing at the root to find likely causes

  18. We have used AutomaDeD to find performance problems in a real code • Discovered the cause of an elusive hang in the I/O phase of the ddcMD molecular dynamics simulation at scale • The hang only occurred with 7,000+ processes
