
DPS – Dynamic Parallel Schedules






Presentation Transcript


  1. DPS – Dynamic Parallel Schedules Sebastian Gerlach, R.D. Hersch Ecole Polytechnique Fédérale de Lausanne

  2. Context Cluster server systems: applications dynamically release or acquire resources at run time

  3. Challenges Execution graph made of compositional Split-Compute-Merge constructs (diagram: split into subtasks → compute → merge results, shown as a task farm and its generalization)

  4. Challenges Creation of compositional Split-Operation-Merge constructs at run time

  5. Challenges Reusable parallel components at run time (diagram: a parallel application calling a parallel service component)

  6. Overview • DPS design and concepts • Implementation • Application examples

  7. What is DPS? • High-level abstraction for specifying parallel execution behaviour • Based on generalized split-merge constructs • Expressed as a directed acyclic graph of sequential operations • The data passed along the graph are custom Data Objects (diagram: Read File Request → Read File → Data Buffer → Process Data)

  8. A Simple Example • The graph describing the application flow is called a flowgraph • It illustrates three of the basic DPS operations: Split Operation, Leaf Operation, Merge Operation • Implicit pipelining of operations (diagram: Split → three parallel Read Data → Process Data branches → Merge)

  9. Split/Merge Operation Details • The Split operation takes one data object as input, and generates multiple data objects as output • Each output data object represents a subtask to execute • The results of the tasks are collected later in a corresponding Merge operation • Both constructs can contain user-provided code, allowing for high flexibility in their behavior

  10. Mapping concepts • The flowgraph only describes the sequencing of an application • To enable parallelism, the elements of the graph must be mapped onto the processing nodes • Operations are mapped onto Threads, grouped in Thread Collections • Selection of a Thread within a Thread Collection is performed using routing functions

  11. Mapping of operations to threads (diagram: the application flowgraph Split → Read Data → Process Data → Merge mapped onto a Main Thread Collection (1 thread), a ReadData Thread Collection (3 threads), a ProcessData Thread Collection (3 threads), and back to the Main Thread Collection) The Thread Collections are created and mapped at runtime by the application

  12. Routing • Routing functions are attached to operations • Routing functions take as input the Data Object to be routed and the target Thread Collection • Routing functions are user-specified • Simple: round robin, etc. • Data dependent: route to the node where the specific data needed for the computation is available

  13. Threads • DPS Threads are also data structures • They can be used to store local data for computations on distributed data structures • Example: n-Body, matrix computations • DPS Threads are mapped onto threads provided by the underlying Operating System (e.g. one DPS Thread = one OS Thread)

  14. Flow control and load balancing • DPS can provide flow control between split-merge pairs • Limits the number of data objects along a given part of the graph • DPS also provides routing functions for automatic load balancing in stateless applications (diagram: flow control applied between the Split and Merge surrounding Process Data)

  15. Runtime • All constructs (schedules, mappings) are specified and created at runtime → Basis for graceful degradation → Basis for data-dependent dynamic schedules

  16. Callable schedules • Complete schedules can be inserted into other schedules. These schedules may • be provided by a different application • run on other processing nodes • run on other operating systems → Schedules become reusable parallel components

  17. Callable schedules (diagram: multiple instances of User App #1 and User App #2 all calling a shared Striped File System parallel service)

  18. Execution model • A DPS kernel daemon is running on all participating machines • The kernel is responsible for starting applications (similar to rshd) (diagram: four nodes, each running a Kernel; App #1 and App #2 each span three of the nodes, and the SFS service runs on all four)

  19. Implementation • DPS is implemented as a C++ library • No language extensions, no preprocessor • All DPS constructs are represented by C++ classes • Applications are developed by deriving custom objects from the provided base classes

  20. Operations • Custom operations derive from provided base classes (SplitOperation, LeafOperation, …) • Developer overloads execute method to provide custom behavior • Output data objects are sent with postToken • Classes are templated for the input and output token types to ensure graph coherence and type checking

  21. Operations
class Split : public SplitOperation<MainThread,tv1(SplitInToken),tv1(SplitOutToken)>
{
public:
  void execute(SplitInToken *in)
  {
    Int32 j=0;                                 // number of sub-task tokens posted
    for(Int32 i=0;i<IMAGE_SIZE_Y;i+=32)
      postToken(new SplitOutToken(i,32,j++));  // one token per band of 32 scan lines
    dump(0,"Finished split, %d tokens posted",j);
  }
  IDENTIFYOPERATION(Split);
};
REGISTEROPERATION(Split);
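By analogy, a corresponding merge operation could be sketched as follows. This is an illustrative sketch only: the slides name the SplitOperation and LeafOperation base classes, so the MergeOperation name, its template parameters, the MergeOutToken type and the way completion of the split-merge pair is detected are all assumptions made to mirror the Split example above.

// Illustrative sketch, not taken from the slides: a merge assembling the
// bands produced by the Split example above.
class Merge : public MergeOperation<MainThread,tv1(MergeInToken),tv1(MergeOutToken)>
{
public:
  void execute(MergeInToken *in)
  {
    // assumed to be invoked once per incoming data object: copy in->pixels
    // into the assembled image at scan line in->scanLine, and post a
    // MergeOutToken once all bands have arrived (details omitted)
  }
  IDENTIFYOPERATION(Merge);
};
REGISTEROPERATION(Merge);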

  22. Threads • Custom thread classes derive from the provided Thread class • One instance of this class is created for every thread in a collection • This class can contain data elements • They are preserved as long as the thread exists • Operations executing on a thread can access thread-local variables through the thread member

  23. Threads
class ProcessThread : public Thread
{
public:
  Buffer<UInt8> world[2];
  Int32 firstLine;
  Int32 lineCount;
  Int32 active;
  IDENTIFY(ProcessThread);
};
REGISTER(ProcessThread);
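A leaf operation running on such a thread accesses this local state through the thread member mentioned on slide 22. The sketch below is hypothetical: the Process class, its LeafOperation template parameters (assumed to follow the same pattern as the Split example) and the omitted stencil code are illustrative only.

// Hypothetical sketch, not taken from the slides: a leaf operation computing
// one Game of Life generation on the lines stored in its thread's local data.
class Process : public LeafOperation<ProcessThread,tv1(SplitOutToken),tv1(MergeInToken)>
{
public:
  void execute(SplitOutToken *in)
  {
    // thread points to the ProcessThread instance this operation runs on
    Buffer<UInt8> &src = thread->world[thread->active];
    Buffer<UInt8> &dst = thread->world[1-thread->active];
    // ... compute lines [thread->firstLine, thread->firstLine+thread->lineCount)
    //     from src into dst (stencil code omitted) ...
    thread->active = 1-thread->active;       // swap world buffers
    MergeInToken *out = new MergeInToken();  // fill out->pixels, out->scanLine, ...
    postToken(out);
  }
  IDENTIFYOPERATION(Process);
};
REGISTEROPERATION(Process);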

  24. Routing • Routing functions derive from base class Route • Overloaded method route selects target thread in collection based on input data object • DPS also provides built-in routing functions with special capabilities

  25. Routing
class RoundRobinRoute : public Route<SplitOutToken>
{
public:
  Int32 route(SplitOutToken *currentToken)
  {
    return currentToken->target%threadCount();
  }
  IDENTIFY(RoundRobinRoute);
};
REGISTER(RoundRobinRoute);
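Routing can also be data dependent, as mentioned on slide 12. The sketch below is hypothetical: it assumes the SplitOutToken carries the first scan line of its band in a scanLine field and that 32-line bands are distributed over the threads in order.

// Hypothetical sketch, not taken from the slides: a data-dependent routing
// function sending each band to the thread assumed to hold its data.
class DataLocalityRoute : public Route<SplitOutToken>
{
public:
  Int32 route(SplitOutToken *currentToken)
  {
    // band k (32 lines per band) lives on thread k modulo the collection size
    return (currentToken->scanLine/32)%threadCount();
  }
  IDENTIFY(DataLocalityRoute);
};
REGISTER(DataLocalityRoute);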

  26. Data Objects • Data Objects derive from the base class Token • They are user-defined C++ objects • Serialization and deserialization are automatic • Data Objects are not serialized when passed to another thread on the same machine (SMP optimization)

  27. Data Objects
class MergeInToken : public ComplexToken
{
public:
  CT<Int32> sizeX;
  CT<Int32> sizeY;
  Buffer<UInt8> pixels;
  CT<Int32> scanLine;
  CT<Int32> lineCount;
  MergeInToken() { }
  IDENTIFY(MergeInToken);
};
REGISTER(MergeInToken);
• All used types derive from a common base class (Object) • The class is serialized by walking through all its members
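The SplitOutToken used in the Split and routing examples is not shown in the transcript. A plausible declaration following the same member pattern as MergeInToken might look as follows; the field names and the three-argument constructor are assumptions inferred from the call new SplitOutToken(i,32,j) in the Split example.

// Hypothetical sketch, not taken from the slides: the token posted by the
// Split operation, declared with the same member pattern as MergeInToken.
class SplitOutToken : public ComplexToken
{
public:
  CT<Int32> scanLine;   // first scan line of the band
  CT<Int32> lineCount;  // number of lines in the band (32 in the Split example)
  CT<Int32> target;     // band index, used by the routing functions
  SplitOutToken() { }
  SplitOutToken(Int32 s,Int32 c,Int32 t) { scanLine=s; lineCount=c; target=t; }
  IDENTIFY(SplitOutToken);
};
REGISTER(SplitOutToken);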

  28. Thread collections • Creation of an abstract thread collection:
Ptr<ThreadCollection<ComputeThread> > computeThreads =
  new ThreadCollection<ComputeThread>("proc");
• Mapping of thread collection to nodes • Mapping is specified as a string listing nodes to use for the threads:
computeThreads->map("nodeA*2 nodeB");  // 2 threads on nodeA, 1 thread on nodeB

  29. Graph construction
FlowgraphNode<Split,MainRoute> s(mtc);
FlowgraphNode<Process1,RoundRobinRoute> p1(ptc);
FlowgraphNode<Process2,RoundRobinRoute> p2(ptc);
FlowgraphNode<Merge,MainRoute> m(mtc);
FlowgraphBuilder gb;
gb = s >> p1 >> m;   // branch 1
gb += s >> p2 >> m;  // branch 2
Ptr<Flowgraph> g=new Flowgraph(gb,"myGraph");

  30. Application examples: Game of Life • Requires neighborhood exchanges • Uses thread local storage for world state (diagram: neighbor exchange between workers i-1, i, i+1 and j-1, j, j+1 coordinated by the master, followed by the computation)

  31. Improved implementation • Parallelize data exchange and computation of non-boundary cells (diagram: boundary exchange between neighboring workers overlapped with computation of the non-boundary cells)

  32. Performance

  33. Interapplication communication (diagram: a client application's display graph connected to the Game of Life application's collection graph and computation graph)

  34. LU decomposition (diagram: an LU block streaming requests to trsm blocks) Compute the LU factorization of the block, stream out trsm requests. Compute trsm, perform row flipping, return a notification.

  35. LU decomposition (diagram: an LU block and a grid of mult, store blocks) As soon as the first column is complete, perform the LU factorization. Stream out trsm while the other columns complete the multiplication. Send row flips to previous columns to adjust for pivoting. Collect notifications, stream out multiplication orders. Multiply, subtract and store the result, send a notification.

  36.–47. LU decomposition (animation frames of the same flowgraph: lu, trsm, xchg and Mul operations, annotated 9x and 4x, applied step by step to the blocks A11, A12 and A21, producing T12 and L21)

  48.–50. Multiplication graph = Mul (animation frames: Split → Collect operands → Multiply → Store result, distributed over processors P1–P4)
