
Fault-tolerant Adaptive Divisible Load Scheduling


Presentation Transcript


  1. Fault-tolerant Adaptive Divisible Load Scheduling Xuan Lin, Sumanth J. V. Acknowledgement: a few of the DLT slides are from Thomas Robertazzi's presentation

  2. Outline • Introduction (DLT) • Adaptive Divisible Load Scheduling • Simulation • Conclusion

  3. What is a Divisible Load? • A computational and networkable load that is arbitrarily partitionable (divisible) amongst processors and links (see the sketch below). • There are no precedence relations between subtasks. • Communication cost between the head node and the processors must be taken into account.
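A minimal, self-contained sketch (not from the slides) of what "arbitrarily partitionable" means in practice: a large summation split into arbitrary fractions, with each piece processed independently. The array size and fractions are invented for illustration.

```python
# Hypothetical example: a divisible load (summing a large array) cut into
# arbitrary fractions with no precedence relations between the pieces.
workload = list(range(1_000_000))       # illustrative divisible load
alphas = [0.4, 0.35, 0.25]              # arbitrary load fractions, summing to 1

chunks, start = [], 0
for a in alphas:
    end = start + int(a * len(workload))
    chunks.append(workload[start:end])  # the piece shipped to one processor
    start = end

# Each chunk can be processed independently and the partial results merged
# at the head node; only the communication of the chunks couples the nodes.
partial_sums = [sum(c) for c in chunks]
print(sum(partial_sums) == sum(workload))   # True
```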

  4. Timing Diagram

  5. m+1 unknowns vs. m+1 equations • Recursive equations • Normalization equation (both shown below)
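The equations on this slide appear only as images in the original deck. A standard reconstruction, assuming Robertazzi's usual star-network notation: α_i is the load fraction given to processor i, w_i its inverse computing speed, z_i the inverse speed of the link to it, and T_cp, T_cm the unit computation and communication times. Requiring all processors to finish at the same instant gives m recursive equations, and the fractions must sum to one:

```latex
% Reconstructed standard DLT star-network equations (notation assumed,
% since the original slide shows them only as figures).
\begin{align}
  \alpha_i w_i T_{cp} &= \alpha_{i+1} z_{i+1} T_{cm} + \alpha_{i+1} w_{i+1} T_{cp},
      \qquad i = 0, 1, \dots, m-1, \\
  \sum_{i=0}^{m} \alpha_i &= 1 .
\end{align}
```

Together these are m+1 linear equations in the m+1 unknown fractions α_0, ..., α_m, which is why the optimal split can be solved for directly.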

  6. Issues when applying the theory • How do we determine the parameters at run time? • The parameters may change during the computation. • Solution: an adaptive strategy

  7. Condor Grid Environment • Existing Condor lab pool at UNL. • The processing capability of available nodes can vary significantly over time • Consider anti-virus scans, OS updates. • Short-term variations can be ignored. • Network dynamics can be quite significant. • Dynamic number of processors.

  8. Condor Grid Environment • Unpredictable availability. • Job suspension/migration. • Likely failure. • Node reboot/crash.

  9. Adapting DLT to Condor • DLT assumes that the execution time for a fixed data set is constant on a given processor. • In Condor, the predicted execution time can differ significantly from the real execution time.

  10. Adaptive Divisible Load Scheduling [D. Ghose et al., 2005] • Two phases: a probing phase and an optimal load distribution phase • Probing and Delayed Distribution (PDD)

  11. Probing and Delayed Distribution (PDD) • The total workload is divided into p equal pieces. • The first piece is used for probing. • The first piece is further divided into n equal pieces, and each processor is assigned one of them. • The second phase does not start until feedback has been received from every processor. • When the second phase starts, all the system parameters are known, so DLT gives the optimal distribution (see the sketch below).
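A minimal, self-contained sketch of the PDD idea (not the authors' code): the proportional-to-speed split in the second phase is a stand-in for the full DLT solution, and the speed values are invented.

```python
def pdd_schedule(total_load, p, speeds):
    """Sketch of Probing and Delayed Distribution for len(speeds) workers."""
    n = len(speeds)
    piece = total_load / p                # total load cut into p equal pieces
    probe = piece / n                     # first piece split evenly for probing

    # Delayed distribution: wait for feedback from *every* worker, i.e. until
    # the slowest probe has finished -- this is where PDD can idle badly.
    wait = max(probe / s for s in speeds)

    # Second phase: with the parameters known, distribute the remaining load.
    # Here the split is simply proportional to measured speed; the real PDD
    # uses the DLT equations, which also account for link speeds.
    remaining = total_load - piece
    chunks = [remaining * s / sum(speeds) for s in speeds]
    return probe, wait, chunks

# Example with one slow worker: the probing phase stalls waiting for it.
print(pdd_schedule(total_load=1000, p=100, speeds=[4.0, 4.0, 0.5]))
```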

  12. Probing and Delayed Distribution (PDD)

  13. Limitation of PDD • Most current work assumes a cluster computing environment • Node failure is ignored. • Dynamic changes in the number of processors are ignored. • Once parameter estimation is completed, a static environment is assumed. • Not truly adaptive. • If one or several processors return their feedback significantly more slowly than the others, the probing phase suffers a lot of idle time.

  14. Our Algorithm • I1 – nodes that have sent back feedback and receive an optimal distribution in the current round. • I2 – nodes that have sent back feedback but are not given an optimal distribution this round; they may receive one in a future round. • I3 – nodes that have not sent back feedback yet. • Two phases.

  15. Our Algorithm – Probing Phase • Initially, I1 and I2 are empty; I3 contains all the available processors. • The total workload is divided into p equal pieces. • Step 1: One piece is further divided into n equal pieces, and one is sent to each processor. • Step 2: When the distribution is complete, check whether any feedback has arrived. If not, go to Step 1.

  16. Our Algorithm – Optimal Distribution Phase • Step 3: Suppose we receive k new feedback messages. If this is the first feedback received, simply add these processors to I1; otherwise, go to Step 4. • Step 4: From the feedback, calculate the CPU and link speeds of these processors and their available times. (These processors may not be available immediately, since several probing pieces may already have been sent to them during the probing phase.)

  17. Our Algorithm – Optimal Distribution Phase • Step 5: If a processor's available time is smaller than the current maximum available time (defined in Step 6), add it to group I1; otherwise, add it to group I2. • Step 6: Let K be the current size of group I1. Update the parameters of these K processors (CPU speed and link speed), recompute their available times, record the largest one as the current maximum available time, and then perform the optimal distribution over these K processors. Repeat this step. (A sketch of the group bookkeeping follows.)
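A minimal, self-contained sketch of the group bookkeeping in Steps 3–6 (not the authors' implementation; the node names and available-time values are invented for illustration):

```python
def classify_feedback(feedback, i1, i2, i3, max_available):
    """Steps 3-5: move responders out of I3 and into I1 or I2."""
    for node, available_time in feedback:
        i3.discard(node)
        if available_time <= max_available:
            i1.add(node)      # available soon enough: schedule it this round
        else:
            i2.add(node)      # too late for this round: hold for a later one

i1, i2, i3 = set(), set(), {"n1", "n2", "n3", "n4"}

# Step 3: the very first batch of responders goes straight into I1.
classify_feedback([("n1", 5.0), ("n2", 6.0)], i1, i2, i3, max_available=float("inf"))
max_available = 6.0   # Step 6: largest available time among the I1 nodes just served

# Steps 4-5: later responders are split against that bound.
classify_feedback([("n3", 4.5), ("n4", 9.0)], i1, i2, i3, max_available)
print(i1, i2, i3)     # I1 = {n1, n2, n3}, I2 = {n4}, I3 = empty
```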

  18. Simple Illustration

  19. Our Algorithm • A Scheduling Point is defined as the moment the distribution for the current round is finished. • Accept New Nodes: At each Scheduling Point, check whether any new processors have become available. If so, send probing pieces to them and add them to I3. • Fault Tolerance: At each Scheduling Point, check whether any processors have timed out. If so, delete those nodes. (See the sketch below.)
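A minimal sketch of the per-Scheduling-Point checks (again not the authors' code; the timeout value, node names, and the probe_sent_at bookkeeping are assumptions for illustration):

```python
TIMEOUT = 30.0                               # illustrative timeout, in seconds

def at_scheduling_point(now, new_nodes, probe_sent_at, i3):
    # Accept new nodes: record when their probe was sent and track them in I3.
    for node in new_nodes:
        probe_sent_at[node] = now
        i3.add(node)
    # Fault tolerance: drop any probed node that has stayed silent too long.
    for node in list(i3):
        if now - probe_sent_at[node] > TIMEOUT:
            i3.discard(node)
            probe_sent_at.pop(node)

probe_sent_at = {"n5": 0.0}                  # n5 was probed long ago, never replied
i3 = {"n5"}
at_scheduling_point(now=40.0, new_nodes=["n6"], probe_sent_at=probe_sent_at, i3=i3)
print(i3)                                    # {'n6'}: n5 timed out and was deleted
```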

  20. Simulation • Initial configuration: total workload = 1000, p = 100, 8 nodes available initially.

  21. Experiment 1 – Static Environment • Homogeneous: CPU speed = 1000, network speed = 10 • Heterogeneous

  22. Experiment 1

  23. Experiment 2

  24. Experiment 3

  25. Experiment 4: New Nodes Available

  26. Experiment 5: Fault Tolerance

  27. Conclusion • If some nodes are significantly slower than the others, our algorithm is better. • If the probing information is not accurate, our algorithm is better. • If, over the long term, the network and processor average speeds are stable, a single-round algorithm will beat a multi-round one. • Our algorithm can take advantage of newly available processors. • Our algorithm is fault-tolerant.

  28. Future work • More accurate distribution in the second round. • More evaluation to find the relation between performance and the parameters. • A mechanism to decide whether we should accept processors that were discarded before.

  29. Questions? • Thank you!
