
Managing Parallel Computational Tasks in a Grid Environment


Presentation Transcript


  1. Managing Parallel Computational Tasks in a Grid Environment. Institute for System Programming, Russian Academy of Sciences. A.I. Avetisyan, S.S. Gaissarian, D.A. Grushin, N.N. Kuzjurin, and A.V. Shokurov

  2. Task allocation by brokers

  3. Task Description. 1) Maximum P_j and minimum p_j number of processors for task j. 2) t(p), p_j ≤ p ≤ P_j: execution time on p processors. 3) Deadline d_j. 4) Cost I_j. Example: P_j = p_j, I_j = p_j * t(p_j).
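
The task description above maps onto a small record plus a cost function. A minimal sketch in Python, with illustrative names (Task, p_min, p_max, exec_time, deadline are mine, not the slides'):

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Task:
        p_min: int                          # minimum number of processors p_j
        p_max: int                          # maximum number of processors P_j
        exec_time: Callable[[int], float]   # t(p), defined for p_min <= p <= p_max
        deadline: float                     # deadline d_j

    def cost(task: Task, p: int) -> float:
        # Cost I_j = p * t(p); with the rigid example P_j = p_j this is p_j * t(p_j).
        return p * task.exec_time(p)

    # A made-up rigid task (P_j = p_j = 64, t(p_j) = 100 time units).
    rigid = Task(p_min=64, p_max=64, exec_time=lambda p: 100.0, deadline=500.0)
    print(cost(rigid, 64))   # 6400.0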

  4. Task allocation by brokers: task j • Completion time < d_j • Minimize cost • The broker chooses the cluster that minimizes the cost and satisfies the deadline
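
A rough sketch of that broker rule, assuming each cluster object exposes two hypothetical helpers, finish_time(task) and price(task) (neither name is from the slides):

    def choose_cluster(task, clusters):
        # Keep only clusters that would finish the task by its deadline d_j.
        feasible = [c for c in clusters if c.finish_time(task) <= task.deadline]
        if not feasible:
            return None   # task stays unallocated (cf. the "Not allocated" counts later)
        # Among the feasible clusters, pick the one offering the lowest cost.
        return min(feasible, key=lambda c: c.price(task))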

  5. Scheduling Tasks by Clusters. Scheduling parallel tasks is an NP-hard optimization problem. Approximation algorithms and heuristics; the Bottom-Left algorithm.

  6. Geometric representation of tasks. P_j = p_j: number of processors; t(p_j): execution time. [Figure: each task drawn as a rectangle of width p_j and height t(p_j)]

  7. Scheduling as strip-packing
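
In the strip-packing view each task is a rectangle (width = number of processors, height = execution time) and the strip width is the cluster size. A minimal Bottom-Left sketch under those assumptions; this is an illustrative variant, not necessarily the authors' exact implementation:

    def bottom_left_pack(tasks, strip_width):
        # Place each (w, h) rectangle as low as possible, then as far left as possible.
        # Returns (x, y, w, h) placements; y is the task's start time on the cluster.
        # Assumes every width fits the strip (w <= strip_width).
        placed = []
        for w, h in tasks:
            best = None
            # Candidate x positions: the left wall and right edges of placed rectangles.
            for x in sorted({0} | {px + pw for px, py, pw, ph in placed}):
                if x + w > strip_width:
                    continue
                # Lowest start time at which [x, x + w) overlaps no placed rectangle.
                y = 0
                for px, py, pw, ph in placed:
                    if px < x + w and x < px + pw:   # horizontal overlap
                        y = max(y, py + ph)          # must start after that task ends
                if best is None or (y, x) < best:
                    best = (y, x)
            y, x = best
            placed.append((x, y, w, h))
        return placed

    # Example: three rigid tasks on a 128-processor cluster.
    layout = bottom_left_pack([(64, 10), (64, 20), (128, 5)], strip_width=128)
    makespan = max(y + h for x, y, w, h in layout)   # 25: the two 64-wide tasks run side by side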

  8. Scheduling parallel tasks by a cluster: notations. RT - relative throughput. Throughput is the number of tasks completed before the deadline.

  9. Scheduling parallel tasks by a cluster: notations. Density D; intensity I; deadline T; t_i - execution time of task i; T_total - total throughput of all clusters.
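
The exact formulas behind D, I, and RT did not survive extraction, so the sketch below only shows one common reading: density as the total task area sum(p_i * t_i) relative to the capacity total_processors * T, and RT as the fraction of tasks finished before the deadline. Both definitions are assumptions, not copied from the slides:

    def density(tasks, total_processors, window):
        # tasks: iterable of (p_i, t_i) pairs; window: the deadline T.
        area = sum(p * t for p, t in tasks)
        return area / (total_processors * window)

    def relative_throughput(finish_times, deadline):
        # Throughput = number of tasks completed before the deadline;
        # RT here normalizes that count by the number of submitted tasks.
        done = sum(1 for f in finish_times if f <= deadline)
        return done / len(finish_times)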

  10. Simulation • Six clusters: 2 with 512 processors, 2 with 256 processors, 2 with 128 processors • Random set of tasks: L% with 256 < p_j < 512, M% with 128 < p_j < 256, S% with 10 < p_j < 128
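
A small task generator in the spirit of this setup; the default share split, the exact bounds, and the execution-time range are placeholders chosen for illustration, not values from the slides:

    import random

    def random_tasks(n, large=0.20, medium=0.32):
        # Generate n (p_j, t_j) pairs: roughly large% large, medium% medium, the rest small.
        tasks = []
        for _ in range(n):
            r = random.random()
            if r < large:
                p = random.randint(257, 512)    # large:  256 < p_j <= 512
            elif r < large + medium:
                p = random.randint(129, 256)    # medium: 128 < p_j <= 256
            else:
                p = random.randint(10, 128)     # small:  10 <= p_j <= 128
            t = random.uniform(10.0, 100.0)     # execution time: illustrative range only
            tasks.append((p, t))
        return tasks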

  11. Simulation: notations • RT - relative throughput • Tasks sorted by their widths (width = number of processors) • RT(unsorted) and RT(sorted)
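
Putting the earlier sketches together: the sorted variant orders tasks by width before packing (decreasing order is my assumption; the slide only says sorting by widths), and RT can then be compared for the two orders on one cluster:

    tasks = random_tasks(300)                        # generator sketched above
    by_width = sorted(tasks, key=lambda pt: pt[0], reverse=True)

    unsorted_layout = bottom_left_pack(tasks, strip_width=512)
    sorted_layout = bottom_left_pack(by_width, strip_width=512)

    deadline = 4500
    rt_unsorted = relative_throughput([y + h for x, y, w, h in unsorted_layout], deadline)
    rt_sorted = relative_throughput([y + h for x, y, w, h in sorted_layout], deadline)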

  12. Computational Experiments: one broker
  Table 1. Relative throughput RT (L = 10%)
  Density   RT (unsorted)   RT (sorted)
  0.9696    0.8314          0.8783
  0.9232    0.8759          0.9184
  0.8542    0.9310          0.9963
  0.7857    0.9861          1.0

  13. Computational Experiments: one broker
  Table 2. Relative throughput RT (L = 20%)
  Density   RT (unsorted)   RT (sorted)
  0.9739    0.7703          0.8769
  0.9240    0.7995          0.9308
  0.9008    0.8160          0.9471
  0.8008    0.9163          1.0
  0.7207    1.0             1.0

  14. Computational Experiments: 5 brokers • Small tasks: 134 (46%) • Medium tasks: 94 (32%) • Large tasks: 60 (20%) • Deadline: 4500 • Density: 0.9915203373015873 • Allocation: 0.7258018765273988 • Cluster{512}: 43 • Cluster{512}: 42 • Cluster{256}: 42 • Cluster{256}: 39 • Cluster{128}: 52 • Cluster{128}: 47 • Not allocated: 23

  15. Computational Experiments: 5 brokers • Small tasks: 134 (46%) • Medium tasks: 94 (32%) • Large tasks: 60 (20%) • Deadline: 4800 • Density: 0.9295503162202381 • Allocation: 0.77119447897724 • Cluster{512}: 43 • Cluster{512}: 38 • Cluster{256}: 39 • Cluster{256}: 44 • Cluster{128}: 56

  16. Computational Experiments: 5 brokers • Small tasks: 134 (46%) • Medium tasks: 94 (32%) • Large tasks: 60 (20%) • Deadline: 5000 • Density: 0.8923683035714286 • Allocation: 0.8093943934304031 • Cluster{512}: 39 • Cluster{512}: 41 • Cluster{256}: 44 • Cluster{256}: 42 • Cluster{128}: 56 • Cluster{128}: 50 • Not allocated: 16

  17. Computational Experiments: 5 brokers • Small tasks: 134 (46%) • Medium tasks: 94 (32%) • Large tasks: 60 (20%) • Deadline: 5980 • Density: 0.7461273441232681 • Allocation: 1.0 • Cluster{512}: 41 • Cluster{512}: 40 • Cluster{256}: 46 • Cluster{256}: 41 • Cluster{128}: 61 • Cluster{128}: 59 • Not allocated: 0

  18. Future work • 1) Better scheduling (strip-packing) algorithms • 2) Migration of tasks • 3) Variable execution times
