
Scheduling Jobs with Varying Parallelizability



  1. Scheduling Jobs with Varying Parallelizability Ravishankar Krishnaswamy, Carnegie Mellon University

  2. Outline • Motivation • Formalization of Problems • A Concrete Example • Generalizations

  3. Motivation • Consider a scheduler on a multi-processor system • Different jobs of varying importance arrive “online” • Each job is inherently decomposed into stages • Each stage has some degree of parallelism • Scheduler is not aware of these • Only knows when a job completes! • Can we do anything “good”?

  4. Formalization • Scheduling Model: online, preemptive, non-clairvoyant • Job Types: varying degrees of parallelizability • Objective: minimize the average flowtime (or some other function of the flowtimes)

  5. Explaining away the terms • Online • we know of a new job only at the time when it arrives. • Non-Clairvoyance • we know “nothing” about the job • like runtime, extents of parallelizability, etc. • job tells us “it’s done” once it is completely scheduled

  6. Explaining away the terms • Varying Parallelizability [ECBD97] • Each job is composed of different stages • Each stage r has a degree of parallelizability Γr(p) • How parallelizable the stage is, given p machines • [Figure: a sublinear Γ(p) curve; beyond its saturation point it makes no sense to allocate more cores]

  7. Special Case: Fully Parallelizable Stage • [Figure: Γ(p) = p, a straight line through the origin]

  8. Special Case: Sequential Stage • [Figure: Γ(p) = 1 for all p ≥ 1] • In general, Γ is assumed to be non-decreasing and sublinear
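
The two special cases and the general sublinear assumption can be sketched as speedup functions. A minimal illustration; the function names here are my own, not from the slides:

```python
# Sketch of the speedup-curve model: a stage's rate of progress with p
# processors is Gamma(p), assumed non-decreasing and sublinear.

def gamma_parallel(p):
    """Fully parallelizable stage: Gamma(p) = p."""
    return p

def gamma_sequential(p):
    """Sequential stage: Gamma(p) = 1 for any p >= 1; extra cores are wasted."""
    return 1 if p >= 1 else 0

def gamma_capped(p, k):
    """A hypothetical sublinear example: linear speedup up to k cores, then flat."""
    return min(p, k)
```

All three satisfy the non-decreasing, sublinear requirement; the last one is the shape sketched on slide 6, where allocating cores beyond the saturation point buys nothing.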

  9. Explaining away the terms • Objective Function • Average flowtime: minimize Σj (Cj − aj) • L2 norm of flowtime: minimize Σj (Cj − aj)² • etc.
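
As a quick illustration of these objectives (the helper names are my own):

```python
# Flowtime objectives for completion times C[j] and arrival times a[j].

def total_flowtime(C, a):
    """Sum of flowtimes, Sigma_j (C_j - a_j); minimizing this minimizes the average."""
    return sum(c - r for c, r in zip(C, a))

def l2_flowtime(C, a):
    """Sum of squared flowtimes, Sigma_j (C_j - a_j)^2, as on the slide."""
    return sum((c - r) ** 2 for c, r in zip(C, a))
```

For two jobs arriving at times 0 and 1 and completing at 3 and 5, the flowtimes are 3 and 4, so the sums are 7 and 25; squaring penalizes the one long flowtime much more heavily.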

  10. Can we do anything at all? • Yes, with a little bit of resource augmentation • The online algorithm uses sm machines to schedule an instance on which OPT can only use m machines • There is an O(1/ε)-competitive algorithm with (2+ε)-speed augmentation for minimizing average flowtime [E99]

  11. Outline • Motivation • Formalization of Problems • A Concrete Example • Generalizations

  12. The Case of Unweighted Flowtimes • The instance has n jobs arriving online and m processors (the online algorithm can use sm machines) • Each job has several stages, each with its own 'degree of parallelizability' curve • Minimize the average flowtime of the jobs (or, by scaling, the total flowtime)

  13. The Case of Unweighted Flowtimes • Algorithm (EQUI): • At each time t, let NA(t) be the set of unfinished jobs in our algorithm's queue • Devote an sm/|NA(t)| share of the processors to each such job • This is O(1)-competitive with O(1) augmentation (in their paper, Edmonds and Pruhs [EP09] get a (1+ε)-speed O(1/ε²)-competitive algorithm)
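
The equal-split rule above can be sketched as a toy discrete-time simulation. The job encoding and function name are illustrative, not from the paper: each job is a list of (work, gamma) stages, where gamma(p) gives the stage's rate of progress on p processors:

```python
# Toy simulation of EQUI: at each step, split the s*m processors equally
# among the currently unfinished jobs.

def equi_flowtime(jobs, m, s, dt=0.01, horizon=1000.0):
    """jobs: list of (arrival, [(work, gamma), ...]). Returns total flowtime."""
    state = [{"arr": a, "stages": [list(st) for st in sts], "done": None}
             for a, sts in jobs]
    t = 0.0
    while t < horizon:
        alive = [j for j in state if j["arr"] <= t and j["done"] is None]
        if alive:
            share = s * m / len(alive)  # equal split of the augmented capacity
            for j in alive:
                work, gamma = j["stages"][0]
                j["stages"][0][0] = work - gamma(share) * dt  # progress this step
                if j["stages"][0][0] <= 1e-12:
                    j["stages"].pop(0)          # stage finished; move to the next
                    if not j["stages"]:
                        j["done"] = t + dt      # whole job finished
        if all(j["done"] is not None for j in state):
            break
        t += dt
    return sum(j["done"] - j["arr"] for j in state)
```

With one fully parallelizable job of work 1 on m = 1, s = 1, the job gets the whole machine and finishes at time about 1; with two such jobs arriving together, each gets half the machine and both finish at about time 2.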

  14. High Level Proof Idea [E99] • Step 1: Reduce the general instance to a restricted extremal instance, in which each stage is either fully parallelizable or fully sequential • The non-clairvoyant algorithm can't distinguish the two instances • Also ensure OPTR ≤ OPTG • Step 2: Solve the extremal case: show that EQUI (with augmentation) is O(1)-competitive against OPTR

  15. Reduction to Extremal Case • Consider an infinitesimally small time interval [t, t+dt) • ALG gives p processors to some job j; OPT gives p* processors to get the same work done (before or after t), so Γ(p) dt = Γ(p*) dt* • If p < p*, replace this work with p·dt of "parallel work" • If p ≥ p*, replace this work with dt of "sequential work" • ALG is oblivious to the change • OPT can fit the new work in-place

  16. High Level Proof Idea [E99] • General Instance → Restricted Extremal Instance → Solve Extremal Case

  17. Amortized Competitiveness Analysis • The flowtime Σj (Cj − aj) equals ∫ |N(t)| dt: each alive job contributes 1 at each time t it is alive • So the objective function rises at rate |NA(t)| at time t • We would be done if we could show |NA(t)| ≤ O(1)·|NO(t)| for all t
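
The identity behind this slide, total flowtime equals the time-integral of the number of alive jobs, can be sanity-checked numerically (the helper name is my own):

```python
# Numerically integrate |N(t)|, the number of jobs alive at time t,
# and compare against the direct flowtime sum Sigma_j (C_j - a_j).

def alive_integral(arrivals, completions, dt=0.001):
    """Riemann-sum approximation of the integral of |N(t)| over time."""
    end = max(completions)
    total, t = 0.0, 0.0
    while t < end:
        total += sum(1 for a, c in zip(arrivals, completions) if a <= t < c) * dt
        t += dt
    return total

# Two jobs: arrivals 0 and 1, completions 3 and 5.
# Direct sum: (3 - 0) + (5 - 1) = 7; the integral matches up to discretization.
```

This is why bounding |NA(t)| pointwise by |NO(t)| would immediately bound the competitive ratio.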

  18. Amortized Competitiveness Analysis • Sadly, we can't show that • There are situations where |NA(t)| is 100 and |NO(t)| is 10, and vice versa • Way around: use some kind of global accounting, transferring between times when we're way behind OPT and times when OPT pays a lot more than us

  19. Banking via a Potential Function • Resort to an amortized analysis • Define a potential function Φ(t) which is 0 at t = 0 and at t = ∞ • Show the following: • At any job arrival, ΔΦ ≤ α·ΔOPT (where ΔOPT is the increase in future OPT cost due to the arriving job) • At all other times, |NA(t)| + dΦ/dt ≤ β·|NO(t)| • This gives an (α+β)-competitive online algorithm
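
The two conditions combine by a standard telescoping argument. A sketch, assuming the running condition takes the usual form |NA(t)| + dΦ/dt ≤ β·|NO(t)| (the slide leaves it implicit):

```latex
% Integrate the running condition over time. The continuous changes of \Phi
% telescope against the jumps at job arrivals, since \Phi(0)=\Phi(\infty)=0:
\begin{align*}
\mathrm{ALG} \;=\; \int_0^\infty |N_A(t)|\,dt
  \;&\le\; \beta\cdot\mathrm{OPT} \;+\; \sum_{\text{arrivals}} \Delta\Phi \\
  \;&\le\; \beta\cdot\mathrm{OPT} \;+\; \alpha\cdot\mathrm{OPT}
  \;=\; (\alpha+\beta)\cdot\mathrm{OPT}.
\end{align*}
```

The boundary conditions Φ(0) = Φ(∞) = 0 are what make the continuous decrease of Φ exactly absorb the arrival-time increases.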

  20. For our Problem • Define a potential function based on ranks and lags: • rank(j) is the position of j in the sorted order of jobs w.r.t. arrival times (the most recent arrival has the highest rank) • ya(j,t) − yo(j,t) is the 'lag' in parallel work the algorithm has relative to the optimal solution
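
The slide's actual formula was an image that did not survive extraction; a standard choice consistent with the surrounding slides (which use terms of the form rank(j)·max(0, ya(j) − yo(j))) is Φ(t) = c·Σj rank(j)·max(0, ya(j,t) − yo(j,t)) for a constant c depending on ε. A sketch under that assumption:

```python
# Hedged reconstruction of the potential: each unfinished job contributes its
# rank times its (non-negative) lag in parallel work behind the optimum.

def potential(ranks, ya, yo, c=1.0):
    """Phi(t) = c * Sigma_j rank(j) * max(0, ya(j,t) - yo(j,t)).
    ranks[j], ya[j], yo[j] are the rank and work quantities of job j at time t.
    The constant c is an assumption here (it depends on epsilon in the analysis)."""
    return c * sum(r * max(0.0, ya[j] - yo[j]) for j, r in enumerate(ranks))
```

Note the max(0, ·): a job on which the algorithm is not behind contributes nothing, which is exactly the case analysis slides 21 and 23 walk through.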

  21. Arrivals and Departures are OK • Recall the definition of Φ • When a new job arrives, ya(j,t) = yo(j,t), so its term contributes 0 • For all other jobs, rank(j) remains unchanged • Also, when our algorithm completes a job, some ranks may decrease, but since every term is non-negative this causes no increase in potential

  22. Running Condition • Recall the definition of Φ • At any time instant, Φ increases due to OPT working and decreases due to ALG working • Notice that in the worst case, OPT is working on the most recently arrived job, and hence Φ increases at a rate of at most |NA(t)|

  23. What goes up must come down... • Φ drops as long as the algorithm is working on jobs in their parallel phase on which OPT is ahead • Jobs in a sequential phase cause no decrease in Φ • If ALG is ahead of OPT on a job in its parallel phase, then max(0, ya(j) − yo(j)) = 0, so working on it gives no drop either • Suppose very few jobs (say, |NA(t)|/100) are 'bad' in these ways • Then the algorithm's work drops Φ for most jobs • The drop is large enough to counter both ALG's instantaneous cost and the increase in Φ due to OPT working

  24. Putting it all together • So in the good case, we have the required running condition • Handling bad jobs: • If ALG is leading OPT on at least |NA(t)|/200 jobs, we can charge the LHS to 400·|NO(t)| • If more than |NA(t)|/200 jobs are in a sequential phase, OPT must pay 1 for each of these jobs at some point in the past/future (observe that no point of OPT is double-charged) • Integrating over time, we get c(ALG) ≤ 400·c(OPT) + 400·c(OPT) = 800·c(OPT)

  25. Outline • Motivation • Formalization of Problems • A Concrete Example • Generalizations

  26. Minimizing the L2 Norm of Flowtimes [GIKMP10] • Round-robin does not work: • One job arrives at t = 0 and has some parallel work to do • Subsequently, unit-sized sequential jobs arrive at every time step • Optimal solution: just work on the first job • Round-robin wastes a lot of cycles on the subsequent jobs and incurs a much larger cost on job 1 (because the flowtime is squared)
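
Toy numbers for why squaring hurts here. The constants below are illustrative back-of-the-envelope figures, not the exact lower-bound construction:

```python
# One big parallel job of size n plus about n unit-size sequential jobs.
n = 100

# Finishing the big job first: its flowtime is about n, and each unit job's
# flowtime stays O(1) (a sequential job needs only one processor anyway).
opt_cost = n**2 + n * 1**2

# If a round-robin-like rule shares cycles with the stream, the big job's
# completion is delayed to roughly 2n, and squaring makes that delay dominate.
rr_like_cost = (2 * n)**2 + n * 1**2

assert rr_like_cost > 3 * opt_cost  # the L2 objective blows up for round-robin
```

The gap grows with n, which is why the fix on the next slide weights the round-robin shares by job age instead of splitting equally.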

  27. To Fix the Problem • Need to consider "weighted" round-robin, where the age of a job is its weight • Generalize the earlier potential function to: • handle ages/weights • deal with the fact that sequential parts can't be charged directly to the optimum (if they were executed at different ages; this didn't matter in L1 minimization) • Get an O(1/ε³)-competitive algorithm with (2+ε)-speed augmentation

  28. Other Generalizations • [CEP09] consider the problem of scheduling such jobs on machines whose speeds can be altered, with the objective of minimizing flowtime + energy; they give an O(α² log m)-competitive online algorithm • [BKN10] use a similar potential-function-based analysis to get (1+ε)-speed O(1)-competitive algorithms for broadcast scheduling

  29. Conclusion • Looked at a model where jobs have varying degrees of parallelism, with non-clairvoyant scheduling • Outlined an O(1)-augmentation O(1)-competitive analysis • Described a couple of recent generalizations • Open Problems: • Improve the augmentation requirement, or show a lower bound, for Lp norm minimization • Close the gap in the flowtime + energy setting

  30. Thank You! Questions?

  31. References • [ECBD97] Jeff Edmonds, Donald D. Chinn, Tim Brecht, Xiaotie Deng: Non-clairvoyant Multiprocessor Scheduling of Jobs with Changing Execution Characteristics. STOC 1997. • [E99] Jeff Edmonds: Scheduling in the Dark. STOC 1999. • [EP09] Jeff Edmonds, Kirk Pruhs: Scalably Scheduling Processes with Arbitrary Speedup Curves. SODA 2009. • [CEP09] Ho-Leung Chan, Jeff Edmonds, Kirk Pruhs: Speed Scaling of Processes with Arbitrary Speedup Curves on a Multiprocessor. SPAA 2009. • [GIKMP10] Anupam Gupta, Sungjin Im, Ravishankar Krishnaswamy, Benjamin Moseley, Kirk Pruhs: Scheduling Processes with Arbitrary Speedup Curves to Minimize Variance. Manuscript, 2010.
