
Chapter 3 Task Assignment and Scheduling








  1. Chapter 3 Task Assignment and Scheduling • 3.1 Introduction • 3.2 Rate monotonic analysis • 3.3 Other uniprocessor scheduling algorithms • 3.4 Task assignment • 3.5 Fault-tolerant scheduling • Objective: look at techniques for allocating and scheduling tasks to ensure that deadlines are met

  2. Introduction • Real-time computing objective: execute its control tasks by the appropriate deadlines • Objective of this chapter: techniques for allocating and scheduling tasks on processors to ensure that deadlines are met

  3. Scheduling • [Figure: growth of scheduling research, from 1970 onward]

  4. The allocation/scheduling problem can be stated as follows: • Given a set of factors affecting allocation/scheduling • Tasks (which consume resources): • number of tasks, priorities • task characteristics • periodicity • timing constraints • task precedence constraints (best described using a precedence graph) • resource requirements • inter-task interactions • We are asked to devise a feasible allocation/schedule on a given computer

  5. Precedence Graph • [Figure: precedence graph over tasks T1–T8]

  6. Precedence Graph • The arrows indicate which task has precedence over which other task. • We denote the precedence set of task T by <(T); that is, <(T) indicates which tasks must be completed before T can begin.

  7. Precedence Graph (Cont.) • [Figure: precedence graph over tasks T1–T8] • <(1) = ∅ • <(2) = {1} • <(3) = {1} • <(4) = {1} • <(5) = {1,2,3} (immediate ancestors: {2,3}) • <(6) = {1,3,4} • <(7) = {1,3,4,6} • <(8) = {1,3,4,6,7}

  8. Precedence Graph (Cont.) • We can also write i < j to indicate that task Ti must precede task Tj; equivalently, j > i. • The precedence operator is transitive: i < j and j < k ⇒ i < k. • For a more economical representation, list only the immediate ancestors in the precedence set, e.g. <(5) = {2,3}, since <(2) = {1} already implies 1 < 5.
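To make the precedence-set idea concrete, here is a minimal Python sketch (not part of the original slides). The immediate-ancestor lists are an assumption reconstructed from the sets listed on slide 7; the function recovers the full precedence set <(T) by transitive closure.

```python
# Minimal sketch: computing full precedence sets <(T) from immediate-ancestor
# lists. The edge lists below are an assumption based on the sets on slide 7.
immediate = {
    1: set(),
    2: {1},
    3: {1},
    4: {1},
    5: {2, 3},
    6: {3, 4},
    7: {6},
    8: {7},
}

def precedence_set(task, immediate):
    """Return every task that must complete before `task` can begin
    (transitive closure of the immediate-ancestor relation)."""
    result = set()
    stack = list(immediate[task])
    while stack:
        t = stack.pop()
        if t not in result:
            result.add(t)
            stack.extend(immediate[t])
    return result

# precedence_set(8, immediate) -> {1, 3, 4, 6, 7}, matching slide 7.
```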

  9. Each task has: • Resource requirements: each task requires resources, e.g. processor execution time, memory, or access to a bus. Depending on its usage, a resource may be exclusively held by a task or shared. • Release time: the time at which all the data required to begin executing the task are available. • Deadline: the time by which the task must complete its execution (a deadline may be hard or soft). • Relative deadline = absolute deadline − release time.
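As an illustration of these attributes, a small sketch follows (the class and field names are assumptions, not from the slides); it derives the relative deadline from the absolute deadline and the release time exactly as defined above.

```python
# Minimal sketch (illustrative only): the task attributes from slide 9.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    release_time: float       # all required input data available
    absolute_deadline: float  # time by which the task must complete
    execution_time: float     # CPU time required (a resource requirement)

    @property
    def relative_deadline(self) -> float:
        # Relative deadline = absolute deadline - release time (slide 9)
        return self.absolute_deadline - self.release_time

t = Task("T1", release_time=2.0, absolute_deadline=12.0, execution_time=3.0)
assert t.relative_deadline == 10.0
```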

  10. Each task … (cont.) • Periodic task • released every Pi seconds; the constraint is that it must run exactly once every period • the relative deadline is generally no greater than the period • Sporadic task • not periodic, but with an upper bound on the rate at which it may be invoked • arrives at irregular intervals • Aperiodic task • not periodic, and with no upper bound on its invocation rate

  11. [Figure: precedence graph over tasks T1–T8] • Precedence constraints • inter-task relationships • precedence graph • <(T): precedence set of task T • i < j: task Ti precedes task Tj • Resource requirements • exclusive • nonexclusive

  12. Characteristics of task assignment/scheduling • feasible schedule: a valid schedule by which every task completes by its deadline • task assignment: needed in the case of multiple processors • for a set of processors P, time t, and set of tasks Τ, the schedule S is a function S: P × t → Τ, where S(i, t) is the task scheduled to run on processor i at time t • online (dynamic) vs. offline (precomputed) scheduling • static (priorities do not change within a mode) vs. dynamic priority algorithms • preemptive vs. nonpreemptive scheduling
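A minimal sketch of the schedule function S(i, t) represented as a lookup table follows; the slot length and the table contents are made-up example values, not anything given in the slides.

```python
# Minimal sketch: a precomputed schedule stored as a table implementing
# the function S(i, t) from slide 12. All values are invented examples.
SLOT = 1.0  # one scheduling decision per time unit (assumption)
table = {
    # (processor, slot index) -> task name; None means idle
    (0, 0): "T1", (0, 1): "T1", (0, 2): "T3",
    (1, 0): "T2", (1, 1): None, (1, 2): "T2",
}

def S(processor: int, t: float):
    """Task scheduled to run on `processor` at time t (None if idle)."""
    return table.get((processor, int(t // SLOT)))

assert S(0, 1.5) == "T1"
assert S(1, 1.2) is None
```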

  13. Inter-task interactions • inter-task communication • synchronous • asynchronous • mutual exclusion problem (synchronization) • priority inversion • chained blocking • deadlock

  14. Assignment/scheduling problems • Most problems involving more than two processors are intractable, so we must make do with heuristics. The heuristics are motivated by the fact that uniprocessor scheduling problems are often tractable. • Thus, multiprocessor scheduling is divided into two steps: 1) assign tasks to processors; 2) run a uniprocessor scheduling algorithm to schedule the tasks allocated to each processor. If one or more of the resulting schedules is not feasible, we must either return to the allocation step and change the allocation, or declare that a schedule cannot be found.

  15. Developing a multiprocessor schedule • [Flowchart] make an allocation → schedule each processor based on the allocation → are all these schedules feasible? If yes, output the schedule; if no, check the stopping criterion and either change the allocation and try again, or stop and declare failure.
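One possible rendering of this flowchart as code is sketched below; the allocation routine, the uniprocessor feasibility test, and the attempt-budget stopping criterion are all assumed, caller-supplied pieces rather than anything prescribed by the slides.

```python
# Minimal sketch of the allocate-then-schedule loop on slide 15.
def develop_multiprocessor_schedule(tasks, processors,
                                    make_allocation,
                                    uniprocessor_feasible,
                                    max_attempts=100):
    excluded = []  # allocations already tried and rejected
    for _ in range(max_attempts):                # stopping criterion: attempt budget
        allocation = make_allocation(tasks, processors, excluded)
        if allocation is None:                   # no further allocations to try
            break
        # Schedule each processor independently with a uniprocessor algorithm
        # (e.g. RM or EDF) and check feasibility of every per-processor schedule.
        if all(uniprocessor_feasible(allocation[p]) for p in processors):
            return allocation                    # output schedule
        excluded.append(allocation)              # change allocation and retry
    return None                                  # declare failure
```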

  16. Overview • Uniprocessor scheduling algorithms • traditional rate-monotonic (RM) • rate-monotonic deferred server (DS) • earliest deadline first (EDF) • precedence and exclusion conditions • multiple task versions • IRIS tasks • increased reward with increased service • mode changes

  17. Multiprocessor scheduling • utilization balancing algorithm • next-fit algorithm • bin-packing algorithm • myopic offline scheduling algorithm • focused addressing and bidding algorithm • assignment with precedence constraints • critical sections • fault-tolerant scheduling

  18. Notation • n number of tasks in the task set • ci execution time of task τi • Ti period of periodic task τi • Ii phase of periodic task τi • di relative deadline of task τi • Di absolute deadline of task τi • ri release time of task τi

  19. Commonly Used Approaches • Weighted round-robin approach • tasks wait in a FIFO queue • a task with weight wt gets wt time slices every round • suitable for scheduling real-time traffic in high-speed switched networks • a switch downstream can begin to transmit an earlier portion of a message upon receipt of that portion, without having to wait for the arrival of the later portions • no sorted priority queue is needed, which speeds up scheduling
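A small sketch of the weighted round-robin idea follows; the task names and weights are invented examples, not values from the slides.

```python
# Minimal sketch: weighted round-robin as described on slide 19. Each task
# in the FIFO-ordered queue gets `weight` time slices per round.
from collections import deque

def weighted_round_robin(tasks, rounds=1):
    """tasks: deque of (name, weight); yields the task run in each slice."""
    for _ in range(rounds):
        for name, weight in tasks:          # one pass over the FIFO order
            for _ in range(weight):         # wt slices for a task of weight wt
                yield name

queue = deque([("T1", 2), ("T2", 1), ("T3", 3)])
print(list(weighted_round_robin(queue)))
# ['T1', 'T1', 'T2', 'T3', 'T3', 'T3']
```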

  20. Priority-driven approach • never leaves any resource idle intentionally • greedy scheduling, list scheduling, work-conserving scheduling • most scheduling algorithms used in non-real-time systems are priority-driven • preemptive vs. nonpreemptive

  21. Clock-driven (time-driven) approach • tasks and their timing constraints are known a priori, except for aperiodic tasks • relies on hardware timers • a static schedule • constructed off-line • cyclic schedule: periodic static schedule • clock-driven schedule: cyclic schedule for hard real-time tasks • foreground/background approach • foreground: interrupt-driven scheduling • background: cyclic executive (“big loop”)

  22. Foreground/Background Systems • [Figure: foreground code blocks driven by interrupts (ISRs) alongside a background loop; ISR: interrupt service routine]

  23. A clock-driven scheduler
  • Input: stored schedule (tk, τ(tk)) for k = 0, 1, …, N−1
  • Task SCHEDULER:
      set the next decision point i and table entry k to 0;
      set the timer to expire at tk;
      do forever:
        accept timer interrupt;
        if an aperiodic job is executing, preempt the job;
        current task τ = τ(tk);
        increment i by 1;
        compute the next table entry k = i mod N;
        set the timer to expire at ⌊i/N⌋·H + tk;
        if the current task τ is an idle interval (or idle task),
          let the job at the head of the aperiodic job queue execute;
        else, let the task τ execute;
        sleep;
      end SCHEDULER
  • (⌊·⌋: floor, H: hyperperiod, N: number of entries in the stored schedule for one hyperperiod)
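For comparison, here is a minimal simulation sketch of stepping through such a stored table; it is not the interrupt-driven pseudocode above, and the table contents are made-up values.

```python
# Minimal simulation sketch: walking a stored cyclic schedule table across
# hyperperiods, using the same wake-up rule floor(i/N)*H + t_k as slide 23.
H = 8.0                                   # hyperperiod (example value)
table = [(0.0, "T1"), (2.0, "T2"), (5.0, None), (6.0, "T1")]  # (t_k, task); None = idle
N = len(table)

def decision_points(num_decisions):
    """Yield (wake-up time, task to run) for the first num_decisions decisions."""
    for i in range(num_decisions):
        t_k, task = table[i % N]
        yield (i // N) * H + t_k, task

for when, task in decision_points(6):
    print(f"t={when:4.1f}: run {'aperiodic/idle' if task is None else task}")
```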

  24. 3.2 Rate Monotonic Analysis • Assumptions • A1. No nonpreemptible parts in a task, and negligible preemption cost • A2. Resource constraint on CPU time only • A3. No precedence constraints among tasks • A4. All tasks periodic • A5. Relative deadline = period

  25. Rate-Monotonic Scheduling (RMS) • Overview • rate-monotonic priority assignment: the higher the rate (the shorter the period), the higher the priority • schedulability is guaranteed if the total utilization is below a certain bound • notation for feasible schedules: fi = 1/Ti is the frequency (= rate) of task τi, and ci or Ci is its execution time
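One concrete form of this bound, the classic Liu and Layland sufficient test U ≤ n(2^(1/n) − 1), is sketched below; the bound itself is standard, while the example task set is invented.

```python
# Minimal sketch: the sufficient utilization test for rate-monotonic
# scheduling of n independent periodic tasks with deadline = period.
# Exceeding the bound does not necessarily mean the set is unschedulable.
def rm_utilization_test(tasks):
    """tasks: list of (c_i, T_i) pairs. True if the Liu-Layland bound holds."""
    n = len(tasks)
    utilization = sum(c / T for c, T in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilization <= bound

print(rm_utilization_test([(1, 4), (1, 5), (2, 10)]))  # U = 0.65 <= 0.78 -> True
```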

  26. 3.3 Other Uniprocessor Scheduling Algorithms • Period transformation for transient overload • a modified form of RM scheduling • Dynamic scheduling • earliest deadline first scheduling • least laxity first scheduling • Scheduling of IRIS tasks • imprecise computation • Scheduling of aperiodic tasks • Mode change

  27. Period Transformation • Period transformation for transient overload • changes periods to cope with transient overloads under RM scheduling; in effect, it copes with semantic criticality in RM scheduling • example tasks: T1: T1 = 12, C1 = 4, C1+ = 7; T2: T2 = 22, C2 = 10, C2+ = 14 (Ci+: worst case) • utilization: average = 0.79, worst case = 1.22 • problem: if T2 is hard real-time and T1 is soft (or not real-time), how can we guarantee T2's deadline in case of transient overload, and T1's deadline in the average case?

  28. (continued) • solution: boost the priority of T2 by reducing its period, i.e. replace T2 by T2′ with T2′ = T2/2, C2′ = C2/2, C2′+ = C2+/2 • an alternative: lower the priority of T1 by lengthening its period (in this case, double T1's parameters) • the new deadlines must still be acceptable
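The arithmetic behind this example, using only the parameters given on slide 27, can be checked with the short sketch below.

```python
# Worked numbers for the example on slides 27-28.
T1, C1, C1w = 12, 4, 7      # soft task (w = worst case)
T2, C2, C2w = 22, 10, 14    # hard task

avg   = C1 / T1 + C2 / T2   # ~0.79, within the RM bound for n=2 (~0.83)
worst = C1w / T1 + C2w / T2 # ~1.22, transient overload

# Period transformation: halve T2's period (splitting its execution time),
# so T2' = 11 < T1 = 12 and T2 gets the higher RM priority.
T2p, C2p, C2pw = T2 / 2, C2 / 2, C2w / 2
print(round(avg, 2), round(worst, 2), T2p < T1)   # 0.79 1.22 True
```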

  29. Earliest Deadline First Scheduling • EDF scheduling • a dynamic-priority policy: at each instant, the ready task with the earliest absolute deadline has the highest priority (not to be confused with the fixed-priority deadline-monotonic algorithm) • Properties • EDF is optimal for uniprocessors • for periodic tasks with relative deadlines equal to their periods: if the total utilization of the task set is no greater than 1, the task set can be feasibly scheduled on a single processor by EDF • allows preemptions
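A minimal sketch of the EDF utilization test from the property above, together with the earliest-deadline-first selection rule, follows; the example task sets are invented.

```python
# Minimal sketch: EDF schedulability test (deadline = period, one processor)
# and the EDF dispatching rule.
def edf_schedulable(tasks):
    """tasks: list of (c_i, T_i). Feasible under EDF iff total utilization <= 1."""
    return sum(c / T for c, T in tasks) <= 1.0

def edf_pick(ready):
    """ready: list of (name, absolute_deadline). Run the earliest deadline first."""
    return min(ready, key=lambda job: job[1])[0] if ready else None

print(edf_schedulable([(2, 5), (3, 10), (3, 15)]))          # U = 0.9 -> True
print(edf_pick([("T1", 12.0), ("T2", 8.0), ("T3", 20.0)]))  # 'T2'
```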

  30. Procedure 1. Sort the task instances that require execution in the time interval [0, L] in reverse topological order. 2. Initialize the deadline of the kth instance of task Ti to (k−1)Ti + di, if necessary. 3. Revise the deadlines in reverse topological order. 4. Select the task with the earliest (revised) deadline to execute.
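Step 3 is commonly implemented with the revision rule d_i* = min(d_i, min over successors j of (d_j* − c_j)); the slides do not spell the rule out, so the sketch below should be read as one plausible realization, with invented task data.

```python
# Minimal sketch of deadline revision in reverse topological order
# (successors first), assuming the rule d_i* = min(d_i, min_j(d_j* - c_j)).
# c = execution time, d = deadline; all values are invented examples.
tasks = {
    "T1": {"c": 2, "d": 20, "succ": ["T2", "T3"]},
    "T2": {"c": 3, "d": 10, "succ": []},
    "T3": {"c": 1, "d": 15, "succ": []},
}
reverse_topo = ["T2", "T3", "T1"]   # successors before predecessors

revised = {}
for name in reverse_topo:
    t = tasks[name]
    d = t["d"]
    for s in t["succ"]:
        d = min(d, revised[s] - tasks[s]["c"])
    revised[name] = d

print(revised)   # {'T2': 10, 'T3': 15, 'T1': 7}  (T1 must leave room for T2)
```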

  31. Uniprocessor Scheduling of IRIS Tasks • Introduction • such tasks need not run to completion; they are typically iterative algorithms • tasks of this type are known as increased reward with increased service (IRIS) tasks • reward function R(x), typically R(x) = 0 for x < m, R(x) = r(x − m) for m ≤ x ≤ m + o, and R(x) = r(o) for x > m + o, where r(x) is monotonically nondecreasing in x • m, o: execution times of the mandatory and optional parts, respectively

  32. 3.4 Task Assignment • Assignment of tasks to processors • uses heuristics, so we cannot guarantee that an allocation will be found that permits all tasks to be feasibly scheduled • must consider communication costs and the precedence of task completion • sometimes an allocation algorithm uses communication costs as part of its allocation criterion

  33. Utilization-balancing algorithm • The objective is to balance processor utilization; the algorithm allocates the tasks one by one, each time selecting the least utilized processor. • It can run multiple copies of a task for fault-tolerant systems:
  for each task Ti do
    allocate one copy of Ti to each of the ri least utilized processors;
    update the processor utilizations to account for the allocation of Ti
  end do
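A minimal sketch of this loop follows; the heap-based bookkeeping and the example utilizations and replication factors are assumptions for illustration, not part of the slide.

```python
# Minimal sketch: utilization balancing as on slide 33. Each task Ti, with
# utilization u_i and replication factor r_i, goes to the r_i least
# utilized processors.
import heapq

def utilization_balance(tasks, num_processors):
    """tasks: list of (name, u_i, r_i). Returns {processor: [task names]}."""
    load = [(0.0, p) for p in range(num_processors)]   # (utilization, id) min-heap
    heapq.heapify(load)
    placement = {p: [] for p in range(num_processors)}
    for name, u, r in tasks:
        chosen = [heapq.heappop(load) for _ in range(r)]  # r least utilized
        for util, p in chosen:
            placement[p].append(name)
            heapq.heappush(load, (util + u, p))           # update utilization
    return placement

print(utilization_balance([("T1", 0.4, 2), ("T2", 0.3, 1), ("T3", 0.2, 2)], 3))
```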

  34. A Next-fit algorithm for RM scheduling • Used in conjunction with RM • separation of allocation and scheduling • simplifies the scheduler to a local one • allocation: centralized, scheduler: distributed • objectives: • to partition a task set so that each partition is scheduled later for execution on a processor by RM scheduling • to use as few processors as possible • task characteristics • each task has constant period and deadline constraints • independent, no precedence constraints

  35. allocation algorithm • n tasks • ui: utilization factor of Ti • Pk,j: the jth class-k processor (the set of tasks assigned to it) • Nk: number of class-k processors used so far • tasks are divided into M classes by utilization; in the standard Next-Fit-M classification, Ti is in class k (1 ≤ k < M) if 2^(1/(k+1)) − 1 < ui ≤ 2^(1/k) − 1, and in class M if ui ≤ 2^(1/M) − 1 • the algorithm assigns k class-k tasks to each class-k processor, keeping the utilization factor of each class-M processor below ln 2

  36. Algorithm Next-Fit-M
  for k = 1 to M do set Nk = 1;
  set i = 1;
  while i <= n do
    if Ti is a task from class-k, 1 <= k < M, then
      assign Ti to Pk,Nk;
      if Pk,Nk now has k tasks assigned to it then set Nk = Nk + 1 endif
    else (Ti is a task from class-M)
      if the total utilization factor of the tasks assigned to PM,NM is greater than ln 2 − ui then set NM = NM + 1 endif
      assign Ti to PM,NM
    endif
    set i = i + 1
  endwhile
  if Pk,Nk has no task assigned to it then set Nk = Nk − 1

  37. Bin-packing assignment algorithm for EDF • for periodic, independent, preemptible tasks • bin-packing problem: assign tasks so that the sum of the utilization factors on each processor does not exceed 1, while minimizing the number of processors needed • first-fit decreasing algorithm (L: list of tasks sorted so that their utilizations are in non-increasing order, nT: number of tasks):
  Initialize i to 1. Set U(j) = 0 for all j.
  while i <= nT do
    Let j = min{k | U(k) + u(i) <= 1}
    Assign the i-th task in L to pj and set U(j) = U(j) + u(i)
    Set i = i + 1
  end while
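A minimal first-fit-decreasing sketch follows; the utilization values are invented examples.

```python
# Minimal sketch: first-fit decreasing assignment as on slide 37. Tasks are
# sorted by non-increasing utilization, each placed on the lowest-indexed
# processor whose total utilization stays <= 1 (EDF-feasible).
def first_fit_decreasing(utilizations):
    """utilizations: list of task utilization factors. Returns per-processor bins."""
    bins = []   # bins[j] = list of utilizations assigned to processor j
    for u in sorted(utilizations, reverse=True):
        for b in bins:
            if sum(b) + u <= 1.0:   # first processor that still fits
                b.append(u)
                break
        else:
            bins.append([u])        # open a new processor
    return bins

print(first_fit_decreasing([0.6, 0.5, 0.4, 0.3, 0.2]))
# [[0.6, 0.4], [0.5, 0.3, 0.2]] -> two processors suffice
```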

  38. Myopic Offline Scheduling (MOS) Algorithm • an offline algorithm: arrival times, execution times, and deadlines are given in advance • tasks are non-preemptive • considers not only processor resources but also other resources, such as memory • Schedule tree • MOS proceeds by building up a schedule tree • each node represents an assignment and scheduling of a subset of the tasks • the root of the schedule tree is an empty schedule • each child of a node consists of the schedule of its parent node, extended by one task • a leaf of this tree consists of a schedule of the entire task set

  39. Myopic Offline Scheduling (MOS) Algorithm • algorithm: i) start with an empty partial schedule; ii) determine whether the current partial schedule is strongly feasible: if so, proceed, else backtrack; iii) extend the current partial schedule by one task: (1) apply the heuristic function to the first Nk tasks in the task set, (2) choose the task with the smallest heuristic value to extend the current schedule • two questions remain: which task to choose, and when to stop

  40. Develop a node only if it is strongly feasible. If it is not feasible, we backtrack: we mark that node as hopeless and then go back to its parent.

  41. Focused addressing and bidding (FAB) • Introduction • online • distributed environment, loosely coupled • both critical and noncritical tasks • local scheduler: handles (critical) tasks arriving at a given node • global scheduler: schedules noncritical tasks across processor boundaries • global state

  42. FAB (cont.) • algorithms for global scheduling, i.e., for deciding to which node a task should be sent: • noncooperative algorithm: a node accepts a task only if it has enough resources; noncritical tasks are refused when resources are insufficient • random scheduling algorithm: if a processor's load exceeds its threshold, another processor is chosen at random and the task is sent to it • focused addressing algorithm: the overloaded processor checks its surplus information and selects a processor that it believes can process the task within its deadline; problem: the surplus information may be outdated • bidding algorithm: request-for-bid messages are sent out simultaneously, and lightly loaded nodes bid for the task • flexible algorithm: focused addressing + bidding

  43. focused addressing algorithm • FAS: focused addressing surplus, a tunable parameter • locally unschedulable tasks are sent to the node with the highest surplus (> FAS) • if no such node is found, the task is rejected

  44. bidding algorithm • first, select k nodes with sufficient surplus; k is chosen to maximize the chances of finding a node • a request-for-bid (RFB) message is sent to these nodes • the nodes that receive the RFB message • calculate a bid (= the likelihood that the task can be guaranteed) • send the bid to the bidder node if the bid exceeds the minimum bid required • the bidder sends the task to the node that offers the best bid • if no good bid is available, the task is rejected

  45. symbols • pi: a processor node with a newly arriving task that cannot be guaranteed locally • ps: the node selected by the focused-addressing algorithm • pt: a node that receives the RFB message • the flexible algorithm (FAB algorithm) • pi selects k nodes with sufficient surplus • if the largest surplus > FAS, the node with that surplus is chosen as the focused node (ps) and pi sends the task to ps immediately • in parallel, pi sends an RFB message to the remaining k−1 nodes; the RFB contains information about ps • when a node receives the RFB message, it calculates a bid and sends the bid to ps, if ps exists

  46. (continued) • when the task reaches ps • ps first invokes its local scheduler and checks feasibility • if this succeeds, all the bids for the task are ignored; if it fails, ps evaluates the bids, sends the task to the node responding with the highest bid, and sends this information to pi • if there is no focused node, pi handles the bidding itself • if ps cannot guarantee the task and no good bid is available, corrective actions follow

  47. [Figure: the original node pi sends the task to the focused node ps and, in parallel, bidding messages across the network]

  48. The Buddy Strategy • Like FAB in the sense that an overloaded processor tries to offload some tasks to a lightly loaded processor. • However, it differs in how it finds the lightly loaded processors: • each processor has three thresholds of loading: U: under (TU), F: full (TF), and V: over (TV) • when a processor makes a transition from the F or V state to the U state, it broadcasts an announcement to this effect; the broadcast goes not to all processors but to a subset, known as its buddy set.

  49. 3.5 Fault-Tolerant Scheduling • Introduction • addresses the case of hardware failure • systems must have sufficient reserve capacity and a sufficiently fast failure-response mechanism • multiple processors run a set of periodic tasks • multiple copies of each version of a task may be executed in parallel • the approach taken: ghost copies of tasks • ghosts are embedded into the schedule • they need not be identical to the primary copies • the tasks concerned are those that were to have been run by the failing processor

  50. Fault-tolerant schedule • should be able to run one or more copies of each version (or iteration) of a task despite the failure of up to nsust processors • each processor in a fault-tolerant schedule • has a ghost schedule plus one or more primary schedules • makes room for ghosts by shifting primary copies • a ghost schedule and a primary schedule form a feasible pair if both schedules can be merged/adjusted so that the result is feasible
