
Real-Time Systems, COSC-4301-01, Lecture 3


Presentation Transcript


  1. Real-Time Systems, COSC-4301-01, Lecture 3 Stefan Andrei COSC-4301-01, Lecture 3

  2. Reminder of Last Lecture • Real-Time Scheduling and schedulability analysis • Schedulability test • Schedulable utilization • Optimal scheduler • Determining computation time • Uniprocessor scheduling • Scheduling preemptable and independent tasks • Fixed-priority schedulers: RM, DM • Dynamic-priority schedulers: EDF, LL COSC-4301-01, Lecture 3

  3. Overview of this lecture • Sporadic tasks • Scheduling nonpreemptable tasks • Scheduling nonpreemptable sporadic tasks • Scheduling nonpreemptable tasks with precedence constraints • Communicating periodic tasks: deterministic rendezvous model • Chapter 3 of [Cheng; 2002], Sections 3.2.1, 3.2.2, 3.2.3, 3.2.4, 3.2.5 COSC-4301-01, Lecture 3

  4. Sporadic tasks • Sporadic tasks may be released at any time instant, but a minimum separation exists between releases of consecutive instances of the same sporadic task. • Can be modeled by periodic tasks with periods equal to the minimum separation. • In this way, we can re-use the previous scheduling algorithms. COSC-4301-01, Lecture 3

  5. Sporadic tasks (comparison with periodic tasks) • Unlike periodic tasks, sporadic tasks are released irregularly or may not be released at all. • So, even though the scheduler allocates a time slice to the periodic equivalent of a sporadic task, this sporadic task may not actually be released. • When this sporadic task does request service, it runs immediately if its release time falls within its corresponding scheduled time slice. • Otherwise, it waits for the next time slice scheduled for its periodic equivalent. COSC-4301-01, Lecture 3

  6. Example 1 • Two periodic tasks J1 and J2 arrive at time 0, and one sporadic task J3 has a minimum separation time of 60. • J1: c1 = 10, p1 = 50 • J2: c2 = 15, p2 = 80 • J3: c3 = 20, p3 = 60 • An RM schedule is given below: • Figure 3.9 from [Cheng; 2005], page 56 COSC-4301-01, Lecture 3

  7. The second approach • Treat sporadic tasks as one periodic task JS with the highest priority and a period chosen to accommodate the minimum separations and computation requirements of this collection of sporadic tasks. • Any sporadic task may run within the time slices assigned to JS while the other periodic tasks run outside of these time slices. • Example: JS: cS = 20, pS = 60 • Figure 3.10 from [Cheng; 2005], page 57 COSC-4301-01, Lecture 3

  8. The third approach • Deferred server (DS) [Lehoczky, Sha, Strosnider; 1987] • DS is similar to the second approach, with the following modification: • The periodic task corresponding to the collection of sporadic tasks is the deferred server. • When no sporadic task waits for service during a time slice assigned to sporadic tasks, the processor runs the other (periodic) tasks. • If a sporadic task is released, then the processor preempts the currently running task and runs the sporadic task for a time interval up to the total time slice assigned to sporadic tasks. COSC-4301-01, Lecture 3

  9. Deferred server technique. Example • JS: cS = 20, pS = 60 • The deferred server scheduler allocates 20 time units to sporadic tasks every 60 time units. • Suppose a sporadic task J1 arrives with c1 = 30 at time 20. • Since only 20 time units are available in each period of 60 time units, it is scheduled to run from time 20 and preempted at time 40 (the periodic tasks may run in the remaining 20 units of the period). • Then at time 60 (the start of the second period of 60 time units), J1 is scheduled to run for the remaining 10 time units. COSC-4301-01, Lecture 3

  10. Deferred server technique. Example (cont) • Suppose a sporadic task J2 arrives with c2 = 50 at time 100. In the period 60-120, only 20 time units are reserved for sporadic tasks, and J1 already used 10 of them, so only 10 time units are still available. • J2 is therefore immediately scheduled to run and is preempted at time 110 (other periodic tasks may run). • At time 120, J2 runs for 20 time units, and at time 180, J2 runs for the remaining 20 time units. • Figure 3.11 from [Cheng; 2005], page 57 COSC-4301-01, Lecture 3
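
To make the budget accounting in this example concrete, here is a minimal Python sketch of the deferred-server bookkeeping. It tracks only the server budget (cS = 20, replenished every pS = 60) and ignores interference from the periodic tasks, which is consistent with the timeline above; the function and variable names are my own, not from the slides.

from collections import deque

def deferred_server_timeline(arrivals, c_s=20, p_s=60, horizon=240):
    """arrivals: list of (release_time, computation_time) sporadic jobs.
    Returns the unit time slots (time, job_index) in which sporadic work runs.
    Only the server budget is modelled; the periodic tasks are assumed to use
    the processor whenever the server is not running."""
    pending = deque()                        # [job_index, remaining_computation]
    slots, budget = [], 0
    for t in range(horizon):
        if t % p_s == 0:                     # replenish the budget at each server period
            budget = c_s
        for j, (r, c) in enumerate(arrivals):
            if r == t:                       # admit newly released sporadic jobs
                pending.append([j, c])
        if pending and budget > 0:           # serve one unit of sporadic work
            slots.append((t, pending[0][0]))
            budget -= 1
            pending[0][1] -= 1
            if pending[0][1] == 0:
                pending.popleft()
    return slots

# J1 (c = 30) released at time 20 and J2 (c = 50) released at time 100, as above;
# the resulting slots cover 20-40, 60-70, 100-110, 120-140 and 180-200.
slots = deferred_server_timeline([(20, 30), (100, 50)])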

  11. Analytical approach to sporadic tasks • For a deferred server with an arbitrary priority in a system of tasks scheduled using the RM algorithm, no schedulable utilization is known that guarantees a feasible schedule of the system. • However, for the special case in which the DS has the shortest period among all tasks (that is, the DS has the highest priority), a schedulable utilization exists: Schedulability Test 7 [Lehoczky, Sha, Strosnider; 1987], [Strosnider, Lehoczky; 1995] COSC-4301-01, Lecture 3

  12. Schedulability test 7 • Let pS and cS be the period and allocated time for the DS. • Let US = cS/pS be the utilization of the server. • A set of n independent, preemptable, periodic tasks with relative deadlines equal to the corresponding periods, on a uniprocessor, such that pS < p1 < … < pn < 2pS and pn > pS + cS, is RM-schedulable if the total utilization of this task set (including US) is at most U(n) = (n-1) [((US+2)/(US+1))^(1/(n-1)) - 1]. COSC-4301-01, Lecture 3

  13. Schedulability test 7. Example 1 • J1: c1 = 10, p1 = 50 • J2: c2 = 15, p2 = 70 • JS: cS = 10, pS = 40 • US = cS/pS = 0.25 • pS < p1 < … < pn < 2pS holds (n = 2). • pn > pS + cS holds. • U(n) = (n-1) [((US+2)/(US+1))^(1/(n-1)) - 1] = 0.80 • The total utilization of this task set is U = 0.664, which is less than U(n). • According to Schedulability test 7, the above task set is RM-schedulable. COSC-4301-01, Lecture 3
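
A minimal Python sketch of Schedulability Test 7, using the U(n) expression exactly as stated on the previous slide and checked against the numbers of Example 1; the function name and data layout are assumptions of this sketch.

def ds_rm_schedulable(periodic, c_s, p_s):
    """periodic: list of (c_i, p_i) pairs for n >= 2 periodic tasks whose relative
    deadlines equal their periods; (c_s, p_s) is the deferred server.
    Returns True if Schedulability Test 7, as stated on the slide, guarantees
    RM-schedulability."""
    n = len(periodic)
    u_s = c_s / p_s
    periods = sorted(p for _, p in periodic)
    # Conditions on the periods: p_s < p_1 < ... < p_n < 2*p_s and p_n > p_s + c_s.
    increasing = all(a < b for a, b in zip([p_s] + periods, periods + [2 * p_s]))
    if not (increasing and periods[-1] > p_s + c_s):
        return False
    total_u = u_s + sum(c / p for c, p in periodic)
    bound = (n - 1) * (((u_s + 2) / (u_s + 1)) ** (1 / (n - 1)) - 1)
    return total_u <= bound

# Example 1: J1 = (10, 50), J2 = (15, 70), server JS with cS = 10, pS = 40.
# Total utilization 0.664 <= U(2) = 0.80, so the test reports True.
print(ds_rm_schedulable([(10, 50), (15, 70)], c_s=10, p_s=40))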

  14. Schedulability test 7. Example 2 • Let us consider the following preemptable task set (JS is a sporadic task): • J1: S1 = 0, c1 = 1, p1 = d1 = 5 • J2: S2 = 0, c2 = 2, p2 = d2 = 7 • JS: SS = 0, cS = 1, pS = dS = 4 • compute the utilization rate. • check the applicability of RM-scheduling method based on schedulability test 7. • run the RM-scheduler for the above task set. COSC-4301-01, Lecture 3

  15. Schedulability test 7. Example 2 • U = 0.735… • According to schedulability test 7, the following conditions should hold: • pS < p1 < p2 < … < pn < 2pS translates to 4 < 5 < 7 < 8 (TRUE) • pn > pS + cS translates to 7 > 4 + 1 (TRUE) • Also, U(n) = (n-1) [((US+2)/(US+1))^(1/(n-1)) - 1], where US = 1/4 = 0.25. • Therefore U(n) = 0.80, so U < U(n) (TRUE) • Hence, by schedulability test 7, the above task set is RM-schedulable. COSC-4301-01, Lecture 3

  16. Schedulability test 7. Example 2 • The RM schedule is: (Gantt chart of JS, J1, and J2 over the interval [0, 35]) COSC-4301-01, Lecture 3
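
Because the Gantt chart itself is not reproduced here, the schedule can be regenerated with a small preemptive rate-monotonic simulator. This is a sketch with made-up function names; the sporadic task JS is treated as a periodic task with period 4, exactly as in the example.

def rm_timeline(tasks, horizon):
    """tasks: dict name -> (c, p); a shorter period means a higher priority.
    Returns a list of length `horizon` giving the task that runs in each unit
    slot (None when the processor idles).  Deadlines are assumed equal to periods."""
    order = sorted(tasks, key=lambda name: tasks[name][1])    # RM priority order
    remaining = {name: 0 for name in tasks}
    timeline = []
    for t in range(horizon):
        for name, (c, p) in tasks.items():
            if t % p == 0:                  # a new instance is released
                remaining[name] = c
        running = next((name for name in order if remaining[name] > 0), None)
        if running is not None:
            remaining[running] -= 1
        timeline.append(running)
    return timeline

# Example 2: J1 = (1, 5), J2 = (2, 7), and JS modelled as the periodic task (1, 4).
print(rm_timeline({"JS": (1, 4), "J1": (1, 5), "J2": (2, 7)}, horizon=35))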

  17. Scheduling non-preemptable tasks • So far we have assumed that tasks can be preempted at any integer time instant. • In practice, tasks may contain critical sections that cannot be interrupted. • Examples of critical sections: • Access and modify shared variables; • Use shared resources (e.g., disks, printer). • An important goal is to reduce task waiting time and context-switching time – [Lee, Cheng; 1994]. COSC-4301-01, Lecture 3

  18. Scheduling non-preemptable tasks • Using fixed-priority schedulers for non-preemptable tasks may lead to the priority inversion problem [Sha, Rajkumar, Lehoczky; 1990], which occurs when a low-priority task with a critical section blocks a higher-priority task for an unbounded or long period of time. • The EDF and LL algorithms are no longer optimal if the tasks are not preemptable. • However, if all the tasks start at time 0, then the EDF technique is still optimal. • So, when start times are not all 0, an EDF algorithm may fail to meet a deadline of a task set S even if another scheduler can produce a feasible schedule for S. COSC-4301-01, Lecture 3

  19. Optimality for non-preemptable tasks • Without preemption, we cannot transform a non-EDF schedule into an EDF schedule by interchanging computation blocks of different tasks. • No priority-based scheduling algorithm is optimal for non-preemptable tasks with arbitrary start times, computation times, and deadlines, even on a uniprocessor [Mok; 1984]. COSC-4301-01, Lecture 3

  20. Scheduling non-preemptive task sets • When preemption is not allowed, the scheduling problem is known to be NP-complete. • However, if only non-idling schedulers are considered (the processor is never kept idle while a task is ready), the problem is again tractable: • (Theorem 3.4, page 30 of [Stankovic, Spuri, Ramamritham, Buttazzo; 1998]) Non-preemptive non-idling EDF is optimal. COSC-4301-01, Lecture 3
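
As a companion to Theorem 3.4, here is a minimal non-preemptive, non-idling EDF sketch for single-instance tasks that are all ready at time 0; the function name and the example task set at the end are made up for illustration.

def nonpreemptive_edf(tasks):
    """tasks: list of (name, c, d) tasks, all released at time 0.
    Runs them back to back in earliest-deadline-first order (non-idling, no
    preemption) and returns (schedule, feasible), where schedule is a list of
    (name, start, finish) triples."""
    schedule, t, feasible = [], 0, True
    for name, c, d in sorted(tasks, key=lambda task: task[2]):
        start, finish = t, t + c            # run to completion, no idling in between
        schedule.append((name, start, finish))
        if finish > d:
            feasible = False
        t = finish
    return schedule, feasible

# A made-up task set: A = (c=2, d=4), B = (c=3, d=9), C = (c=1, d=10).
print(nonpreemptive_edf([("A", 2, 4), ("B", 3, 9), ("C", 1, 10)]))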

  21. Scheduling non-preemptable sporadic tasks • We transform the sporadic tasks into equivalent periodic tasks. • A sporadic task (c, d, p) can be transformed into and scheduled as a periodic task (c’, d’, p’) if the following conditions hold: • d ≥ d’ ≥ c • c’ = c • p’ ≤ d – d’ + 1 • A proof can be found in [Mok; 1984]. COSC-4301-01, Lecture 3

  22. Schedulability test 8 [Mok; 1984] • Let M = MP ∪ MS be the set of all non-preemptable tasks, where MP is the set of periodic tasks and MS is the set of sporadic tasks. • Let the nominal (initial) laxity li of task Ti be di - ci. • Each sporadic task Ti = (ci, di, pi) is replaced by an equivalent periodic task T’i = (c’i, d’i, p’i) as follows: • c’i = ci, d’i = ci, p’i = min(pi, li+1). • If we can find a feasible schedule for the resulting set M’ of periodic tasks, we can schedule M without knowing in advance the start (release or request) times of the sporadic tasks in MS. COSC-4301-01, Lecture 3

  23. Schedulability test 8. Example • Let us consider the following non-preemptable task set (JS is a sporadic task): • J1: S1 = 0, c1 = 2, p1 = d1 = 5 • J2: S2 = 0, c2 = 2, p2 = d2 = 8 • JS: SS = 0, cS = 2, pS = dS = 8 • compute the utilization rate. • check the applicability of RM-scheduling method based on schedulability test 8. COSC-4301-01, Lecture 3

  24. Schedulability test 8. Example • U = 0.9 • The laxity: lS = dS - cS = 8 – 2 = 6. • Now, replacing the sporadic task JS = (cS, dS, pS) with the equivalent periodic task JS’ = (cS’, dS’, pS’), where cS’ = cS = 2, dS’ = cS = 2, pS’ = min(pS, lS + 1) = min(8, 6+1) = 7. • So, the sporadic task expressed as an equivalent periodic task is JS’: SS’ = 0, cS’ = 2, dS’ = 2, pS’ = 7. COSC-4301-01, Lecture 3

  25. Schedulability test 8. Example • Let us check the conditions for schedulability test 8. • d ≥ d’ ≥ c means 8 ≥ 2 ≥ 2 (TRUE) • c’ = c means 2 = 2 (TRUE) • p’ ≤ d - d’ + 1 means 7 ≤ 8 - 2 + 1 (TRUE) • Since all the conditions hold, schedulability test 8 can be applied to the above task set. COSC-4301-01, Lecture 3
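
A minimal Python sketch of the sporadic-to-periodic transformation used by Schedulability Test 8, applied to JS from this example; the function name is my own.

def sporadic_to_periodic(c, d, p):
    """Replace a sporadic task (c, d, p) by the equivalent periodic task
    (c', d', p') of Schedulability Test 8: c' = c, d' = c, p' = min(p, l + 1),
    where l = d - c is the nominal laxity."""
    laxity = d - c
    c2, d2, p2 = c, c, min(p, laxity + 1)
    # Conditions of the transformation: d >= d' >= c, c' = c, p' <= d - d' + 1.
    assert d >= d2 >= c and c2 == c and p2 <= d - d2 + 1
    return c2, d2, p2

# JS = (c=2, d=8, p=8) from the example above.
print(sporadic_to_periodic(2, 8, 8))    # expected: (2, 2, 7)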

  26. Scheduling non-preemptable tasks with precedence constraints • We introduce precedence and mutual exclusion (nonpreemption) constraints to the scheduling problem for single-instance tasks (tasks that are neither periodic nor sporadic) on a uniprocessor. • A task precedence graph shows the required order of execution of a set of tasks. • A node represents a task (e.g., Ti) and directed edges (e.g., Ti → Tj) indicate the precedence relationships between tasks. • An edge Ti → Tj means that Ti must complete before Tj can start. • For a task Ti, • incoming edges from predecessor tasks indicate all these predecessor tasks have to complete execution before Ti can start execution; • outgoing edges to successor tasks indicate that Ti must finish execution before the successor tasks can start execution. COSC-4301-01, Lecture 3

  27. Scheduling algorithm A for tasks with precedence constraints • A topological ordering of the tasks in a precedence graph shows one allowable execution order of these tasks. • Suppose we have a set of n one-instance tasks with deadlines, all ready at time 0 and with precedence constraints described by a precedence graph. • Algorithm A: • Sort the tasks in the precedence graph in topological order (so tasks with no in-edges come first). If two or more tasks can be listed next, select the one with the earliest deadline; ties are broken arbitrarily. • Execute the tasks one at a time following this topological order. COSC-4301-01, Lecture 3

  28. Algorithm A. Example 1 • Consider the set of tasks {T1, T2, T3, T4, T5, T6} with the following: • precedence constraints: T1 → T2, T1 → T3, T2 → T4, T2 → T6, T3 → T5, T3 → T4, T4 → T6 • Computation times and deadlines: • T1: c1 = 2, d1 = 4, • T2: c2 = 3, d2 = 7, • T3: c3 = 2, d3 = 10, • T4: c4 = 8, d4 = 18, • T5: c5 = 6, d5 = 24, • T6: c6 = 4, d6 = 28. COSC-4301-01, Lecture 3

  29. Algorithm A. Example 1 (cont) • The precedence graph is: • Figure 3.14 from [Cheng; 2005], page 60 • The EDF-schedule for tasks with precedence constraints is: • Figure 3.16 from [Cheng; 2005], page 61 COSC-4301-01, Lecture 3
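
A minimal Python sketch of Algorithm A (topological order with earliest-deadline tie-breaking), run on the Example 1 task set above; the data layout and function name are my own, and the figures themselves are not reproduced.

def algorithm_a(tasks, edges):
    """tasks: dict name -> (c, d), all tasks ready at time 0.
    edges: list of (u, v) pairs meaning u must complete before v starts.
    Returns the execution order chosen by Algorithm A and the finish times."""
    preds = {name: set() for name in tasks}
    for u, v in edges:
        preds[v].add(u)
    order, finish, done, t = [], {}, set(), 0
    while len(order) < len(tasks):
        ready = [name for name in tasks if name not in done and preds[name] <= done]
        nxt = min(ready, key=lambda name: tasks[name][1])   # earliest deadline among ready tasks
        t += tasks[nxt][0]
        order.append(nxt)
        finish[nxt] = t
        done.add(nxt)
    return order, finish

tasks = {"T1": (2, 4), "T2": (3, 7), "T3": (2, 10),
         "T4": (8, 18), "T5": (6, 24), "T6": (4, 28)}
edges = [("T1", "T2"), ("T1", "T3"), ("T2", "T4"), ("T2", "T6"),
         ("T3", "T5"), ("T3", "T4"), ("T4", "T6")]
order, finish = algorithm_a(tasks, edges)
# With this data the order is T1, T2, T3, T4, T5, T6 and every task meets its deadline.
print(order, finish)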

  30. Algorithm B • Algorithm B is a variation of Algorithm A that: • considers the task with the latest deadline first; • shifts the entire schedule toward time 0. • Algorithm B: • Sort the tasks according to their deadlines in non-decreasing order and label the tasks such that d1 ≤ d2 ≤ … ≤ dn. • Schedule task Tn in the time interval [dn – cn, dn]. • While there is a task to be scheduled do: • Suppose S is the set of all unscheduled tasks whose successors have all been scheduled. Schedule, as late as possible, the task with the latest deadline in S. • Shift the tasks toward time 0 while maintaining the execution order indicated in step 3. COSC-4301-01, Lecture 3

  31. Refinement of Algorithm B • Given a task set T = {T1, …, Tn}, let us denote by seti the “starting execution time” for Ti: task Ti executes in the time interval [seti, seti + ci], where ci is its computation time; • Sort the tasks according to their deadlines in non-decreasing order and label the tasks such that d1 ≤ d2 ≤ … ≤ dn. • setn = dn – cn; // Schedule Tn in the time interval [dn – cn, dn] • for (i = n – 1; i > 0; i--) seti = min(di – ci, seti+1 – ci); • shift = set1; • for (i = 1; i <= n; i++) { • old_seti = seti; • seti = seti – shift; // Schedule Ti in the time interval [seti, seti + ci] • if (i < n) shift = shift + seti+1 – (old_seti + ci); • } COSC-4301-01, Lecture 3
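
The refinement above translates almost directly into Python; this is a sketch (indices start at 0 here), assuming, as the slide does, that the non-decreasing deadline order is consistent with the precedence constraints.

def algorithm_b_starts(tasks):
    """tasks: list of (c_i, d_i) pairs already sorted so that d_1 <= ... <= d_n.
    Returns the starting execution times set_i produced by the backward pass
    and the shift toward time 0, following the refinement of Algorithm B."""
    n = len(tasks)
    set_ = [0] * n
    set_[n - 1] = tasks[n - 1][1] - tasks[n - 1][0]   # schedule T_n in [d_n - c_n, d_n]
    for i in range(n - 2, -1, -1):                    # backward pass
        c, d = tasks[i]
        set_[i] = min(d - c, set_[i + 1] - c)
    shift = set_[0]
    for i in range(n):                                # shift the schedule toward time 0
        old = set_[i]
        set_[i] -= shift
        if i < n - 1:
            shift += set_[i + 1] - (old + tasks[i][0])
    return set_

# Example 1 task set, sorted by deadline; the computed starts are [0, 2, 5, 7, 15, 21].
print(algorithm_b_starts([(2, 4), (3, 7), (2, 10), (8, 18), (6, 24), (4, 28)]))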

  32. Algorithm B. Example 1 • We reconsider the same example (Example 1, slide 28), so the first three steps of Algorithm B give: • Figure 3.15 from [Cheng; 2005], page 61 • The schedule for tasks with precedence constraints after shifting the tasks is: • Figure 3.16 from [Cheng; 2005], page 61 COSC-4301-01, Lecture 3

  33. Algorithms A, B. Example 2 • Let us consider the set of non-preemptable tasks T = {J1, J2, J3, J4} with the following: • precedence constraints: J1 → J2, J1 → J3, J2 → J4, J3 → J4 • computation times and deadlines: • J1: c1 = 2, d1 = 5, • J2: c2 = 3, d2 = 7, • J3: c3 = 2, d3 = 10, • J4: c4 = 8, d4 = 18, • compute the task precedence graph. • investigate the applicability of Algorithms A and B to T. COSC-4301-01, Lecture 3

  34. Algorithms A, B. Example 2 • The task precedence graph is: COSC-4301-01, Lecture 3

  35. Algorithm A. Example 2 • The EDF schedule for this task set with the precedence constraints: COSC-4301-01, Lecture 3

  36. Algorithm B. Example 2 • According to Algorithm B, the tasks are sorted by deadline in non-decreasing order and labeled such that d1 ≤ d2 ≤ … ≤ dn, and task Tn is scheduled in the interval [dn - cn, dn]. • Running the refinement of Algorithm B, we get: • set4 = 10, set3 = 8, set2 = 4, set1 = 2 • shift = 2, set1 = 0, • shift = 2, set2 = 2, • shift = 3, set3 = 5, • shift = 3, set4 = 7. COSC-4301-01, Lecture 3

  37. Communicating periodic tasks: deterministic rendezvous model • Allowing tasks to communicate with each other complicates the scheduling problem. • Interprocess communication leads to precedence constraints not only between tasks but also between blocks within these tasks. • Example: • The Ada programming language provides a rendezvous mechanism (modeled here as a rendezvous() primitive) that allows one task to communicate with another at a specific point during the task execution. • Ada is used in the implementation of a variety of embedded and real-time systems, including airplane avionics. COSC-4301-01, Lecture 3

  38. Deterministic rendezvous model • If a task A wants to communicate with task B, then task A executes rendezvous(B). Task A then waits until task B executes a corresponding rendezvous(A). • As a result, this pair of rendezvous() calls imposes a precedence constraint between the computations of tasks A and B by requiring that all the computations prior to the rendezvous() primitive in each task be completed before the computations following the rendezvous() primitive in the other task can start. • To simplify the scheduling strategy, we assume that the execution time of a rendezvous() primitive is zero or that its execution time is included in the preceding computation block. COSC-4301-01, Lecture 3

  39. Deterministic rendezvous model (cont) • A one-instance task can rendezvous with another one-instance task. • However, it is semantically incorrect to allow a periodic task and a sporadic task to rendezvous with each other since the sporadic task may not run at all, causing the periodic task to wait forever for the matching rendezvous. • Two periodic tasks may rendezvous with each other, but there are constraints on the lengths of their periods to ensure correctness. COSC-4301-01, Lecture 3

  40. Deterministic rendezvous model (cont) • Two tasks are compatible if the period of one is an integer multiple of the period of the other. • To allow two (periodic) tasks to communicate in any form, they must be compatible. • One attempt to schedule compatible and communicating tasks is to use the EDF scheduler to execute the ready task with the nearest deadline that is not blocked due to a rendezvous. • The solution [Mok; 1983] to this scheduling problem starts by building a database for the runtime scheduler so that the EDF algorithm can be used with dynamically assigned task deadlines. COSC-4301-01, Lecture 3

  41. Deterministic rendezvous model (cont) • Let L be the longest period. • Since the communicating tasks are compatible, L is the same as the LCM (Least Common Multiple) of these tasks’ periods. • We denote a chain of scheduling blocks generated in chronological order for task Ti in the interval (0, L) by Ti(1), Ti(2), …, Ti(mi). COSC-4301-01, Lecture 3
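
For compatible tasks the value of L can be computed directly as the least common multiple of the periods; a small sketch (the function name is my own):

import math
from functools import reduce

def hyperperiod(periods):
    """L: the least common multiple of the task periods (for pairwise compatible
    tasks this equals the longest period)."""
    return reduce(math.lcm, periods)

# Periods 12, 6, 12 of T1, T2, T3 in the example on the next slides give L = 12.
print(hyperperiod([12, 6, 12]))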

  42. Deterministic rendezvous model (cont) • If there is a rendezvous constraint with Ti targeting Tj between Ti(k) and Ti(k + 1), and between Tj(l) and Tj(l + 1), • then the following precedence relations are specified: Ti(k) → Tj(l + 1), Tj(l) → Ti(k + 1). • Within each task, the precedence constraints between blocks are: Ti(1) → Ti(2) → … → Ti(mi). COSC-4301-01, Lecture 3

  43. Deterministic rendezvous model. Example • T1: c1 = 1, d1 = p1 = 12. • T2: c2, 1 = 1, c2, 2 = 2, d2 = 5, p2 = 6. • T3: c3, 1 = 2, c3, 2 = 3, d3 = p3 = 12. • T2 must rendezvous with T3 after the first scheduling block. • T3 must rendezvous with T2 after the first and second scheduling blocks. COSC-4301-01, Lecture 3

  44. Deterministic rendezvous model. Example’s solution • L = 12 • The precedence constraints between blocks are: • T2(1) → T2(2) → T2(3) → T2(4) • T3(1) → T3(2) • Rendezvous constraints with T2(1) targeting T3(1): T2(1) → T3(2), T3(1) → T2(2) • Rendezvous constraints with T2(2) targeting T3(2): T2(2) → T3(3), T3(2) → T2(3) COSC-4301-01, Lecture 3
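
A small sketch that generates these block-level precedence constraints from the chain lengths and the rendezvous positions; the data layout is my own, and the edge pairs it produces are exactly the ones listed above.

def block_precedences(chain_lengths, rendezvous):
    """chain_lengths: dict task -> number of scheduling blocks in (0, L).
    rendezvous: list of (Ti, k, Tj, l) tuples meaning Ti rendezvouses with Tj
    between blocks Ti(k), Ti(k+1) and between Tj(l), Tj(l+1).
    Returns the precedence edges between blocks, each block written as (task, index)."""
    edges = []
    for task, m in chain_lengths.items():       # chain constraints Ti(1) -> ... -> Ti(mi)
        edges += [((task, b), (task, b + 1)) for b in range(1, m)]
    for ti, k, tj, l in rendezvous:             # rendezvous constraints
        edges.append(((ti, k), (tj, l + 1)))
        edges.append(((tj, l), (ti, k + 1)))
    return edges

# T1 has 1 block, T2 has 4 blocks and T3 has 2 blocks in (0, 12); rendezvous as on slide 43.
print(block_precedences({"T1": 1, "T2": 4, "T3": 2},
                        [("T2", 1, "T3", 1), ("T2", 2, "T3", 2)]))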

  45. Deterministic rendezvous model. Example’s solution • The EDF scheduler will provide the schedule (the first and second occurrences of T2,1 stand for T2(1) and T2(3), respectively, etc.): • Figure 3.18 from [Cheng; 2005], page 63 • The solution is infeasible because T2(2) should complete within 5 time units (d2 = 5), and T2(4) should complete within 5 + 6 = 11 time units. COSC-4301-01, Lecture 3

  46. Deterministic rendezvous model (cont) • Algorithm for revising the deadlines to fix the previous infeasibility: • Sort the scheduling blocks in [0, L] in reverse topological order, so that the block with the latest deadline appears first. • Initialize the deadline of the k-th instance of Ti,j with (k-1)pi + di. • Let S and S’ be scheduling blocks. Then revise dS = min(dS, min{dS’ - cS’ : S → S’}). • Use the EDF scheduler to schedule the blocks with the revised deadlines. COSC-4301-01, Lecture 3

  47. Deterministic rendezvous model. Example’s second solution • We consider the previous example. • The initial deadlines for T1, T3,1, T3,2, T2,3, T2,4, T2,1, T2,2 are 12, 12, 12, 11, 11, 5, and 5, respectively (listed in the order in which they are revised). • Since T3(1) → T2(2) is a rendezvous constraint, the deadline of T3,1 is revised to 5 - 2 = 3. • Similarly, since T3(2) → T2(3) is a rendezvous constraint, the deadline of T3,2 is revised to 11 - 1 = 10. COSC-4301-01, Lecture 3
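
A sketch of the revision pass, processing the blocks in the order listed above and revising each block's deadline from its successors' current deadlines; the data layout and names are my own, and the vacuous T2(2) → T3(3) pair is omitted because T3 has only two blocks in (0, L).

def revise_deadlines(initial, c, edges, order):
    """initial: dict block -> initial deadline (k-1)*p_i + d_i;
    c: dict block -> computation time; edges: list of (S, S') precedence pairs;
    order: the order in which the blocks are processed.
    Returns the revised deadlines d_S = min(d_S, min{d_S' - c_S' : S -> S'})."""
    d = dict(initial)
    succs = {b: [s2 for s1, s2 in edges if s1 == b] for b in d}
    for b in order:
        for s in succs[b]:
            d[b] = min(d[b], d[s] - c[s])
    return d

c = {"T1": 1, "T2_1": 1, "T2_2": 2, "T2_3": 1, "T2_4": 2, "T3_1": 2, "T3_2": 3}
initial = {"T1": 12, "T2_1": 5, "T2_2": 5, "T2_3": 11, "T2_4": 11, "T3_1": 12, "T3_2": 12}
edges = [("T2_1", "T2_2"), ("T2_2", "T2_3"), ("T2_3", "T2_4"), ("T3_1", "T3_2"),
         ("T2_1", "T3_2"), ("T3_1", "T2_2"), ("T3_2", "T2_3")]
order = ["T1", "T3_1", "T3_2", "T2_3", "T2_4", "T2_1", "T2_2"]
# Revised deadlines: T1 12, T3_1 3, T3_2 10, T2_3 9, T2_4 11, T2_1 3, T2_2 5 (next slide).
print(revise_deadlines(initial, c, edges, order))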

  48. Deterministic rendezvous model. Example’s second solution • The revised deadlines for T1, T3,1, T3,2, T2,3, T2,4, T2,1, T2,2 are now 12, 3, 10, 9, 11, 3, and 5, respectively. • Using the EDF scheduler again, we get the following feasible schedule: • Figure 3.19 from [Cheng; 2005], page 64 COSC-4301-01, Lecture 3

  49. Summary • Sporadic tasks • Scheduling nonpreemptable tasks • Scheduling nonpreemptable sporadic tasks • Scheduling nonpreemptable tasks with precedence constraints • Communicating periodic tasks: deterministic rendezvous model COSC-4301-01, Lecture 3

  50. Reading suggestions • Chapter 3 of [Cheng; 2002], Sections 3.2.1, 3.2.2, 3.2.3, 3.2.4, 3.2.5 • Chapters 3, 10 and 11 of [Kopetz; 1997] • Chapter 2 of [Stankovic, Spuri, Ramamritham, Buttazzo; 1998] COSC-4301-01, Lecture 3
