
RTOS Scheduling – I : Rate-Monotonic Theory


Presentation Transcript


  1. RTOS Scheduling – I: Rate-Monotonic Theory. EE202A (Fall 2001): Lecture #4

  2. Reading List for This Lecture • Required • Balarin, F.; Lavagno, L.; Murthy, P.; Sangiovanni-Vincentelli, A. Scheduling for embedded real-time systems. IEEE Design & Test of Computers, vol.15, no.1, Jan.-March 1998, p.71-82. http://ielimg.ihs.com/iel3/54/14269/00655185.pdf • Sha, L.; Rajkumar, R.; Sathaye, S.S. Generalized rate-monotonic scheduling theory: a framework for developing real-time systems. Proceedings of the IEEE, vol.82, no.1, Jan. 1994, p.68-82. http://ielimg.ihs.com/iel1/5/6554/00259427.pdf • Recommended • Sha, L.; Rajkumar, R.; Lehoczky, J.P. Priority inheritance protocols: an approach to real-time synchronization. IEEE Transactions on Computers, vol.39, no.9, Sept. 1990, p.1175-85. http://ielimg.ihs.com/iel1/12/2066/00057058.pdf • Others • Liu, C.L.; Layland, J.W. Scheduling algorithms for multiprogramming in a hard-real-time environment. Journal of the Association for Computing Machinery, vol.20, no.1, Jan. 1973, p.46-61. http://nesl.ee.ucla.edu/pw/ee202a/Liu73.pdf • Lehoczky, J.; Sha, L.; Ding, Y. The rate monotonic scheduling algorithm: exact characterization and average case behavior. Proceedings of the Real Time Systems Symposium, Santa Monica, CA, USA, Dec. 1989, p.166-71. Los Alamitos, CA: IEEE Computer Society Press. http://ielimg.ihs.com/iel2/268/2318/00063567.pdf

  3. Computation & Timing Model of the System • Requests for tasks for which hard deadlines exist are periodic, with constant inter-request intervals • Deadlines consist of runnability constraints only • each task must finish before the next request for it • eliminates need for buffering to queue tasks • Tasks are independent • requests for a certain task do not depend on the initiation or completion of requests for other tasks • however, their periods may be related • The run-time for each task is constant for that task and does not vary with time • can be interpreted as the maximum running time

  4. Characterizing the Task Set • Set of n independent tasks τ1, τ2, …, τn • Request periods are T1, T2, ..., Tn • request rate of τi is 1/Ti • Run-times are C1, C2, ..., Cn
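
For concreteness, one minimal way to represent this task model in code (a sketch added here, not part of the lecture; the class name Task and the field names period and wcet are arbitrary choices):

```python
# Minimal representation of the periodic task model on this slide (illustrative only).
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    name: str      # e.g. "tau1"
    period: float  # Ti: constant inter-request interval (also the deadline here)
    wcet: float    # Ci: constant (worst-case) run-time

    @property
    def rate(self) -> float:
        """Request rate 1/Ti."""
        return 1.0 / self.period

# Example task set reused from later slides: T1=100, C1=20 and T2=145, C2=30.
taskset = [Task("tau1", 100, 20), Task("tau2", 145, 30)]
```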

  5. Scheduling Algorithm • Set of rules to determine the task to be executed at a particular moment • One possibility: preemptive & priority driven • tasks are assigned priorities • statically or dynamically • at any instant, the highest priority task is run • whenever there is a request for a task that is of higher priority than the one currently being executed, the running task is interrupted, and the newly requested task is started • Therefore, scheduling algorithm == method to assign priorities

  6. Assigning Priorities to Tasks • Static or fixed approach • priorities are assigned to tasks once and for all • Dynamic approach • priorities of tasks may change from request to request • Mixed approach • some tasks have fixed priorities, others don't

  7. Deriving Optimum Priority Assignment Rule

  8. Critical Instant for a Task • Deadline for a task = time of the next request for it • Overflow is said to occur at time t if t is the deadline of an unfulfilled request • A scheduling algorithm is feasible if tasks can be scheduled so that no overflow ever occurs • The response time of a request of a certain task is the time span between the request and the end of the response to that request

  9. Critical Instant for a Task (contd.) • Critical instant for a task = instant at which a request for that task will have the maximum response time • Critical time zone of a task = time interval between a critical instant & the end of the response to the corresponding request of the task

  10. When does Critical Instant occur for a task? • Theorem 1: A critical instant for any task occurs whenever the task is requested simultaneously with requests of all higher priority tasks • Can use this to determine whether a given priority assignment will yield a feasible scheduling algorithm • if requests for all tasks at their critical instants are fulfilled before their respective deadlines, then the scheduling algorithm is feasible

  11. Example • Consider τ1 & τ2 with T1=2, T2=5, and C1=1, C2=1 • τ1 has higher priority than τ2 • priority assignment is feasible • can increase C2 to 2 and still be able to schedule • [Timeline figure for τ1 and τ2 over t = 0…5 with the critical time zones marked; not reproduced]

  12. Example (contd.) • τ2 has higher priority than τ1 • priority assignment is still feasible • but can't increase beyond C1=1, C2=1 • [Timeline figure for τ2 and τ1 over t = 0…5 with the critical time zone marked; not reproduced]
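
To make the critical-instant argument concrete, here is a small simulation sketch (added for illustration, not from the slides) that releases all tasks at t = 0 and runs a preemptive fixed-priority schedule in unit time steps; it reproduces the verdicts of slides 11-12. The function name and the list-of-(T, C)-tuples task representation are assumptions.

```python
def first_deadlines_met(tasks):
    """tasks: list of (T, C) with integer parameters, in decreasing priority
    order; the deadline of each task equals its period T."""
    n = len(tasks)
    remaining = [0] * n          # work left for the current request of each task
    finish = [None] * n          # time at which task i first completes a request
    horizon = max(T for T, _ in tasks)
    for t in range(horizon):
        for i, (T, C) in enumerate(tasks):
            if t % T == 0:       # a new request arrives (all arrive at t = 0)
                remaining[i] = C
        for i in range(n):       # run the highest-priority task with pending work
            if remaining[i] > 0:
                remaining[i] -= 1
                if remaining[i] == 0 and finish[i] is None:
                    finish[i] = t + 1
                break
    # The first request of task i met its deadline iff it finished by time T.
    return [f is not None and f <= T for (T, _), f in zip(tasks, finish)]

# Slide 11: tau1 = (T=2, C=1) at higher priority, tau2 = (T=5, C=2): both OK.
print(first_deadlines_met([(2, 1), (5, 2)]))    # [True, True]
# Slide 12: priorities swapped and C2 raised to 2: tau1 misses its deadline at t = 2.
print(first_deadlines_met([(5, 2), (2, 1)]))    # [True, False]
```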

  13. Observation • Consider τ1 & τ2 with T1 < T2 • Let τ1 be the higher priority task. From Theorem 1, the following must hold: ⌊T2/T1⌋C1 + C2 ≤ T2 (necessary condition, but not sufficient) • Let τ2 be the higher priority task. The following must hold: C1 + C2 ≤ T1

  14. Observation (contd.) • Note: C1 + C2 ≤ T1 ⇒ ⌊T2/T1⌋C1 + ⌊T2/T1⌋C2 ≤ ⌊T2/T1⌋T1 ⇒ ⌊T2/T1⌋C1 + C2 ≤ ⌊T2/T1⌋T1 ≤ T2, since ⌊T2/T1⌋ ≥ 1 • Therefore, whenever T1 < T2 and C1, C2 are such that the task schedule is feasible with τ2 at higher priority than τ1, then it is also feasible with τ1 at higher priority than τ2 • but the opposite is not true

  15. A Possible Rule for Priority Assignment • Assign priorities according to request rates, independent of run times • higher priorities for tasks with higher request rates (shorter periods) • Called Rate-Monotonic (RM) Priority Assignment • It is optimum • Theorem 2: no other fixed priority assignment can schedule a task set if RM priority assignment can't schedule it, i.e., if a feasible fixed priority assignment exists then the RM priority assignment is also feasible • proof idea: any feasible priority assignment can be transformed into the RM assignment by a sequence of pairwise reorderings of adjacent task priorities, each of which preserves feasibility

  16. Processor Utilization • Processor Utilization Factor: fraction of processor time spent in executing the task set • i.e. 1 - fraction of time the processor is idle • For n tasks τ1, τ2, … τn the utilization factor U is U = C1/T1 + C2/T2 + … + Cn/Tn • U can be increased by increasing the Ci's or decreasing the Ti's as long as the tasks continue to satisfy their deadlines at their critical instants

  17. How Large can U be for a Fixed Priority Scheduling Algorithm? • Corresponding to a priority assignment, a set of tasks fully utilizes a processor if: • the priority assignment is feasible for the set • and, if an increase in the run time of any task in the set makes the priority assignment infeasible • The least upper bound of U is the minimum of the U's over all task sets that fully utilize the processor • for all task sets whose U is below this bound, there exists a fixed priority assignment which is feasible • U above this bound can be achieved only if the task periods Ti are suitably related

  18. Utilization Factor for Rate-Monotonic Priority Assignment • RM priority assignment is optimal • ⇒ for a given task set, the U achieved by RM priority assignment is ≥ the U for any other priority assignment • ⇒ the least upper bound of U = the infimum of the U's for RM priority assignment over all possible T's and C's for the tasks

  19. Two Tasks Case • Theorem 3: For a set of two tasks with fixed priority assignment, the least upper bound to the processor utilization factor is U = 2(2^(1/2) - 1) • Proof: • Let τ1 and τ2 be two tasks with periods T1 and T2, and run-times C1 and C2 • Assume T2 > T1 • According to RM, τ1 has higher priority than τ2 • In a critical time zone of τ2, there are ⌈T2/T1⌉ requests for τ1

  20. Two Tasks Case (contd.) • Proof contd.: • Adjust C2 to fully utilize the available processor time within the critical time zone • Case 1: C1 is short enough that all requests for τ1 within the critical time zone of τ2 are completed before the next request of τ2, i.e. C1 ≤ T2 - T1⌊T2/T1⌋ • then, largest possible C2 = T2 - C1⌈T2/T1⌉ • so that U = 1 - C1((1/T2)⌈T2/T1⌉ - 1/T1) • U monotonically decreases with C1

  21. Two Tasks Case (contd.) • Proof contd.: • Case 2: The execution of the ⌈T2/T1⌉-th request for τ1 overlaps the next request for τ2, i.e. C1 ≥ T2 - T1⌊T2/T1⌋ • then, largest possible C2 = (T1 - C1)⌊T2/T1⌋ • so that U = (T1/T2)⌊T2/T1⌋ + C1((1/T1) - (1/T2)⌊T2/T1⌋) • U monotonically increases with C1 • The minimum U occurs at the boundary of cases 1 & 2, i.e. for C1 = T2 - T1⌊T2/T1⌋ • U = 1 - (T1/T2)(⌈T2/T1⌉ - T2/T1)((T2/T1) - ⌊T2/T1⌋)

  22. Two Tasks Case (contd.) • Proof contd.: • U = 1 - (T1/T2)(⌈T2/T1⌉ - T2/T1)((T2/T1) - ⌊T2/T1⌋) = 1 - f(1-f)/(I+f), where I = ⌊T2/T1⌋ and f = (T2/T1) - ⌊T2/T1⌋ = fractional part of T2/T1 • U is monotonically increasing with I • min U occurs at the minimum value of I, i.e. I = 1 • Minimizing U over f, one gets f = 2^(1/2) - 1 • So, min U = 2(2^(1/2) - 1) ≈ 0.83 • Note: U = 1 when f = 0, i.e. when the period of the lower priority task is an integer multiple of the period of the higher priority task
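
The slide quotes the minimizing value of f without showing the intermediate step; the short derivation below is added here to fill that gap (standard calculus, not part of the original slides):

```latex
% Minimizing U(f) = 1 - f(1-f)/(I+f) at I = 1:
\[
U(f) = 1 - \frac{f(1-f)}{1+f} = \frac{1+f^2}{1+f},
\qquad
\frac{dU}{df} = \frac{2f(1+f) - (1+f^2)}{(1+f)^2} = \frac{f^2 + 2f - 1}{(1+f)^2}.
\]
\[
\frac{dU}{df} = 0 \;\Rightarrow\; f^2 + 2f - 1 = 0 \;\Rightarrow\; f = \sqrt{2} - 1,
\qquad
U_{\min} = \frac{1 + (\sqrt{2}-1)^2}{\sqrt{2}} = 2(\sqrt{2}-1) \approx 0.828 .
\]
```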

  23. General Case • Theorem: For a set of n tasks with fixed priority assignment, the least upper bound to the processor utilization factor is U = n(2^(1/n) - 1) • Or, equivalently, a set of n periodic tasks scheduled by the RM algorithm will always meet their deadlines, for all task start times, if C1/T1 + C2/T2 + … + Cn/Tn ≤ n(2^(1/n) - 1)

  24. General Case (contd.) • As n → ∞, the bound n(2^(1/n) - 1) rapidly converges to ln 2 ≈ 0.69 • This is a rather tight bound • But, note that this is just the least upper bound • a task set with larger U may still be schedulable • e.g., note that if {Tn/Ti} = 0 (i.e. Tn is an integer multiple of Ti) for i = 1, 2, …, n-1, then the utilization bound is 1 • How to check if a specific task set with n tasks is schedulable? • If U ≤ n(2^(1/n) - 1) then it is schedulable (see the check below) • otherwise, use Theorem 1!
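
As a quick illustration (code added here, not part of the lecture), the sufficient utilization-bound test can be written as follows; the function name rm_utilization_test is an arbitrary choice, and the examples reuse the task sets from slides 26-27.

```python
def rm_utilization_test(tasks):
    """tasks: list of (C, T). Returns (U, bound, verdict). The verdict is
    'schedulable' only when U <= n(2^(1/n) - 1); otherwise the test is
    inconclusive and the exact (critical-instant) analysis must be used."""
    n = len(tasks)
    U = sum(C / T for C, T in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return U, bound, "schedulable" if U <= bound else "inconclusive"

# Example #1 from slide 26: U ~ 0.41 <= 0.828 -> schedulable.
print(rm_utilization_test([(20, 100), (30, 145)]))
# Example #2 from slide 27: U ~ 0.86 > 0.780 -> inconclusive, use Theorem 1.
print(rm_utilization_test([(20, 100), (30, 145), (68, 150)]))
```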

  25. Theorem 1 Recalled • Theorem 1: A critical instant for any task occurs whenever the task is requested simultaneously with requests of all higher priority tasks • Can use this to determine whether a given priority assignment will yield a feasible scheduling algorithm • if requests for all tasks at their critical instants are fulfilled before their respective deadlines, then the scheduling algorithm is feasible • Applicable to any static priority scheme… not just RM

  26. Example #1 • Task τ1: C1 = 20; T1 = 100; D1 = 100. Task τ2: C2 = 30; T2 = 145; D2 = 145. Is this task set schedulable? U = 20/100 + 30/145 ≈ 0.41 ≤ 2(2^(1/2) - 1) ≈ 0.828. Yes!

  27. Example #2 • Task τ1: C1 = 20; T1 = 100; D1 = 100. Task τ2: C2 = 30; T2 = 145; D2 = 145. Task τ3: C3 = 68; T3 = 150; D3 = 150. Is this task set schedulable? U = 20/100 + 30/145 + 68/150 ≈ 0.86 > 3(2^(1/3) - 1) ≈ 0.779. Can't say! Need to apply Theorem 1.

  28. Example #2 (contd.) • Consider the critical instant of τ3, the lowest priority task • τ1 and τ2 must execute at least once before τ3 can begin executing • therefore, the completion time of τ3 is ≥ C1 + C2 + C3 = 20 + 30 + 68 = 118 • however, τ1 is initiated one additional time in (0, 118) • taking this into consideration, the completion time of τ3 = 2C1 + C2 + C3 = 2*20 + 30 + 68 = 138 • Since 138 < D3 = 150, the task set is schedulable

  29. Response Time Analysis for RM • For the highest priority task, the worst case response time R equals its own computation time C • R = C • Lower priority tasks suffer interference from higher priority tasks • Ri = Ci + Ii • Ii is the interference from higher priority tasks in the interval [t, t + Ri]

  30. Response Time Analysis (contd.) • Consider task i and a higher priority task j • Interference from task j during Ri: • number of releases of task j = ⌈Ri/Tj⌉ • each release consumes Cj units of processor time • total interference from task j = ⌈Ri/Tj⌉ Cj • Let hp(i) be the set of tasks with priorities higher than that of task i • Total interference to task i from all higher priority tasks during Ri: Ii = Σ_{j ∈ hp(i)} ⌈Ri/Tj⌉ Cj

  31. Response Time Analysis (contd.) • This leads to: Ri = Ci + Σ_{j ∈ hp(i)} ⌈Ri/Tj⌉ Cj • The smallest Ri satisfying this equation is the worst case response time • Fixed point equation: can be solved iteratively, starting from Ri = Ci and re-evaluating the right-hand side until it stops changing

  32. Algorithm
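
The algorithm figure on this slide is not reproduced in the transcript. The sketch below is one plausible rendering (an assumption, not the original slide's code) of the iterative fixed-point solution from slide 31; the function name response_time and its parameters are illustrative.

```python
import math

def response_time(Ci, higher_priority, deadline=float("inf")):
    """Iterate Ri = Ci + sum over hp(i) of ceil(Ri/Tj)*Cj, starting at Ri = Ci.
    higher_priority: list of (Cj, Tj) for all tasks with priority above task i."""
    R = Ci
    while R <= deadline:
        R_next = Ci + sum(math.ceil(R / Tj) * Cj for Cj, Tj in higher_priority)
        if R_next == R:          # fixed point reached: worst-case response time
            return R
        R = R_next
    return None                  # no fixed point within the deadline: not schedulable

# Example #2 (slides 27-28): R3 = 2*20 + 30 + 68 = 138 <= D3 = 150.
print(response_time(68, [(20, 100), (30, 145)]))   # -> 138
```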

  33. RM Schedulability • Consider tasks τ1, τ2, …, τn in decreasing order of priority • For task τi to be schedulable, a necessary and sufficient condition is that we can find some t ∈ [0, Ti] satisfying the condition t = ⌈t/T1⌉C1 + ⌈t/T2⌉C2 + … + ⌈t/Ti-1⌉Ci-1 + Ci • But do we need to check exhaustively for all values of t in [0, Ti]?

  34. RM Schedulability (contd.) • Observation: the RHS of the equation jumps only at multiples of T1, T2, …, Ti-1 • It is therefore sufficient to check whether the inequality t ≥ ⌈t/T1⌉C1 + ⌈t/T2⌉C2 + … + ⌈t/Ti-1⌉Ci-1 + Ci is satisfied for some t ∈ [0, Ti] that is a multiple of one or more of T1, T2, …, Ti-1

  35. RM Schedulability (contd.) • Notation: Wi(t) = Σ_{j=1..i} Cj ⌈t/Tj⌉, Li(t) = Wi(t)/t, Li = min_{0 < t ≤ Ti} Li(t), L = max{Li} • General necessary & sufficient condition: • Task τi can be scheduled iff Li ≤ 1 • Practically, we only need to compute Wi(t) at the scheduling points Si = {kTj | j = 1, …, i; k = 1, …, ⌊Ti/Tj⌋} • these are the times in [0, Ti] at which tasks are released • Wi(t) is constant between these points • Practical RM schedulability conditions: • if min_{t ∈ Si} Wi(t)/t ≤ 1, task τi is schedulable • if max_{i ∈ {1,…,n}} {min_{t ∈ Si} Wi(t)/t} ≤ 1, then the entire set is schedulable

  36. Example • Task set: • τ1: T1=100, C1=20 • τ2: T2=150, C2=30 • τ3: T3=210, C3=80 • τ4: T4=400, C4=100 • Then: • S1 = {100} • S2 = {100, 150} • S3 = {100, 150, 200, 210} • S4 = {100, 150, 200, 210, 300, 400} • Plots of Wi(t) (not reproduced here): task τi is RM-schedulable iff any part of the plot of Wi(t) falls on or below the Wi(t) = t line (see the numeric check below).
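
A small numeric version of this check for the task set above (illustrative code, not from the lecture; the helper names scheduling_points, W, and rm_exact_test are assumptions):

```python
import math

def scheduling_points(tasks, i):
    """tasks: list of (C, T) sorted by increasing period; returns S_i."""
    Ti = tasks[i][1]
    return sorted({k * Tj for _, Tj in tasks[:i + 1]
                          for k in range(1, math.floor(Ti / Tj) + 1)})

def W(tasks, i, t):
    """Cumulative demand W_i(t) = sum over j <= i of Cj * ceil(t/Tj)."""
    return sum(Cj * math.ceil(t / Tj) for Cj, Tj in tasks[:i + 1])

def rm_exact_test(tasks):
    """Per-task verdicts: True iff W_i(t) <= t at some scheduling point of S_i."""
    return [any(W(tasks, i, t) <= t for t in scheduling_points(tasks, i))
            for i in range(len(tasks))]

tasks = [(20, 100), (30, 150), (80, 210), (100, 400)]   # slide 36 task set
print([scheduling_points(tasks, i) for i in range(4)])  # S1..S4 as on the slide
print(rm_exact_test(tasks))   # [True, True, True, False]: total U ~ 1.03 > 1,
                              # so the lowest-priority task cannot be schedulable
```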

  37. Deadline Monotonic Priority Assignment (DMP) • The fixed priority of a process is inversely related to its deadline Di (with Di < Ti): Di < Dj ⇒ Pi > Pj • Optimal: can schedule any task set that any other static priority assignment can • Example: RM fails but DMP succeeds for task sets like the one sketched below
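
Since the slide's example figure is not in the transcript, here is a hypothetical two-task set, constructed purely for illustration, on which RM misses a deadline but deadline-monotonic priorities succeed; it is checked with the response-time recurrence of slides 29-31 against each task's deadline Di.

```python
import math

def dm_schedulable(tasks):
    """tasks: list of (C, T, D). True iff every task meets Ri <= Di under
    deadline-monotonic priorities (shorter Di -> higher priority)."""
    order = sorted(tasks, key=lambda task: task[2])        # shortest deadline first
    for i, (Ci, _, Di) in enumerate(order):
        R = Ci
        while R <= Di:                                     # response-time iteration
            R_next = Ci + sum(math.ceil(R / Tj) * Cj for Cj, Tj, _ in order[:i])
            if R_next == R:
                break
            R = R_next
        if R > Di:
            return False
    return True

# Hypothetical set (not the slide's): D2 < D1 although T1 < T2, so RM and DM
# disagree. Under RM (tau1 first), tau2's response time is 7 > D2 = 5: RM fails.
print(dm_schedulable([(3, 10, 10), (4, 12, 5)]))   # True under DM priorities
```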

  38. RM in Distributed/Networked Embedded Systems? • Task is essentially scheduled on multiple resources in series • Need to schedule communication messages over the interconnect • propagation delay & jitter • queuing delay & jitter • Divide end-to-end deadline into subsystem deadlines • Need buffering to mitigate jitter problem as task may arrive too early

  39. Can one do better? • Yes… by using dynamic priority assignment • In fact, there is a scheme for dynamic priority assignment for which the least upper bound on the processor utilization is 1 • More later...

  40. Transient Overload • Example scenario (task set figure not reproduced): a task set that is not RM schedulable if all tasks take their worst case execution times, but is schedulable in the average case • Can we arrange so that the critical tasks always meet their deadlines, while the non-critical task meets many of its deadlines?

  41. Dealing with Transient Overload • Transient system overload may cause some deadlines to be missed • Lower priority tasks are likely to miss their deadlines first in an overload situation • But the more important task may have been assigned a lower priority: priority != importance • One could assign priorities according to importance instead • e.g. by artificially giving an important task a smaller deadline (and hence a higher priority) • but… this reduces schedulability

  42. Example • Consider two tasks: Task τ1: C1 = 3.5; T1 = 10; D1 = 10; less important. Task τ2: C2 = 7; T2 = 14; D2 = 13; critical task • Under RM, τ2 will have the lower priority • the completion time test shows that τ2 is not schedulable • but it is important and must be guaranteed! • Making the priority of τ2 higher than that of τ1 will make τ1 unschedulable (see the check below)
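
A quick check of both claims with the response-time recurrence of slides 29-31 (illustrative code, not from the lecture):

```python
import math

def rta(Ci, hp):
    """Worst-case response time; hp: list of (Cj, Tj) of higher-priority tasks."""
    R = Ci
    while True:
        nxt = Ci + sum(math.ceil(R / Tj) * Cj for Cj, Tj in hp)
        if nxt == R:
            return R
        R = nxt

print(rta(7.0, [(3.5, 10)]))   # 14.0 > D2 = 13: tau2 misses under RM priorities
print(rta(3.5, [(7.0, 14)]))   # 10.5 > D1 = 10: tau1 misses if tau2 is made higher priority
```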

  43. A Better Approach: Period Transformation • One could transform the period of τ2 to 7, yielding a modified task set: Task τ1: C1 = 3.5; T1 = 10; D1 = 10. Task τ2a: C2a = 3.5; T2a = 7; D2a = 6 • Note: in period transformation, the real deadline is at the end of the last transformed period • the deadline of the request in the second transformed period of τ2a is at most 6, since 7 + 6 = 13 • Now τ2a has the higher priority, and the task set is schedulable!

  44. Using Period Transformation to Improve Schedulability • Consider two tasks: Task τ1: C1 = 5; T1 = 10. Task τ2: C2 = 5; T2 = 15 • These two tasks are just schedulable, with utilization 83.3% • If we transform τ1: Task τ1a: C1a = 2.5; T1a = 5. Task τ2: C2 = 5; T2 = 15 • the utilization bound becomes 100%, because the periods are now harmonic (15 is an integer multiple of 5)
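
A small check (illustrative, not lecture code) using the exact test of slides 33-35: with the harmonic periods of the transformed set, the lowest-priority task still fits even when utilization is pushed all the way to 1, e.g. by growing C2 from 5 to 7.5; the n = 2 utilization bound of 0.828 could not certify that.

```python
import math

def demand(tasks, t):
    """W(t) = sum of Cj * ceil(t/Tj) over the set, seen by the lowest-priority task."""
    return sum(Cj * math.ceil(t / Tj) for Cj, Tj in tasks)

# The higher-priority task (C = 2.5 <= T = 5) trivially meets its deadline,
# so checking the lowest-priority task at its scheduling points {5, 10, 15} suffices.
for tasks in ([(2.5, 5), (5.0, 15)],      # transformed set, U ~ 0.833
              [(2.5, 5), (7.5, 15)]):     # utilization raised to 1.0
    ok = any(demand(tasks, t) <= t for t in (5, 10, 15))
    U = sum(C / T for C, T in tasks)
    print(f"U = {U:.3f}, schedulable: {ok}")   # both print schedulable: True
```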

  45. Sporadic Tasks • Tasks that are released irregularly, often in response to some event in the environment • no periods associated • but must have some maximum release rate (minimum inter-arrival time) • Otherwise no limit on workload! • How to deal with them? • consider them as periodic with a period equal to the minimum inter-arrival time • other approaches…

  46. Handling Sporadic Tasks: Approach 1 • Define fictitious periodic task of highest priority and of some chosen execution period • During the time this task is scheduled to run, the processor can run any sporadic task that is awaiting service • if no sporadic task awaiting service, processor is idle • Outside this time the processor attends to periodic tasks • Problem: wasteful!

  47. Handling Sporadic Tasks: Approach 2 (Deferred Server) • Less wasteful… • Whenever the processor is scheduled to run sporadic tasks, and finds no such tasks awaiting service, it starts executing other (periodic) tasks in order of priority • However, if a sporadic task arrives, it preempts the periodic task and can occupy a total time up to the time allotted for sporadic task • Schedulability?

  48. Example • [Figure comparing the timelines of Approach 1 and Approach 2 (Deferred Server); not reproduced]

  49. Task Synchronization • So far, we considered independent tasks • However, tasks do interact: semaphores, locks, monitors, rendezvous, etc. • shared data, use of non-preemptable resources • This jeopardizes the system's ability to meet timing constraints • e.g. it may lead to an indefinite period of priority inversion, where a high priority task is prevented from executing by a low priority task

  50. Priority Inversion Example • Let τ1 & τ3 share a resource, and let τ1 have the higher priority. Let τ2 be an intermediate priority task that does not share any resource with either. Consider: • τ3 obtains a lock on the semaphore S and enters its critical section to use a shared resource • τ1 becomes ready to run and preempts τ3. Next, τ1 tries to enter its critical section by trying to lock S. But S is locked, and therefore τ1 is blocked. • τ2 becomes ready to run. Since only τ2 and τ3 are ready to run, τ2 preempts τ3 while τ3 is in its critical section.
