G572 Real-time Systems

  1. G572 Real-time Systems Rate Monotonic Analysis Xinrong Zhou (xzhou@abo.fi)

  2. RMA • RMA is a quantitative method for analyzing whether a system can be scheduled • With the help of RMA it is possible to: • select the best task priority allocation • select the best scheduling protocol • select the best implementation scheme for aperiodic activities • If the RMA rules are followed mechanically, an optimal implementation (where all hard deadlines are met) is reached with the highest probability

  3. RMA • System model • the scheduler selects tasks by priority • if a task with a higher priority becomes available, the currently running task is interrupted and the higher-priority task runs (preemption) • Rate Monotonic: the priority is a monotonic function of the frequency (rate) of a periodic process: the shorter the period, the higher the priority

  4. Contents • Under what conditions can a system be scheduled when priorities are allocated by RMA? • Periodic tasks • Blocking • Arbitrary deadlines • Aperiodic activities

  5. Why are deadlines missed? • Two factors: • the amount of computation capacity that is globally available, and • how this capacity is used • Factor 1 can be quantified easily: • MIPS • maximum bandwidth of a LAN • Factor 2 covers: • processing capacity used by the operating system • distribution of capacity between tasks • RMA goals: • optimize the distribution so that deadlines are met • or ”provide for graceful degradation”

  6. Utilization • The timing of task Ti depends on: • preemption by tasks with higher priority • their utilization cj/Tj describes the fraction of the processor the higher-priority tasks consume • its own running time • ci • blocking by tasks with lower priority • lower-priority tasks may hold critical resources

  7. Example (figure): a task set with total utilization U = 88.39% in which a deadline is missed

  8. Harmonic periods help • An application is harmonic if every task’s period is an exact divisor of every longer period • (figure): with harmonic periods, a total utilization of U = 88.32% is schedulable

  9. Liu & Layland • Assumptions: • every task can be interrupted (preemption) • tasks can be ordered by inverse period: • pr(Ti) < pr(Tj) iff Pj < Pi • every task is independent and periodic • each task’s relative deadline equals its period • only one instance of each task is active at a time!

  10. Liu & Layland • When can a group of tasks be scheduled by RMA? • RMA 1 • If the total utilization of the n tasks satisfies U ≤ n(2^(1/n) − 1), the tasks can be scheduled so that every deadline is met • As the number of tasks increases, the bound approaches ln 2 ≈ 0.69, i.e. the processor may be idling about 31% of the time!
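The Liu & Layland bound above can be sketched in a few lines of Python (a hedged illustration; the function name is made up):

```python
import math

def ll_bound(n):
    """Liu & Layland utilization bound for n tasks: n * (2^(1/n) - 1)."""
    return n * (2 ** (1.0 / n) - 1)

for n in (1, 2, 3, 10):
    print(n, round(ll_bound(n), 3))
# As n grows, the bound decreases toward ln 2 ~= 0.693
print(round(math.log(2), 3))
```

For n = 1 the bound is 1.0 (a single task may use the whole processor); already for n = 3 it has dropped to about 0.78.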

  11. Variation of U(n) with n (figure)

  12. Example • For the task set below we get utilization 6/15 + 4/20 + 5/30 = 0.767 • because U(3) = 0.780, the application can be scheduled
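The example can be checked mechanically; the sketch below applies the utilization bound to the slide’s task set given as (ci, Ti) pairs (the function name is made up):

```python
def rma_schedulable(tasks):
    """Sufficient RMA test: compare total utilization with the
    Liu & Layland bound U(n) = n * (2^(1/n) - 1)."""
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return u, bound, u <= bound

tasks = [(6, 15), (4, 20), (5, 30)]   # (c_i, T_i) from the slide
u, bound, ok = rma_schedulable(tasks)
# u ~= 0.767, bound = U(3) ~= 0.780, so ok == True
```

Note that the test is only sufficient: a task set that fails it may still be schedulable (the exact scheduling-point test of slide 15 decides that).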

  13. Why are shorter periods prioritized? (figure: the same two tasks under non-RMA and RMA ordering; under the non-RMA ordering T1 misses its deadline) • In realistic applications c is small compared with T • Shortest T first ensures that the negative effects of preemption are minimized

  14. Why are shorter periods prioritized? • Slack: S = T − c • In the example, c2 > T1 − c1 • In practice c << T, i.e. the slack is roughly proportional to the period • By selecting the shortest period first, the preemption effect is minimized • NOTE: • we are primarily interested in meeting deadlines through priorities!

  15. How can 100% utilization be achieved? • Critical zone theorem: • if a set of independent periodic tasks starts synchronously and every task meets its first deadline, then every future deadline will be met • worst case: a task starts at the same time as all higher-priority tasks • Scheduling points for task Ti: Ti’s first deadline, and the ends of the periods falling within Ti’s first deadline of every task with higher priority than Ti • If we can show that there is at least one scheduling point by which the task has had time to run once, the task can be scheduled
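The scheduling-point idea above can be turned into an exact (necessary and sufficient) test for RM with deadlines equal to periods; the sketch below assumes integer periods and tasks already sorted by RM priority (the function name is made up):

```python
from math import ceil

def schedulable_exact(tasks):
    """Exact scheduling-point test for RM, D_i = T_i.

    tasks: list of (c, T) sorted by increasing period (RM order).
    Task i is schedulable if at some scheduling point t its own c_i
    plus all higher-priority demand ceil(t/T_j)*c_j fits within t."""
    for i, (ci, Ti) in enumerate(tasks):
        points = {Ti}                       # Ti's own first deadline
        for (_, Tj) in tasks[:i]:           # ends of higher-prio periods
            points.update(k * Tj for k in range(1, Ti // Tj + 1))
        ok = any(
            ci + sum(ceil(t / Tj) * cj for cj, Tj in tasks[:i]) <= t
            for t in sorted(points)
        )
        if not ok:
            return False
    return True
```

For the slide-12 task set [(6, 15), (4, 20), (5, 30)] the test succeeds, consistent with the utilization-bound result.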

  16. Overhead • Periodic overhead • overhead of making tasks periodic • overhead of switching between tasks • System overhead • overhead of the operating system • UNIX, Windows NT: difficult to know what happens in the background

  17. Periodic Overhead • To execute a task periodically, the clock must be read and the next activation time must be calculated:

Next_Time = Clock;
loop
  Next_Time = Next_Time + Period;
  { ... periodic task code here ... }
  delay Next_Time - Clock;
end loop

• task switch: • saving and restoring the tasks’ ”context” • each activation adds two task switches
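The same drift-free pattern can be sketched in Python (a hedged illustration of the loop above, not the slide’s original code; the function name is made up):

```python
import time

def run_periodically(period_s, body, iterations):
    """Drift-free periodic execution: the next release time is advanced
    from the previous release, not from 'now', so timing errors do not
    accumulate across iterations."""
    next_time = time.monotonic()
    for _ in range(iterations):
        next_time += period_s
        body()                              # the periodic task code
        delay = next_time - time.monotonic()
        if delay > 0:                       # skip the sleep on overrun
            time.sleep(delay)
```

Computing `Next_Time` incrementally rather than as `Clock + Period` each round is the key design choice: with the latter, the loop would slowly drift by the per-iteration overhead.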

  18. Periodic Overhead • Data can be gathered by benchmarking • e.g. for the eCos operating system with: • board: ARM AEB-1 Evaluation Board • CPU: Sharp LH77790A, 24 MHz

  19. Drawbacks of ”classic” RMA • it requires a preemptive scheduler • blocking can stop the system • aperiodic activities must be ”normalized” • tasks with jitter must be handled specially • RMA cannot analyze multiprocessor systems

  20. Blocking • If two tasks share the same resource, the tasks are dependent • resources are used serially and under mutual exclusion • Shared resources are often implemented using semaphores (mutex): • 2 operations: • get • release • Blocking causes two problems: • priority inversion • deadlocks

  21. Priority inversion (figure: T3 allocates R; T1 interrupts and tries to allocate R but blocks; T2 interrupts T3 and runs, extending T1’s blocking time, until T3 releases R and T1 can allocate it) • Priority inversion happens when a task with a lower priority blocks a task with a higher priority • e.g. the Mars Pathfinder lander suffered repeated system resets because of priority inversion

  22. Deadlock • Deadlock means a circular resource allocation: • T1 allocates R1 • T2 interrupts T1 • T2 allocates R2 • T2 tries to allocate R1 but blocks => T1 runs again • T1 tries to allocate R2 but blocks • both tasks are now blocked => deadlock

  23. Deadlock • Deadlocks are always design faults • Deadlocks can be avoided by: • special resource-allocation algorithms • deadlock-detection algorithms • static analysis of the system • resource-allocation graphs • matrix method • Deadlocks will be discussed more thoroughly in connection with parallel programming

  24. Priority inversion: bounding the blocking time • It is difficult to avoid priority inversion when blocking happens • scheduling with fixed priorities can produce unbounded blocking time • 3 algorithms for minimizing the blocking time: • PIP: priority inheritance • PCP: priority ceiling • HL: highest locker

  25. Priority Inheritance • 3 rules: • a task that blocks a higher-priority task temporarily inherits that task’s priority (priority-inheritance rule) • a task can lock a resource only if no other task has locked it; the task blocks until the resource is released, after which it continues running at its original priority (allocation rule) • inheritance is transitive: • if T1 blocks T2, and T2 blocks T3 • then T1 inherits T3’s priority
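The transitive inheritance rule can be sketched as a small Python function (the task names, the `blocks` relation, and the numbering with higher number = higher priority are all hypothetical):

```python
def inherited_priority(task, base_prio, blocks):
    """Effective priority under transitive priority inheritance.

    base_prio: dict task -> base priority (higher number = higher prio)
    blocks:    dict task -> list of tasks it currently blocks
    A task runs at the maximum of its own priority and the inherited
    priorities of every task it blocks, transitively."""
    return max([base_prio[task]] +
               [inherited_priority(b, base_prio, blocks)
                for b in blocks.get(task, [])])

prio = {"T1": 1, "T2": 2, "T3": 3}
blocking = {"T1": ["T2"], "T2": ["T3"]}   # T1 blocks T2, T2 blocks T3
# inherited_priority("T1", prio, blocking) == 3 (T1 inherits T3's prio)
```

This mirrors the slide’s example: T1 ends up at T3’s priority even though it only blocks T2 directly.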

  26. Priority inheritance • In the example, A has the lowest priority and C the highest (figure)

  27. Priority inheritance • The blocking time is now shorter • maximum blocking time for a task: • the length of min(m, n) critical sections • m = number of critical sections in the application • n = number of tasks with higher priority • ”chain blocking” is still possible • PIP cannot prevent deadlock in all cases • the priority ceiling algorithm can prevent deadlock and further reduce the blocking time

  28. Priority Inheritance

  29. Priority Ceiling • Every resource has a priority ceiling: • the highest priority of any task that can lock (or request) the resource • the highest ceiling among currently locked resources is stored in a system variable called the system ceiling • A task can lock a resource only if: • it already holds the resource whose ceiling equals the system ceiling, or • the task’s priority is higher than the system ceiling • otherwise the task blocks until the resource becomes available and the system ceiling drops • a task that blocks others inherits the priority of the highest-priority task it blocks • inheritance is transitive
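A minimal sketch of the PCP admission rule, assuming precomputed ceilings and numeric priorities (higher number = higher priority); the class, method, task, and resource names are all hypothetical:

```python
class PCP:
    """Minimal sketch of the Priority Ceiling Protocol admission test.
    ceilings[r] = highest priority of any task that may lock r."""

    def __init__(self, ceilings):
        self.ceiling = ceilings
        self.locked = {}                    # resource -> locking task

    def system_ceiling(self, task):
        """Max ceiling over resources locked by OTHER tasks (None if
        nothing relevant is locked). Excluding the task's own locks
        covers the 'already holds the ceiling resource' case."""
        cs = [self.ceiling[r] for r, t in self.locked.items() if t != task]
        return max(cs) if cs else None

    def try_lock(self, task, prio, resource):
        sc = self.system_ceiling(task)
        if sc is None or prio > sc:         # PCP admission rule
            self.locked[resource] = task
            return True
        return False                        # task blocks

pcp = PCP({"R1": 3, "R2": 2})
pcp.try_lock("B", 2, "R1")   # succeeds: nothing is locked yet
pcp.try_lock("C", 1, "R2")   # blocked: prio 1 is not above ceiling 3
```

Blocking C on the *free* resource R2 here is exactly what rules out deadlock and chain blocking: C can never hold R2 while a higher-priority task is inside a related critical section.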

  30. Priority Ceiling (figure: R1 ceiling = prio B, R2 ceiling = prio C, system ceiling = R1 ceiling)

  31. Priority Ceiling • the blocking time for C is now 0 • no ”chain blocking” possible • deadlock is not possible in any case • complicated to implement

  32. Highest locker • Every resource has a priority ceiling: the highest priority of any task that can lock the resource • when a task locks a resource, it inherits the resource’s ceiling + 1 as its new priority level • simpler than PCP to implement • in practice has the same properties as PCP • the ”best” alternative
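The Highest Locker priority change is simple enough to state in one function (a sketch of the priority rule only, not of queueing or release; the function name is made up):

```python
def hl_lock(base_prio, ceiling):
    """Highest Locker: while holding the resource, the task runs at
    the resource's ceiling + 1 (never below its own base priority).
    Higher number = higher priority."""
    return max(base_prio, ceiling + 1)

# A low-priority task (prio 1) locking a resource with ceiling 3
# immediately runs at priority 4, above every potential competitor.
```

Because the priority is raised immediately on locking, rather than lazily on contention as in PIP, no other task that could ever need the resource gets to run, which is what makes HL behave like PCP in practice.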

  33. Highest Locker

  34. Cost of implementation • PIP protocol: • a task generates a context switch when it blocks • a task generates a context switch when it becomes executable again • the scheduler must keep, for each semaphore, a queue of waiting tasks ordered by priority • every time a task waits on a semaphore, the scheduler must adjust the resource owner’s priority • the owner may have to inherit the priority transitively • every time a resource is released, the disinheritance procedure is executed

  35. Cost of implementation • PIWG (Performance Issues Working Group) has developed benchmarks for comparing scheduling algorithms

  36. Cost of implementation • Note: • if the protocol is not provided by the operating system, it should not be implemented by the application itself (application level => huge overhead) • ”No silver bullet”: • a scheduling protocol can only minimize the effects of blocking • blocking must be prevented by improved design

  37. Similarities of scheduling protocols

  38. Algorithms in commercial RTOSs • Priority inheritance: • Wind River Tornado/VxWorks • LynxOS • OSE • eCos • Priority ceiling: • OS-9 • pSOS+

  39. Deadline Monotonic Priority Assignment • RM: • the shortest period gets the highest priority • DM: • the shortest relative deadline gets the highest priority • DM allows higher utilization when deadlines are shorter than periods • do the analysis as for RM, but allocate the priorities using DM
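The DM assignment rule is a one-line sort; the sketch below uses a hypothetical task set of (name, period, relative deadline) triples (the function name is made up):

```python
def dm_priorities(tasks):
    """Deadline Monotonic: shorter relative deadline -> higher priority.
    tasks: list of (name, period, rel_deadline) triples.
    Returns task names ordered from highest to lowest priority."""
    return [name for name, _, d in sorted(tasks, key=lambda t: t[2])]

tasks = [("A", 20, 20), ("B", 15, 8), ("C", 30, 25)]   # hypothetical
# dm_priorities(tasks) == ["B", "A", "C"]: B's deadline (8) is the
# shortest even though its period is not.
```

Under RM, A (period 20) would outrank nothing differently here except B, which shows where the two assignments diverge: when deadlines equal periods, DM and RM coincide.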

  40. Dynamic scheduling • Priorities are calculated at run time • Earliest deadline first (EDF): • the task with the shortest time to its deadline is executed first • any schedule that meets all deadlines can be transformed into an EDF schedule • EDF is an optimal scheduling algorithm • it is not simple to show that a system is schedulable under EDF in the general case • RMA is enough in practice • tasks need not be periodic!
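The EDF dispatch decision itself is tiny: among the ready jobs, pick the earliest absolute deadline. A sketch (the function name and the job tuples are hypothetical):

```python
def edf_pick(ready):
    """EDF dispatch: run the ready job with the earliest absolute
    deadline. ready: list of (job, abs_deadline) pairs."""
    return min(ready, key=lambda j: j[1])[0] if ready else None

# edf_pick([("T1", 25), ("T2", 9), ("T3", 17)]) == "T2"
```

Note that this must be re-evaluated whenever a job arrives or completes, which is exactly why EDF is a dynamic-priority scheme: the relative ordering of two tasks can differ from one instant to the next.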

  41. EDF: Example • at t = 0 only T1 can run • at t = 4, T2 has higher priority because d2 < d1 • at t = 5, T2 has higher priority than T3 • at t = 7, T2 stops and T3 has the highest priority • at t = 17, T3 stops and T1 executes

  42. EDF: test of schedulability • In the general case we must simulate the system to show that it is schedulable • a finiteness criterion bounds the length of the simulation • Let h(t) be the sum of the execution times of those task instances that have an absolute deadline ≤ t; the system is schedulable iff h(t) ≤ t for all t
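The demand function h(t) can be evaluated directly for synchronous periodic task sets; the sketch below checks h(t) ≤ t at every deadline up to a caller-supplied horizon (in general the hyperperiod or a busy-period bound; function names are made up):

```python
from math import floor

def demand(tasks, t):
    """Processor demand h(t): total execution time of all jobs with
    both release and absolute deadline in [0, t].
    tasks: list of (c, period, rel_deadline), synchronous start at 0."""
    return sum((floor((t - d) / p) + 1) * c
               for c, p, d in tasks if t >= d)

def edf_feasible(tasks, horizon):
    """EDF feasibility: h(t) <= t must hold at every absolute deadline
    up to the horizon."""
    deadlines = sorted({k * p + d for c, p, d in tasks
                        for k in range(horizon // p + 1)
                        if k * p + d <= horizon})
    return all(demand(tasks, t) <= t for t in deadlines)
```

Only the deadlines need checking because h(t) is a step function that changes value exactly at absolute deadlines.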

  43. Aperiodic activities • In practice not every event is periodic • aperiodic activities are often I/O activities • connected to the interrupt lines of the CPU • a processor interrupt always has higher priority than the scheduler of the operating system • How can the aperiodic events be handled so that the system remains schedulable?

  44. Aperiodic activities • Consider 5 different implementation models: • interrupt handled at hardware priority level • cyclic polling (handled at the OS scheduler’s priority level) • combination of 1 and 2 (deferred aperiodic handling) • interrupt handled partially at hardware level and partially at OS level with a deferred server • interrupt handled partially at hardware level and partially at OS level with a sporadic server

  45. Hardware handling • the interrupt is handled wholly by the interrupt service routine (ISR)

  46. Cyclic polling (figure: an event arriving during period n is processed in period n+1, before its deadline D) • one task is allocated for handling the events • the hardware must buffer events • requirements: • at most one event should happen between two polling cycles • each event must be handled before its deadline • if the polling task does not execute at the highest priority, its period should be at most half of the event’s deadline

  47. Cyclic polling

  48. Deferred handling • Partition the handling into two parts: • ISR: handles what must be done immediately: • reading of hardware registers • .... • a ”deferred handler” (DH) is then activated to handle the rest of the work

  49. Deferred Server • Deferred handling is still aperiodic • idea: make the DH periodic • server process with: • a period • a limited execution capacity per period, to prevent the server from consuming all processing capacity • after the capacity is consumed, it is restored at the replenishment time • the deferred server restores its full capacity at the beginning of each period

  50. Deferred server (figure: events A, B, C and D served by the server) • execution capacity 1 (one event per period) • period 4 • the server is replenished at t = 0 but first runs at t = 3, i.e. its phase = 3 • RMA analysis requires that every task be ready to execute at the beginning of its period
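The replenishment behaviour of the slides’ deferred server (period 4, capacity 1) can be sketched as a discrete-time simulation (class and method names are made up; one `tick` = one time unit):

```python
class DeferredServer:
    """Sketch of a deferred server: a periodic server with an execution
    budget that is restored to full at the start of each period."""

    def __init__(self, period, capacity):
        self.period, self.capacity = period, capacity
        self.budget, self.next_replenish = capacity, period

    def tick(self, t, want_to_run):
        """Advance to time t; return True if the server may serve one
        time unit of aperiodic work at this tick."""
        if t >= self.next_replenish:        # start of a new period
            self.budget = self.capacity
            self.next_replenish += self.period
        if want_to_run and self.budget > 0:
            self.budget -= 1                # consume one budget unit
            return True
        return False
```

With period 4 and capacity 1, a server asked to run at every tick serves at t = 0 and t = 4 but is budget-limited in between, which is exactly the capacity limitation the slide describes.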
