
Real Time Scheduling

Presentation Transcript


  1. Real Time Scheduling • Terminology • Defining the fixed-priority scheduler • Ref: Real-Time Systems, Chapters 2 & 3

  2. Jobs and Tasks • A job is a unit of work that is scheduled and executed by a system • e.g. computation of a control-law, computation of an FFT on sensor data, • transmission of a data packet, retrieval of a file • A task is a set of related jobs which jointly provide some function • e.g. the set of jobs that constitute the “maintain constant altitude” task, • keeping an airplane flying at a constant altitude • e.g. your system should sample the temperature sensor reading every 1 ms

  3. Execution Time • A job Ji will execute for time ei • This is the amount of time required to complete the execution of Ji when it executes alone and has all the resources it needs • The value of ei depends upon the complexity of the job and the speed of the processor on which it is scheduled; it may change for a variety of reasons: • Conditional branches • Cache memories and/or pipelines • Compression (e.g. MPEG video frames) • Execution times fall into an interval [ei−, ei+]; assume that we know this interval for every hard real-time job, but not necessarily the actual ei • Terminology: (x, y] is an interval starting immediately after x, continuing up to and including y • Often, we can validate a system using ei+ for each job; we assume ei = ei+ and ignore the interval lower bound • Inefficient, but a safe bound on execution time

  4. Release and Response Time • Release time – the instant in time when a job becomes available for execution • May not be exact: release time jitter, so ri is in the interval [ri−, ri+] • A job can be scheduled and executed at any time at, or after, its release time, provided its data and control dependency conditions are met • Response time – the length of time from the release time of the job to the time instant when it completes • Not the same as execution time, since the job may not execute continually

  5. Deadline and Timing Constraint • Completion time – the instant at which a job completes execution • Relative deadline – the maximum allowable response time of a job • Absolute deadline – the instant of time by which a job is required to be completed (often called simply the deadline) • absolute deadline = release time + relative deadline • The feasible interval for a job Ji is the interval (ri, di] • Deadlines are examples of timing constraints

  6. Example • A system to monitor and control a heating furnace • The system takes 20 ms to initialize when turned on • After initialization, every 100 ms, the system: • Samples and reads the temperature sensor • Computes the control-law for the furnace to process temperature readings and determine the correct flow rates of fuel, air and coolant • Adjusts flow rates to match computed values • The periodic computations can be stated in terms of release times of the jobs computing the control-law: J0, J1, …, Jk, … • The release time of Jk is 20 + (k × 100) ms

  7. Example • Suppose each job must complete before the release of the next job: • Jk’s relative deadline is 100 ms • Jk’s absolute deadline is 20 + ((k + 1) × 100) ms • Alternatively, each control-law computation may be required to finish sooner – i.e. the relative deadline is smaller than the time between jobs, allowing some slack time for other jobs
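
As a quick sanity check (not part of the original slides), here is a minimal Python sketch of the furnace example's arithmetic; the function names are hypothetical:

```python
# Hypothetical sketch of the furnace example: release times and deadlines.
INIT = 20      # ms of initialization after power-on
PERIOD = 100   # ms between consecutive control-law jobs

def release_time(k):
    """Release time of job J_k: 20 + k * 100 ms."""
    return INIT + k * PERIOD

def absolute_deadline(k, relative_deadline=PERIOD):
    """Absolute deadline = release time + relative deadline."""
    return release_time(k) + relative_deadline

for k in range(3):
    print(f"J{k}: release={release_time(k)} ms, deadline={absolute_deadline(k)} ms")
# J0: release=20 ms, deadline=120 ms
# J1: release=120 ms, deadline=220 ms
# J2: release=220 ms, deadline=320 ms
```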

  8. Hard vs Soft Real Time • Tardiness – how late a job completes relative to its deadline • lateness = completion time – absolute deadline • tardiness = max(0, lateness) • If a job must never miss its deadline, then the system is described as hard real-time • A timing constraint is hard if the failure to meet it is considered a fatal fault; this definition is based upon the functional criticality of a job • A timing constraint is hard if the usefulness of the results falls off abruptly (or may even go negative) at the deadline • A timing constraint is hard if the user requires validation (formal proof or exhaustive simulation) that the system always meets its timing constraint • Examples: Flight control; Railway signalling; Anti-lock brakes • If some deadlines can be missed occasionally, with acceptably low probability, then the system is described as soft real-time • This is a statistical constraint • Examples: Stock trading system; DVD player; Mobile phone
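
A tiny illustrative sketch of these two definitions (not from the slides):

```python
def lateness(completion_time, absolute_deadline):
    # Negative if the job finishes early, positive if it finishes late.
    return completion_time - absolute_deadline

def tardiness(completion_time, absolute_deadline):
    # A job that meets its deadline has zero tardiness.
    return max(0, lateness(completion_time, absolute_deadline))

print(tardiness(95, 100))   # 0  (finished 5 early)
print(tardiness(104, 100))  # 4  (finished 4 late)
```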

  9. Hard vs Soft Real Time • Note: there may be no advantage in completing a job early • It is often better to keep jitter (variation in timing) in the response times of a stream of jobs small • Timing constraints can be expressed in many ways: • Deterministic • e.g. the relative deadline of every control-law computation is 50 ms; the response time of at most 1 out of 5 consecutive control-law computations exceeds 50ms • Probabilistic • e.g. the probability of the response time exceeding 50 ms is less than 0.2 • In terms of some usefulness function • e.g. the usefulness of every control-law computation is at least 0.8 [In practice, usually deterministic constraints, since easy to validate]

  10. Modeling Periodic Tasks • A set of jobs that are executed repeatedly at regular time intervals can be modelled as a periodic task • Each periodic task Ti is a sequence of jobs Ji,1, Ji,2, …, Ji,n • The phase of a task Ti is the release time ri,1 of the first job Ji,1 in the task. It is denoted by ϕi (“phi”) • The period pi of a task Ti is the minimum length of all time intervals between release times of consecutive jobs • The execution time ei of a task Ti is the maximum execution time of all jobs in the periodic task • The period and execution time of every periodic task in the system are known with reasonable accuracy at all times • The hyper-period of a set of periodic tasks is the least common multiple of their periods: H = lcm(pi) for i = 1, 2, …, n • The time after which the pattern of job release/execution times starts to repeat
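
For illustration, a minimal sketch of the hyper-period computation; it assumes Python 3.9+ for math.lcm, and the periods are made-up values:

```python
from math import lcm

periods = [4, 5, 20]   # example periods p_i (hypothetical)
H = lcm(*periods)      # hyper-period = least common multiple of all periods
print(H)               # 20
```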

  11. Modeling Periodic Tasks • The ratio ui = ei / pi is the utilization of task Ti • The fraction of time a periodic task with period pi and execution time ei keeps a processor busy • The total utilization of a system is the sum of the utilizations of all tasks in the system: U = Σ ui • We will usually assume the relative deadline for the jobs in a task is equal to the period of the task • It can sometimes be shorter than the period, to allow slack time [Many useful, real-world systems fit this model, and it is easy to reason about such periodic tasks]
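
A matching sketch for utilization (not from the slides), using the same (period, execution time) pairs as the example on slide 22:

```python
# Each task is (period p_i, execution time e_i); values are illustrative.
tasks = [(4, 1), (5, 2), (20, 5)]

utilizations = [e / p for (p, e) in tasks]   # u_i = e_i / p_i
U = sum(utilizations)                        # total utilization
print(utilizations)  # [0.25, 0.4, 0.25]
print(U)             # 0.9
```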

  12. Responding to External Events • Many real-time systems are required to respond to external events • The jobs resulting from such events are sporadic or aperiodic jobs • A sporadic job has a hard deadline • An aperiodic job has either a soft deadline or no deadline • The release time for sporadic or aperiodic jobs can be modelled as a random variable with some probability distribution, A(x) • A(x) gives the probability that the release time of the job is not later than x • Alternatively, if discussing a stream of similar sporadic/aperiodic jobs, A(x) can be viewed as the probability distribution of their inter-release times [Note: sometimes the terms arrival time (or inter-arrival time) are used instead of release time, due to their common use in queuing theory]

  13. Modeling Sporadic and Aperiodic Tasks • A set of jobs that execute at irregular time intervals comprises a sporadic or aperiodic task • Each sporadic/aperiodic task is a stream of sporadic/aperiodic jobs • The inter-arrival times between consecutive jobs in such a task may vary widely according to probability distribution A(x) and can be arbitrarily small • Similarly, the execution times of jobs are identically distributed random variables with some probability distribution B(x) [Sporadic and aperiodic tasks occur in some real-time systems, and greatly complicate modelling and reasoning]

  14. Approaches to Real-Time Scheduling • Different classes of scheduling algorithm are used in real-time systems: • Clock-driven • Primarily used for hard real-time systems where all properties of all jobs are known at design time, such that offline scheduling techniques can be used • Weighted round-robin • Primarily used for scheduling real-time traffic in high-speed, switched networks • Priority-driven • Primarily used for more dynamic real-time systems with a mix of time-based and event-based activities, where the system must adapt to changing conditions and events

  15. Clock-Driven Scheduling • Decisions about which jobs execute at what times are made at specific time instants • These instants are chosen before the system begins execution • Usually regularly spaced, implemented using a periodic timer interrupt • The scheduler wakes after each interrupt, schedules the jobs to execute for the next period, then blocks itself until the next interrupt • Typically in clock-driven systems: • All parameters of the real-time jobs are fixed and known • A schedule of the jobs is computed off-line and is stored for use at runtime; as a result, scheduling overhead at run-time can be minimized • Simple and straightforward, but not flexible

  16. Round Robin Scheduling • Regular round-robin scheduling is commonly used for scheduling time-shared applications • Every job joins a FIFO queue when it is ready for execution • When the scheduler runs, it schedules the job at the head of the queue to execute for at most one time slice • Sometimes called a quantum – typically O(tens of ms) • If the job has not completed by the end of its quantum, it is preempted and placed at the end of the queue • When there are n ready jobs in the queue, each job gets one slice every n time slices (n time slices is called a round)
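
A minimal sketch of this round-robin behaviour (not from the slides); jobs carry only their remaining execution time, and each ready job gets at most one quantum per round:

```python
from collections import deque

def round_robin(jobs, quantum):
    """jobs: dict mapping job name -> remaining execution time.
    Returns the order in which jobs complete."""
    queue = deque(jobs.items())                # FIFO ready queue
    finished = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= min(quantum, remaining)   # run for at most one time slice
        if remaining > 0:
            queue.append((name, remaining))    # preempted: back of the queue
        else:
            finished.append(name)
    return finished

print(round_robin({"A": 3, "B": 1, "C": 2}, quantum=1))  # ['B', 'C', 'A']
```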

  17. Priority-Driven Scheduling • Assign priorities to jobs, based on some algorithm • Make scheduling decisions based on the priorities, when events such as releases and job completions occur • Priority scheduling algorithms are event-driven • Jobs are placed in one or more queues; at each event, the ready job with the highest priority is executed • The assignment of jobs to priority queues, along with rules such as whether preemption is allowed, completely defines a priority scheduling algorithm • Priority-driven algorithms make locally optimal decisions about which job to run • Locally optimal scheduling decisions are often not globally optimal • Priority-driven algorithms never intentionally leave any resource idle • Leaving a resource idle is not locally optimal
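
The dispatching rule can be sketched with a priority queue; this is illustrative only, and assumes smaller numbers mean higher priority:

```python
import heapq

class ReadyQueue:
    """Ready jobs ordered by priority; the highest-priority job runs next."""
    def __init__(self):
        self._heap = []

    def release(self, priority, job):
        # Event: a job is released and joins the ready queue.
        heapq.heappush(self._heap, (priority, job))

    def dispatch(self):
        # Event: the processor is free; pick the highest-priority ready job.
        return heapq.heappop(self._heap)[1] if self._heap else None

rq = ReadyQueue()
rq.release(2, "J2")
rq.release(1, "J1")
print(rq.dispatch())  # J1 (smaller number = higher priority)
```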

  18. Priority Driven Scheduling • Most scheduling algorithms used in non real-time systems are priority-driven • First-In-First-Out • Last-In-First-Out • Shortest-Execution-Time-First • Longest-Execution-Time-First • Real-time priority scheduling assigns priorities based on deadline or some other timing constraint: • Earliest deadline first • Least slack time first • Etc.

  19. Rate Monotonic Algorithm (RM) • Assumptions • Fixed-priority algorithms • Rate monotonic • Deadline monotonic • Chapter 6 of the Liu book

  20. Assumptions • Focus on well-known priority-driven algorithms for scheduling periodic tasks on a single processor • Assume a restricted periodic task model: • A fixed number of independent periodic tasks exist • Jobs comprising those tasks: • Are ready for execution as soon as they are released • Can be pre-empted at any time • Never suspend themselves • New tasks are only admitted after an acceptance test; they may be rejected • The period of a task is defined as the minimum inter-release time of jobs in the task • There are no aperiodic or sporadic tasks • Scheduling decisions are made immediately upon job release and completion • Algorithms are event driven and never intentionally leave a resource idle • Context switch overhead is negligibly small; unlimited priority levels

  21. Fixed- and Dynamic-Priority Algorithms • A priority-driven scheduler is an on-line scheduler • It does not pre-compute a schedule of tasks/jobs: instead assigns priorities to jobs when released, places them on a run queue in priority order • When pre-emption is allowed, a scheduling decision is made whenever a job is released or completed • At each scheduling decision time, the scheduler updates the run queues and executes the job at the head of the queue • Jobs in a task may be assigned the same priority (task level fixed priority) or different priorities (task level dynamic-priority) • The priority of each job is usually fixed (job level fixed-priority); but some systems can vary the priority of a job after it has started (job level dynamic-priority) • Job level dynamic-priority usually very inefficient

  22. Fixed-Priority Scheduling: Rate-Monotonic • Best known fixed-priority algorithm is rate-monotonic scheduling • Assigns priorities to tasks based on their periods • The shorter the period, the higher the priority • The rate (of job releases) is the inverse of the period, so jobs with higher rate have higher priority • Widely studied and used • For example, consider a system of 3 tasks: • T1 = (4, 1) ⇒ rate = 1/4 • T2 = (5, 2) ⇒ rate = 1/5 • T3 = (20, 5) ⇒ rate = 1/20 • Relative priorities: T1 > T2 > T3
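
A small sketch (not from the slides) of rate-monotonic priority assignment for exactly this task set; the (period, execution time) tuples follow the slide:

```python
tasks = {"T1": (4, 1), "T2": (5, 2), "T3": (20, 5)}   # (period, execution time)

# Rate-monotonic: sort by period; shorter period = higher priority.
rm_order = sorted(tasks, key=lambda name: tasks[name][0])
for prio, name in enumerate(rm_order, start=1):
    print(f"{name}: period={tasks[name][0]}, priority {prio}")
# T1: period=4, priority 1   (highest)
# T2: period=5, priority 2
# T3: period=20, priority 3
```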

  23. Example: Rate-Monotonic Scheduling
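
The original slide here is a timing diagram that is not reproduced in this transcript. As a rough substitute, the following sketch simulates the same three-task set under preemptive rate-monotonic scheduling for one hyper-period (20 time units), assuming all phases are zero, integer execution times, and that deadlines are met:

```python
def rm_schedule(tasks, horizon):
    """tasks: list of (period, exec_time) tuples.
    Returns, per time unit, the index of the running task (or None if idle)."""
    tasks = sorted(tasks)                    # shorter period = higher RM priority
    remaining = [0] * len(tasks)             # remaining work per task
    timeline = []
    for t in range(horizon):
        for i, (p, e) in enumerate(tasks):
            if t % p == 0:                   # new job of task i released at time t
                remaining[i] = e
        running = next((i for i, r in enumerate(remaining) if r > 0), None)
        if running is not None:
            remaining[running] -= 1          # run the highest-priority ready task
        timeline.append(running)
    return timeline

print(rm_schedule([(4, 1), (5, 2), (20, 5)], horizon=20))
```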

  24. Fixed-Priority Scheduling: Deadline-Monotonic • The deadline-monotonic algorithm assigns task priority according to relative deadlines – the shorter the relative deadline, the higher the priority • When the relative deadline of every task matches its period, rate-monotonic and deadline-monotonic give identical results • When the relative deadlines are arbitrary: • Deadline-monotonic can sometimes produce a feasible schedule in cases where rate-monotonic cannot • But rate-monotonic always fails when deadline-monotonic fails ⇒ Deadline-monotonic is preferred to rate-monotonic
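
A deadline-monotonic assignment differs from the rate-monotonic sketch above only in its sort key (relative deadline rather than period); the task set below is hypothetical:

```python
# Each task is (period, relative deadline, execution time); values are illustrative.
tasks = {"T1": (4, 3, 1), "T2": (5, 4, 2), "T3": (20, 6, 5)}

# Deadline-monotonic: shorter relative deadline = higher priority.
dm_order = sorted(tasks, key=lambda name: tasks[name][1])
print(dm_order)  # ['T1', 'T2', 'T3']
```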

  25. Schedulability Tests • Simulating schedules is both tedious and error-prone… can we demonstrate correctness without working through the schedule? • Yes, in some cases! This is a schedulability test • A test to demonstrate that all deadlines are met, when scheduled using a particular algorithm • An efficient schedulability test can be used as an on-line acceptance test; clearly, exhaustive simulation is too expensive for this!

  26. Schedulable Utilization

  27. Schedulable Utilization of RM

  28. Schedulable Utilization of RM

  29. Schedulable Utilization of RM
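
Slides 27-29 presumably present the classic Liu and Layland bound: n independent, preemptible periodic tasks with Di = pi are schedulable by RM if U = Σ ei/pi ≤ n(2^(1/n) − 1). A small sketch of that sufficient test (the helper names are mine):

```python
def rm_utilization_bound(n):
    """Liu & Layland schedulable utilization for n tasks under RM."""
    return n * (2 ** (1 / n) - 1)

def rm_schedulable(tasks):
    """tasks: list of (period, exec_time). Sufficient (not necessary) test."""
    n = len(tasks)
    U = sum(e / p for p, e in tasks)
    return U <= rm_utilization_bound(n)

print(rm_utilization_bound(3))                     # ~0.7798
print(rm_schedulable([(4, 1), (5, 2), (20, 5)]))   # False: U = 0.9 exceeds the bound,
                                                   # so this sufficient test is inconclusive
```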

  30. Exercise your brain muscle • Assume that you have a system of periodic, independent, pre-emptible tasks to be scheduled on a single processor, T = {Ti} for i = 1..n, where Ti = (φi, pi, ei, Di) for each i.

  31. (c) Assume that pi = Di for each i. What is the schedulability condition of the Rate Monotonic algorithm for this system? Is this a necessary and sufficient condition? (d) Assume that pi ≤ Di for each i. Under what conditions will the schedulability of the Rate Monotonic algorithm for this system be identical to that stated in response to part (c) above?

  32. Answer • Assume that pi = Di for each i. What is the schedulability condition of the Rate Monotonic algorithm for this system? Is this a necessary and sufficient condition? If the utilization inequality U = Σ ei/pi ≤ n(2^(1/n) − 1) is satisfied, then the system is schedulable using RM. This is a sufficient condition; there may be systems which do not satisfy this inequality that still yield feasible schedules.

  33. Answer • Assume that pi ≤ Di for each i. Under what conditions will the schedulability of the Rate Monotonic algorithm for this system be identical to that stated in response to part (c) above? If the system is simply periodic – i.e. if for every pair of tasks Ti and Tk in T such that pi < pk, pk is an integer multiple of pi. This is a necessary and sufficient condition. (See page 153: Schedulable Utilization of Subsets of Simply Periodic Tasks.)
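
A sketch of the simply periodic check just described (the helper is hypothetical, not from the book):

```python
def is_simply_periodic(periods):
    """True if, for every pair p_i < p_k, p_k is an integer multiple of p_i."""
    ps = sorted(periods)
    return all(pk % pi == 0 for i, pi in enumerate(ps) for pk in ps[i + 1:])

print(is_simply_periodic([10, 20, 40]))  # True
print(is_simply_periodic([10, 25, 40]))  # False: 25 is not a multiple of 10
```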
