
Multiprocessor and Real-Time Scheduling

  1. Multiprocessor and Real-Time Scheduling Fred Kuhns

  2. Classifications of Multiprocessors • Loosely coupled multiprocessor • each processor has its own memory and I/O channels (e.g., clusters) • Functionally specialized processors • such as an I/O processor • controlled by a master processor • Tightly coupled multiprocessor • processors share main memory • controlled by the operating system

  3. Synchronization Granularity • Independent Parallelism - no explicit synchronization • Coarse and Very Coarse Parallelism - similar to running many processes on one processor • good when there is infrequent interaction among processes • Medium Parallelism - parallel processing or multitasking within a single application • requires relatively frequent synchronization • Fine Parallelism • parallelism inherent in a single instruction stream

  4. Scheduling Design Issues • Assignment of processes to processors • Use of multiprogramming on individual processors • Actual dispatching of a process

  5. Processor Assignment • Processor Pool - treat processors as a pooled resource and assign processes to processors • Static assignment • Dynamic assignment • Dispatch queue arrangements: • Per-processor queue - dedicate a short-term queue to each processor • Global queue - schedule to any available processor

  6. Basic Design Architectures • Peer architecture • Operating system can execute on any processor • Each processor does self-scheduling • Complicates the operating system • Make sure two processors do not choose the same process • Master/slave architecture • Key kernel functions always run on a particular processor • Master is responsible for scheduling • Slave sends service request to the master • Disadvantages • Failure of master brings down whole system • Master can become a performance bottleneck

  7. Threads • A thread executes separately from the rest of the process • An application can be a set of threads that cooperate and execute concurrently in the same address space • Threads running concurrently on separate processors may yield dramatic gains in performance

  8. Multiprocessor Thread Scheduling • Load sharing • global (thread) Ready Queue • Dedicated processor assignment • threads assigned to specific processor • Dynamic scheduling • number of threads altered during execution • Gang scheduling • a set of related threads co-scheduled on set of processors at same time

  9. Load Sharing • Load is distributed evenly across the processors • Assures no processor is idle • No centralized scheduler required • Use global queues

  10. Disadvantages of Load Sharing • Central queue needs mutual exclusion • may be a bottleneck when more than one processor looks for work at the same time • Preemptive threads are unlikely to resume execution on the same processor • cache use is less efficient • unlikely that threads of one program will gain access to the processors simultaneously
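The mutual-exclusion problem above is easy to see in code. Below is a minimal sketch in C with POSIX threads (the queue layout and the integer thread IDs are invented for illustration): every processor's dispatcher must take the same lock before it can pick work, so the single global queue serializes the schedulers.

    #include <pthread.h>
    #include <stdio.h>

    /* Global ready queue shared by all processors; integers stand in for
     * runnable threads. No overflow handling -- illustration only. */
    #define QLEN 128
    static int queue[QLEN];
    static int head, tail;
    static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;

    static void enqueue(int tid)
    {
        pthread_mutex_lock(&qlock);
        queue[tail++ % QLEN] = tid;
        pthread_mutex_unlock(&qlock);
    }

    static int dispatch(void)            /* called by any idle processor */
    {
        int tid = -1;
        pthread_mutex_lock(&qlock);      /* every processor contends here */
        if (head != tail)
            tid = queue[head++ % QLEN];
        pthread_mutex_unlock(&qlock);
        return tid;                      /* -1: queue was empty */
    }

    int main(void)
    {
        enqueue(1);
        enqueue(2);
        int a = dispatch(), b = dispatch();
        printf("dispatched %d then %d\n", a, b);
        return 0;
    }

Build with -pthread. Per-processor queues avoid this single point of contention, at the price of explicit load balancing.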

  11. Gang Scheduling • Simultaneous scheduling of threads that make up a single process • Useful for applications where performance severely degrades when any part of the application is not running • Threads often need to synchronize with each other

  12. Dedicated Processor Assignment • When application is scheduled, its threads are assigned to a processor • Some processors may be idle • Avoids process switching
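As a rough sketch of dedicating a processor to a thread, the code below pins a worker to CPU 0 using pthread_setaffinity_np, a Linux/glibc extension (the _np suffix marks it as non-portable); the worker function and the CPU number are chosen arbitrarily for the example. Build with -pthread.

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    /* Restrict the calling thread to a single CPU. */
    static int pin_to_cpu(int cpu)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    }

    static void *worker(void *arg)
    {
        int cpu = *(int *)arg;
        if (pin_to_cpu(cpu) == 0)
            printf("worker bound to CPU %d\n", cpu);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        int cpu = 0;                      /* CPU 0 exists on every machine */
        pthread_create(&t, NULL, worker, &cpu);
        pthread_join(t, NULL);
        return 0;
    }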

  13. Dynamic Scheduling • The number of threads in a process is altered dynamically by the application • Operating system adjusts the load to improve utilization • assign idle processors • a new arrival may be assigned to a processor used by a job currently holding more than one processor • hold the request until a processor is available • new arrivals may be given a processor before existing running applications

  14. What is a Real-Time System? • Real-time systems have been defined as: "those systems in which the correctness of the system depends not only on the logical result of the computation, but also on the time at which the results are produced"; • J. Stankovic, "Misconceptions About Real-Time Computing," IEEE Computer, 21(10), October 1988.

  15. Real-Time Characteristics • Real-time systems often comprise a controlling system, a controlled system, and the environment. • Controlling system: acquires information about the environment using sensors and controls the environment with actuators. • Timing constraints derived from the physical impact of the controlling system's activities. Hard and soft constraints. • Periodic tasks: time-driven, recurring at regular intervals. • Aperiodic tasks: event-driven.

  16. Typical Real-Time System • Figure: block diagram of a controlling system connected to the controlled system and its environment through multiple sensors (inputs) and actuators (outputs).

  17. Some Definitions • Timing constraint: constraint imposed on timing behavior of a job: hard or soft. • Release Time: Instant of time job becomes available for execution. If all jobs are released when the system begins execution, then there is said to be no release time • Deadline: Instant of time a job's execution is required to be completed. If deadline is infinity, then job has no deadline. Absolute deadline is equal to release time plus relative deadline • Response time: Length of time from release time to instant job completes.

  18. Hard versus Soft • Hard: failure to meet a constraint is a fatal fault. A validated system always meets its timing constraints. • Deterministic constraints • Probabilistic constraints • Constraints in terms of some usefulness function. • Soft: late completion is undesirable but generally not fatal. No validation, or only a demonstration that jobs meet some statistical constraint. Occasional missed deadlines or aborted executions are usually considered tolerable. Often specified in probabilistic terms.

  19. Real-Time Systems • Control of laboratory experiments • Process control plants • Robotics • Air traffic control • Telecommunications • Games

  20. Schedules and Scheduling • Jobs scheduled and allocated resources based on a set of scheduling algorithms and access control protocols. • Scheduler: Module implementing scheduling algorithms • Schedule: assignment of all jobs to available processors, produced by scheduler. • Valid schedule: • every processor assigned to at most one job at a time • every job assigned to at most one processor at a time • no job scheduled before its release time • Total amount of processor time assigned to every job is equal to its maximum or actual execution time

  21. Definitions • Feasible schedule: every job starts at or after its release time and completes by its deadline • Schedulable: a set of jobs is schedulable according to an algorithm if it always produces a feasible schedule • Optimal: a scheduling algorithm is optimal if it always produces a feasible schedule whenever such a schedule exists • Tardiness: zero if completion time <= deadline, otherwise completion time - deadline • Lateness: difference between completion time and deadline; can be negative if the job finishes early
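The timing quantities defined here and in slide 17 reduce to a few arithmetic relations: absolute deadline = release time + relative deadline, response time = completion - release, lateness = completion - absolute deadline, tardiness = max(0, lateness). A small sketch with made-up values and arbitrary time units:

    #include <stdio.h>

    typedef struct {
        double release;       /* instant the job becomes available */
        double rel_deadline;  /* relative deadline                 */
        double completion;    /* instant the job finishes          */
    } job_t;

    static double abs_deadline(const job_t *j) { return j->release + j->rel_deadline; }
    static double response(const job_t *j)     { return j->completion - j->release; }
    static double lateness(const job_t *j)     { return j->completion - abs_deadline(j); }
    static double tardiness(const job_t *j)    { double l = lateness(j); return l > 0 ? l : 0; }

    int main(void)
    {
        job_t j = { .release = 10, .rel_deadline = 30, .completion = 45 };
        printf("deadline=%.0f response=%.0f lateness=%.0f tardiness=%.0f\n",
               abs_deadline(&j), response(&j), lateness(&j), tardiness(&j));
        return 0;
    }

For this sample job the output is deadline=40 response=35 lateness=5 tardiness=5.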

  22. Performance Measures • Miss rate: percentage of jobs executed but completed late • Loss rate: percentage of jobs discarded • Invalid rate: sum of the miss and loss rates • Makespan: if all jobs have the same release time and deadline, then makespan = response time of the last job to complete • Max or average response times • Max or average tardiness/lateness

  23. Real-Time Scheduling • Considerations: • schedulability analysis performed • static or dynamic schedulability analysis • analysis result includes schedule • Scheduling Paradigms • static table-driven - static analysis and schedule • static priority driven - static analysis, no (explicit) schedule, priorities. • dynamic planning based - feasibility check at run-time, schedule produced • dynamic best effort - No feasibility check, attempts to meet deadlines but no guarantee.

  24. RTOS • Primary function areas • Process management and synchronization • Memory management • Interprocess communication • I/O • Categories of RTOS • small proprietary • RT extensions to commercial timesharing systems • research operating systems

  25. Small Proprietary OSes • Typically, kernel provides • fast context switch, small size, quick interrupt response time (predictable?), minimize interrupt disable times, memory partitions (no virtual memory, why?), special sequential files (why?) • Timing • bounded execution time of most primitives • real-time clock • priority scheduling mechanism • special alarms and timeout

  26. RT Extensions to Commercial OSes • Advanced SW development environments • Problems: • optimized for average case • assigns resources on demand • ignore application specific information • independent CPU scheduling and resource allocation

  27. Research Operating System • Develop extended or new process models • RT synchronization mechanisms • facilitate timing analysis • fault tolerance • multiprocessor or distributed support • real-time micro-kernel

  28. Clock Driven • Scheduling time instants: when scheduling decisions are made • clock-driven - instants predefined offline • table-driven - table precomputed offline
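A table-driven (cyclic executive) scheduler can be sketched as a precomputed array of task pointers walked at fixed frame boundaries. Everything below -- the task bodies, the table, the frame length -- is invented for illustration; a real system would block on a hardware timer rather than usleep.

    #include <stdio.h>
    #include <unistd.h>          /* usleep, standing in for a frame timer */

    static void task_a(void) { puts("A"); }   /* hypothetical task bodies */
    static void task_b(void) { puts("B"); }

    typedef void (*task_fn)(void);

    /* Offline-precomputed table: which task runs in each frame of the major cycle. */
    static task_fn schedule_table[] = { task_a, task_b, task_a, task_a };
    #define FRAMES (sizeof(schedule_table) / sizeof(schedule_table[0]))

    int main(void)
    {
        for (int cycle = 0; cycle < 3; cycle++)        /* run a few major cycles */
            for (size_t f = 0; f < FRAMES; f++) {
                schedule_table[f]();                   /* dispatch at the predefined instant */
                usleep(100 * 1000);                    /* wait out the 100 ms frame */
            }
        return 0;
    }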

  29. Weighted Round Robin • Round-robin scheduling - time-sharing systems • Weighted round robin: associate a weight with each job • weight w = number of time slices • adjusting weights allocates CPU shares
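A minimal sketch of the weight-to-share relation, with invented jobs and weights: each job receives w consecutive time slices per round, so over one round its CPU share is w divided by the sum of all weights.

    #include <stdio.h>

    struct job { const char *name; int weight; };     /* weight = slices per round */

    int main(void)
    {
        struct job jobs[] = { { "j1", 3 }, { "j2", 1 }, { "j3", 2 } };
        int n = 3, total = 0;
        for (int i = 0; i < n; i++)
            total += jobs[i].weight;

        /* One full round of weighted round robin. */
        for (int i = 0; i < n; i++)
            for (int s = 0; s < jobs[i].weight; s++)
                printf("slice -> %s (share %.2f)\n", jobs[i].name,
                       (double)jobs[i].weight / total);
        return 0;
    }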

  30. Priority Driven • Work-conserving: the highest-priority runnable job is always dispatched to an available CPU • resources are never left idle while there are runnable jobs • Priority list, preemptions, and other rules describe the algorithm completely • Example algorithms: EDF, FIFO, LIFO, etc. • preemption versus non-preemption

  31. Periodic Task Parameters • Figure: timeline of a periodic task P alternating between processing (execution time C) and idle time within each period T • Utilization U = C / T

  32. Deadline Scheduling • Real-time applications are not concerned with raw speed but with completing tasks on time • Scheduling tasks by earliest deadline minimizes the fraction of tasks that miss their deadlines • includes new tasks and the amount of time needed for existing tasks

  33. Deadline Scheduling • Information used • Ready time • Starting deadline or completion deadline • Processing time • not used, supplied by the application, or measured by the OS as an exponential average • Resource requirements • Priority • relative importance of the task • Subtask scheduler • mandatory (hard deadline) or optional subtasks

  34. Deadline Scheduling • For a given preemption strategy and deadline type (starting or completion), scheduling the task with the earliest deadline minimizes missed deadlines • Starting deadlines - non-preemptive • Ending deadlines - preemptive • Utilization for periodic tasks • U = C1/T1 + … + Cn/Tn ≤ 1
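The earliest-deadline rule itself fits in a few lines. The toy simulation below (task names, arrival times, execution times, and completion deadlines are all made up) runs in unit time steps and at each step dispatches the ready job with the nearest absolute deadline, preempting at slice boundaries.

    #include <stdio.h>

    typedef struct { const char *name; int arrival, exec, deadline, done; } job_t;

    int main(void)
    {
        job_t jobs[] = {
            /* name, arrival, execution time, absolute (completion) deadline */
            { "A", 0, 2, 6, 0 }, { "B", 1, 3, 5, 0 }, { "C", 2, 1, 9, 0 }
        };
        int n = 3, remaining = n;

        for (int t = 0; remaining > 0; t++) {
            int pick = -1;
            for (int i = 0; i < n; i++)          /* ready job with earliest deadline */
                if (!jobs[i].done && jobs[i].arrival <= t &&
                    (pick < 0 || jobs[i].deadline < jobs[pick].deadline))
                    pick = i;
            if (pick < 0) continue;              /* idle until the next arrival */
            printf("t=%d run %s\n", t, jobs[pick].name);
            if (--jobs[pick].exec == 0) { jobs[pick].done = 1; remaining--; }
        }
        return 0;
    }

With these numbers the schedule is A, B, B, B, A, C, and every job meets its deadline.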

  35. Scheduling of Real-Time Tasks • Figure: timeline (0-120) for five tasks A-E showing arrival times, execution requirements, and starting deadlines, and the resulting service order under earliest-deadline scheduling (one task misses its starting deadline).

  36. Scheduling of Real-Time Tasks • Figure: the same five tasks A-E scheduled by earliest deadline with unforced idle times; the resulting service order meets every starting deadline.

  37. Scheduling of Real-Time Tasks • Figure: the same five tasks A-E scheduled first-come-first-served (FCFS); two tasks miss their starting deadlines.

  38. Rate Monotonic Scheduling (RMS) • Parameters: • input: period T and execution time C • derive: utilization U = C/T • Priority increases with decreasing period • For n tasks, require: • C1/T1 + … + Cn/Tn ≤ n(2^(1/n) - 1) • Note: EDF U ≤ 1 • as n → ∞, n(2^(1/n) - 1) → ln 2 ≈ 0.693

  39. Rate Monotonic Bounds
      n    n(2^(1/n) - 1)
      1    1.0
      2    0.828
      3    0.779
      4    0.756
      5    0.743
      6    0.734
      ∞    0.693
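The bounds in this table come straight from the formula n(2^(1/n) - 1); the sketch below recomputes them and applies the test to a made-up three-task set (compile with -lm).

    #include <stdio.h>
    #include <math.h>

    /* Liu & Layland bound: n periodic tasks are RMS-schedulable if
     * U = sum(Ci/Ti) <= n * (2^(1/n) - 1). */
    static double rms_bound(int n) { return n * (pow(2.0, 1.0 / n) - 1.0); }

    int main(void)
    {
        for (int n = 1; n <= 6; n++)             /* reproduces the table above */
            printf("n=%d bound=%.3f\n", n, rms_bound(n));

        double C[] = { 1, 2, 3 }, T[] = { 4, 8, 20 };   /* invented task set */
        double U = 0;
        for (int i = 0; i < 3; i++)
            U += C[i] / T[i];
        printf("U=%.3f, bound=%.3f, %s\n", U, rms_bound(3),
               U <= rms_bound(3) ? "schedulable by RMS" : "bound test fails");
        return 0;
    }

For this task set U = 0.65, below the three-task bound of 0.779, so RMS can schedule it.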

  40. UNIX Scheduling - Solaris • Set of 160 priority levels divided into three priority classes • Basic kernel is not preemptive • Global priority values and scheduling sequence:
      Real-time     100-159   (scheduled first)
      Kernel         60-99
      Time-shared     0-59    (scheduled last)

  41. Linux Scheduling • Classes • SCHED_FIFO: First-in-first-out RT threads • SCHED_RR: Round-robin RT threads • SCHED_OTHER: non-RT threads
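A process can ask for one of these classes through the standard Linux/POSIX sched_setscheduler() call. The sketch below moves the caller into SCHED_FIFO at an arbitrarily chosen priority of 50; it normally needs root privileges (or CAP_SYS_NICE) to succeed.

    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>

    int main(void)
    {
        struct sched_param sp = { .sched_priority = 50 };   /* arbitrary RT priority */

        /* pid 0 refers to the caller. */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            fprintf(stderr, "sched_setscheduler: %s\n", strerror(errno));
            return 1;
        }
        printf("now running under SCHED_FIFO, priority %d\n", sp.sched_priority);
        return 0;
    }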
