
Multiprocessor and Real-Time Scheduling


Presentation Transcript


  1. Multiprocessor and Real-Time Scheduling Chapter 10

  2. Classifications of Multiprocessor Systems • Loosely coupled multiprocessor • Each processor has its own memory and I/O channels • Functionally specialized processors • Such as I/O processor • Controlled by a master processor • Tightly coupled multiprocessing • Processors share main memory • Controlled by operating system

  3. Scheduling and Synchronization • Scheduling concurrent processes has to take into account the synchronization of processes. • Scheduling decisions for one process may affect another process if the two processes are synchronized. • Synchronization granularity means the frequency of synchronization between processes in a system

  4. Types of synchronization granularity • Fine – parallelism inherent in a single instruction stream • Medium – parallel processing or multitasking within a single application • Coarse – multiprocessing of concurrent processes in a multiprogramming environment • Very coarse – distributed processing across network nodes to form a single computing environment • Independent – multiple unrelated processes

  5. Independent Parallelism • Separate application or job • No synchronization • Same service as a multiprogrammed uniprocessor • Example: time-sharing system

  6. Coarse and Very Coarse-Grained Parallelism • Synchronization among processes at a very gross level (e.g. at the beginning and at the end) • Good for concurrent processes running on a multiprogrammed uniprocessor • Can be supported on a multiprocessor with little change

  7. Medium-Grained Parallelism • Parallel processing or multitasking within a single application • Single application is a collection of threads • Threads usually interact frequently

  8. Fine-Grained Parallelism • Highly parallel applications • Specialized and fragmented area

  9. Scheduling design issues • Assignment of processes to processors • Use of multiprogramming on individual processors • Actual dispatching of a process • The scheduling depends on • degree of granularity • number of processors available

  10. Issue #1: Assignment of Processes to Processors • Treat processors as a pooled resource and assign processes to processors on demand • Static or dynamic assignment? • Master/slave or peer architecture?

  11. Static vs. dynamic assignment • Static: dedicate short-term queue for each processor • Advantages: Less overhead • Disadvantages: Processor could be idle while another processor has a backlog • Dynamic: use a global queue and schedule processes to any available processor
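
To make the static/dynamic trade-off concrete, here is a minimal Python sketch, assuming a fixed pool of four processors; the names (per_cpu_queues, dispatch_static, and so on) are illustrative only and do not come from any real kernel.

from collections import deque
import threading

# Static assignment: a dedicated short-term queue per processor.
per_cpu_queues = [deque() for _ in range(4)]   # one ready queue per CPU (4 CPUs assumed)

def dispatch_static(cpu_id):
    # A CPU only ever takes work from its own queue; it may sit idle
    # even while another CPU's queue has a backlog.
    q = per_cpu_queues[cpu_id]
    return q.popleft() if q else None

# Dynamic assignment: a single global queue shared by all processors.
global_queue = deque()
global_lock = threading.Lock()                 # the global queue is shared state

def dispatch_dynamic(cpu_id):
    # Any idle CPU takes the next ready process, so no CPU idles while work
    # exists, at the cost of contention on the shared queue's lock.
    with global_lock:
        return global_queue.popleft() if global_queue else None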

  12. Type of architecture - master/slave or peer? • Master/slave architecture • Key kernel functions always run on a particular processor • Master is responsible for scheduling • Advantages: simple approach • Disadvantages • Failure of master brings down whole system • Master can become a performance bottleneck

  13. Type of architecture - master/slave or peer? • Peer architecture • Operating system can execute on any processor • Each processor does self-scheduling • Disadvantages • Complicates the operating system • Must ensure that two processors do not choose the same process

  14. Issue #2: Multiprogramming on a single processor • Depends on the synchronization granularity • A. Coarse-grained parallelism: • processor utilization considerations • Each individual processor should be able to switch among a number of processes • B. Medium-grained parallelism: • application performance considerations: • An application that consists of a number of threads may run poorly unless all of its threads are available to run simultaneously

  15. Issue #3: Process dispatching • Uniprocessor systems: • sophisticated scheduling algorithms • Multiprocessor systems • processes: simple algorithms work best (see Fig. 10.1) • threads: the main consideration is the synchronization between threads

  16. Process Scheduling • Basic method used: • dynamic assignment • Single queue for all processes • Multiple queues are used for priorities • All queues feed to the common pool of processors • The specific scheduling discipline is much less important with multiprocessors than with one processor (see Fig. 10.1)

  17. Threads • An application can be a set of threads that cooperate and execute concurrently in the same address space • Threads running on separate processors can yield a dramatic gain in performance

  18. Multiprocessor Thread Scheduling • Load sharing • Processes are not assigned to a particular processor • Gang scheduling • Simultaneous scheduling of threads that make up a single process

  19. Multiprocessor Thread Scheduling • Dedicated processor assignment • Threads are assigned to a specific processor • Dynamic scheduling • Number of threads can be altered during course of execution

  20. Load Sharing • One of the most commonly used schemes in current multiprocessors, despite the potential disadvantages • Load is distributed evenly across the processors • No centralized scheduler required • Use global queues

  21. Versions of Load Sharing • First come first served (FCFS) • Smallest number of threads first: • priority queue, with highest priority given to threads from jobs with the smallest number of unscheduled threads. • Preemptive smallest number of threads first: • An arriving job with a smaller number of threads than an executing job will preempt threads belonging to the scheduled job.
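
As a rough illustration of the "smallest number of threads first" discipline (non-preemptive variant only), the sketch below keeps the global queue as a priority heap keyed on the number of unscheduled threads, with arrival order breaking ties; Job and all field names here are assumptions made for this example, not part of any real scheduler.

import heapq

class Job:
    def __init__(self, name, unscheduled_threads):
        self.name = name
        self.unscheduled_threads = unscheduled_threads   # threads not yet dispatched

ready = []   # shared global queue, kept as a min-heap

def enqueue(job, arrival_order):
    # Highest priority = fewest unscheduled threads; FCFS order breaks ties.
    heapq.heappush(ready, (job.unscheduled_threads, arrival_order, job))

def next_thread():
    # An idle processor pulls one thread from the highest-priority job.
    if not ready:
        return None
    count, order, job = heapq.heappop(ready)
    job.unscheduled_threads -= 1
    if job.unscheduled_threads > 0:                      # job still has threads to run
        heapq.heappush(ready, (job.unscheduled_threads, order, job))
    return job.name

enqueue(Job("render", 4), arrival_order=0)
enqueue(Job("compile", 2), arrival_order=1)
print(next_thread())   # 'compile' -- the job with fewest unscheduled threads goes first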

  22. Disadvantages of Load Sharing • The common queue needs mutual exclusion • May be a bottleneck when more than one processor looks for work at the same time • Preempted threads are unlikely to resume execution on the same processor • Cache use is less efficient • If all threads are treated as a common pool, the threads of a single program are unlikely to gain access to processors at the same time

  23. Gang Scheduling • Simultaneous scheduling of threads that make up a single process • Useful for applications where performance severely degrades when any part of the application is not running • Threads often need to synchronize with each other

  24. Gang scheduling • Processor allocation • N processors, M applications • each application has fewer than N threads • options: • each application is given 1/M of the available time on the N processors • the given time slice is proportional to the number of threads in each application
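
A small numeric sketch of these two options, under the assumption of four processors and two applications (one with four threads, one with a single thread): dividing time uniformly leaves more capacity idle than weighting the time slices by thread count.

N = 4                          # processors (assumed)
threads = {"A": 4, "B": 1}     # threads per application (assumed)
M = len(threads)

def wasted_capacity(share):
    # Fraction of total processor capacity left idle, given each
    # application's share of time on all N processors.
    return sum(share[app] * (N - min(threads[app], N)) / N for app in threads)

uniform  = {app: 1 / M for app in threads}                                  # each app gets 1/M of time
weighted = {app: threads[app] / sum(threads.values()) for app in threads}   # proportional to thread count

print(round(wasted_capacity(uniform), 3))    # 0.375 -> 37.5% of capacity idle
print(round(wasted_capacity(weighted), 3))   # 0.15  -> 15% of capacity idle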

  25. Scheduling Groups

  26. Dedicated Processor Assignment • When an application is scheduled, its threads are assigned to a set of processors, one-to-one, for the duration of the application. • Some processors may be idle • No multiprogramming of processors

  27. Dedicated Processor Assignment • Motivation • In a highly parallel system processor utilization is no longer so important • Total avoidance of process switching

  28. Dedicated Processor Assignment • Problem: • If the number of active threads is greater than the number of processors, there will be more frequent thread preemption and rescheduling

  29. Dedicated Processor Assignment • Comparison with gang scheduling: • Similarities - threads are assigned to processors at the same time • Differences - in dedicated processor assignment threads do not change processors.

  30. Gang scheduling and Dedicated Processor Assignment • More similar to memory assignment than to uniprocessor scheduling: • In memory management, pages are assigned to processes • In gang scheduling and dedicated processor assignment, processors are assigned to processes.

  31. Dynamic Scheduling • Both the operating system and the application are involved in making scheduling decisions. • The operating system is responsible for partitioning the processors among jobs. • Each job uses the processors currently in its partition to execute some subset of its runnable tasks by mapping these tasks to threads.

  32. Dynamic Scheduling • On request for a processor • If there are idle processors, use them to satisfy the request. • Otherwise, if it is a new arrival, allocate it a single processor (by taking one away from any job currently allocated more than one processor). • If any portion of the request cannot be satisfied, it remains outstanding

  33. Upon release of one or more processors • Scan the current queue of unsatisfied requests for processors. • Assign a single processor to each job in the list that currently has no processors (i.e., to all waiting new arrivals). • Then scan the list again, allocating the rest of the processors on an FCFS basis.
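
A rough Python sketch of the allocation policy described in slides 32 and 33; the data structures (idle, allocation, waiting) and the handling of partial requests are assumptions made for illustration, not taken from a particular system.

from collections import deque

idle = {0, 1, 2, 3}        # processors not currently assigned to any job (assumed pool)
allocation = {}            # job name -> set of processors in its partition
waiting = deque()          # outstanding requests: (job, processors still wanted)

def request(job, wanted, new_arrival=False):
    # A job asks for `wanted` processors.
    grant = set()
    while wanted and idle:                       # idle processors satisfy the request first
        grant.add(idle.pop())
        wanted -= 1
    if not grant and new_arrival:
        # New arrival and nothing idle: take one processor away from any job
        # currently holding more than one.
        for procs in allocation.values():
            if len(procs) > 1:
                grant.add(procs.pop())
                wanted -= 1
                break
    allocation.setdefault(job, set()).update(grant)
    if wanted:                                   # any unmet portion remains outstanding
        waiting.append((job, wanted))

def release(procs):
    # One or more processors are released by a departing job.
    idle.update(procs)
    # First pass: one processor to each waiting job that currently has none.
    for i, (job, wanted) in enumerate(waiting):
        if idle and wanted and not allocation.get(job):
            allocation[job] = {idle.pop()}
            waiting[i] = (job, wanted - 1)
    # Second pass: hand out the remaining processors on an FCFS basis.
    while idle and waiting:
        job, wanted = waiting.popleft()
        while wanted and idle:
            allocation.setdefault(job, set()).add(idle.pop())
            wanted -= 1
        if wanted:                               # still unsatisfied: keep it at the head
            waiting.appendleft((job, wanted))
            break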

  34. Real-Time Systems • Correctness of the system depends not only on the logical result of the computation but also on the time at which the results are produced • Tasks or processes attempt to control or react to events that take place in the outside world • These events occur in “real time” and process must be able to keep up with them

  35. Real-Time Systems • Control of laboratory experiments • Process control plants • Robotics • Air traffic control • Telecommunications • Military command and control systems

  36. Types of Tasks • A. With respect to urgency • Hard real-time task • must meet its deadline • Soft real-time task • deadline is desirable but not mandatory

  37. Types of Tasks • B. With respect to execution • An aperiodic task has a deadline by which it must finish or start, or may have a constraint on both • A periodic task executes once per period T, or exactly T units apart

  38. Characteristics of Real-Time Operating Systems • Areas of concern • Determinism • Responsiveness • User control • Reliability • Fail-soft operation

  39. Characteristics of Real-Time Operating Systems • Determinism • Operations are performed at fixed, predetermined times or within predetermined time intervals • Concerned with how long the operating system delays before acknowledging an interrupt

  40. Characteristics of Real-Time Operating Systems • Responsiveness • How long, after acknowledgment, it takes the operating system to service the interrupt • Includes the amount of time to begin execution of the interrupt service routine (ISR) • Includes the amount of time to perform the ISR

  41. Characteristics of Real-Time Operating Systems • User control • User specifies priority • Specifies paging policy • What processes must always reside in main memory • What disk scheduling algorithms to use • Rights of processes

  42. Characteristics of Real-Time Operating Systems • Reliability • Degradation of performance may have catastrophic consequences • Fail-soft operation: the ability to fail in such a way as to preserve as much capability and data as possible • Attempt either to correct the problem or minimize its effects while continuing to run

  43. Features of Real-Time Operating Systems • Small size (with its associated minimal functionality) • Fast process or thread switch • Ability to respond to external interrupts quickly • Use of special sequential files that can accumulate data at a fast rate

  44. Features of Real-Time Operating Systems • Ability to respond to external interrupts quickly • Multitasking with interprocess communication tools such as semaphores, signals, and events • Preemptive scheduling based on priority • Minimization of intervals during which interrupts are disabled • Primitives to delay tasks for a fixed amount of time and to pause/resume tasks • Special alarms and time-outs

  45. Real-Time Scheduling Approaches • When to dispatch • How to schedule, which depends on: • (1) whether a system performs schedulability analysis, • (2) if it does, whether it is done statically or dynamically, and • (3) whether the result of the analysis itself produces a schedule or plan according to which tasks are dispatched at run time • Types of tasks: periodic/aperiodic • deadline scheduling • rate monotonic scheduling
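
As a hedged sketch of the two algorithms named above (the Task fields and example task set are assumptions, not the textbook's notation), the code below assigns rate-monotonic priorities by period, picks the earliest-deadline task for deadline scheduling, and applies the standard Liu-Layland utilization bound n*(2^(1/n) - 1) as a simple static schedulability test.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    period: float          # T: a new instance is released every `period` time units
    exec_time: float       # C: worst-case computation time per instance
    next_deadline: float   # absolute deadline of the current instance

# Hypothetical task set for illustration only.
tasks = [Task("sensor", 20, 5, 20), Task("control", 50, 10, 50), Task("log", 100, 20, 100)]

# Rate monotonic scheduling: static priorities, shorter period = higher priority.
rm_priority_order = sorted(tasks, key=lambda t: t.period)
print([t.name for t in rm_priority_order])   # ['sensor', 'control', 'log']

# Deadline scheduling (earliest-deadline style): at each decision point,
# dispatch the ready task whose current absolute deadline is nearest.
def pick_earliest_deadline(ready):
    return min(ready, key=lambda t: t.next_deadline)
print(pick_earliest_deadline(tasks).name)    # 'sensor'

# Static schedulability analysis under rate monotonic: guaranteed schedulable
# if total utilization does not exceed n*(2**(1/n) - 1) (about 0.78 for n = 3).
n = len(tasks)
utilization = sum(t.exec_time / t.period for t in tasks)   # 0.25 + 0.20 + 0.20 = 0.65
print(utilization <= n * (2 ** (1 / n) - 1))               # True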

  46.–49. Scheduling of a Real-Time Process: When to dispatch (figure slides)

  50. Real-Time Scheduling: How to schedule • Static table-driven • Static priority-driven preemptive • Dynamic planning-based • Dynamic best effort
