Chapter 5, CPU Scheduling


Presentation Transcript


  1. Chapter 5, CPU Scheduling

  2. 5.1 Basic Concepts • The goal of multi-programming is to maximize the utilization of the CPU as a system resource by having a process running on it at all times • Supporting multi-programming means encoding the ability in the O/S to switch between currently running jobs • Switching between jobs can be non-preemptive or preemptive

  3. Simple, non-preemptive scheduling means that a new process can be scheduled on the CPU only when the current job has begun waiting (for I/O, for example) • Non-preemptive means that the O/S will not preempt the currently running job in favor of another one • I/O is the classic case of waiting, and it is the scenario customarily used to explain scheduling concepts

  4. The CPU-I/O Burst Cycle • A CPU burst refers to the period of time when a given process occupies the CPU before making an I/O request or taking some other action which causes it to wait • CPU bursts are of varying length and can be plotted in a distribution by length

  5. Overall system activity can also be plotted as a distribution of CPU and other activity bursts by processes • The distribution of CPU burst lengths tends to be exponential or hyperexponential

  6. The CPU scheduler = the short term scheduler • Under non-preemptive scheduling, when the processor becomes idle, a new process has to be picked from the ready queue and have the CPU allocated to it • Note that the ready queue doesn’t have to be FIFO, although that is a simple, initial assumption • It does tend to be some sort of linked data structure with a queuing discipline which implements the scheduling algorithm
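
A minimal sketch of that idea, assuming a hypothetical PCB type and a FIFO discipline (none of these names come from a real kernel): the ready queue is a linked list of PCBs, and swapping in a different enqueue/dequeue pair is what changes the scheduling algorithm.

```c
#include <stddef.h>

/* Hypothetical PCB with an intrusive link for the queue it is on. */
struct pcb {
    int pid;
    struct pcb *next;   /* next PCB in the ready queue */
};

static struct pcb *ready_head = NULL;   /* front: next process to dispatch */
static struct pcb *ready_tail = NULL;   /* back: where new arrivals go     */

/* FIFO discipline: enqueue at the tail. */
void ready_enqueue(struct pcb *p) {
    p->next = NULL;
    if (ready_tail)
        ready_tail->next = p;
    else
        ready_head = p;
    ready_tail = p;
}

/* Dequeue at the head. Replacing these two functions (e.g., with a
   sorted insert) is how a different queuing discipline would be
   implemented. */
struct pcb *ready_dequeue(void) {
    struct pcb *p = ready_head;
    if (p) {
        ready_head = p->next;
        if (!ready_head)
            ready_tail = NULL;
    }
    return p;
}
```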

  7. Preemptive scheduling • Preemptive scheduling is more advanced than non-preemptive scheduling. • Preemptive scheduling can take into account factors besides I/O waiting when deciding which job should be given the CPU. • A list of scheduling points will be given next. • It is worthwhile to understand what each of these points means.

  8. Scheduling decisions can be made at these points: • A process goes from the run state to the wait state (e.g., I/O wait, wait for a child process to terminate) • A process goes from the run state to the ready state (e.g., as the result of an interrupt) • A process goes from the wait state to the ready state (e.g., I/O completes) • A process terminates

  9. Scheduling has to occur at points 1 and 4. • If it only occurs then, this is non-preemptive or cooperative scheduling • If scheduling is also done at points 2 and 3, this is preemptive scheduling
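
The distinction can be sketched as a hypothetical kernel hook (the names are illustrative, not any real kernel's API): a non-preemptive scheduler acts only on events 1 and 4, where the running process gives up the CPU voluntarily; a preemptive one also acts on events 2 and 3.

```c
/* The four transition points above, as an enum. */
enum sched_event {
    RUN_TO_WAIT,    /* 1: running process blocks (I/O, wait for child) */
    RUN_TO_READY,   /* 2: running process is interrupted               */
    WAIT_TO_READY,  /* 3: a waiting process's I/O completes            */
    TERMINATE       /* 4: running process exits                        */
};

/* Non-preemptive: schedule only when the CPU is given up voluntarily
   (points 1 and 4). Preemptive: also schedule at points 2 and 3.     */
int should_schedule(enum sched_event e, int preemptive) {
    if (e == RUN_TO_WAIT || e == TERMINATE)
        return 1;
    return preemptive;
}
```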

  10. Points 1 and 4 are given in terms of the job that will give up the CPU. • Points 2 and 3 relate to which process might become available to run and could therefore preempt the currently running process.

  11. Historically, simple systems existed without timers, just like they existed without mode bits, for example • It is possible to write a simple, non-preemptive operating system for multi-programming without multi-tasking • Without a timer or other signaling, jobs could only be switched when one was waiting for I/O

  12. However, recall that much of the discussion in the previous chapters assumed the use of interrupts, timers, etc., to trigger a context switch • This implies preemptive scheduling • Preemptive schedulers are more difficult to write than non-preemptive schedulers, and they raise complex technical questions

  13. The problem with preemption comes from data sharing between processes • If two concurrent processes share data, preemption of one or the other can lead to inconsistent data, lost updates in the shared data, etc.

  14. Note that kernel data structures hold state for user processes. • The user processes do not directly dictate what the kernel data structures contain, but by definition, the kernel holds the state of more than one user process

  15. This means that the kernel data structures themselves have the characteristic of data shared between processes • As a consequence, in order to be correctly implemented, preemptive scheduling has to prevent inconsistent state in the kernel data structures

  16. Concurrency is rearing its ugly head again, even though it still hasn’t been thoroughly explained. • The point is that it will become apparent that concurrency is a condition that is inherent to a preemptive scheduler. • Therefore, a complete explanation of operating systems eventually requires a complete explanation of concurrency issues.

  17. The idea that the O/S is based on shared data about processes can be explained concretely by considering the movement of PCB’s from one queue to another • If an interrupt occurs while one system process is moving a PCB, and the PCB has been removed from one queue, but not yet added to another, this is an error state • In other words, the data maintained internally by the O/S is now wrong/broken/incorrect…
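
A sketch of that error state, with the queue type and its two helpers assumed rather than taken from any real kernel:

```c
struct pcb;                                         /* opaque PCB       */
struct queue;                                       /* a queue of PCBs  */
void queue_remove(struct queue *q, struct pcb *p);  /* assumed helpers, */
void queue_add(struct queue *q, struct pcb *p);     /* bodies not shown */

/* UNSAFE: between the two calls, p is on no queue at all. An interrupt
   arriving in that window sees inconsistent kernel data: the process
   has effectively vanished from the O/S's bookkeeping. */
void move_pcb_unsafe(struct queue *from, struct queue *to, struct pcb *p) {
    queue_remove(from, p);
    /* <-- an interrupt here leaves the queues wrong/broken/incorrect */
    queue_add(to, p);
}
```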

  18. Possible solutions to the problem • So the question becomes, can the scheduler be coded so that inconsistent queue state cannot occur? • One solution would be to only allow switching on I/O blocks. • The idea is that interrupts will be queued rather than handled instantly (a queuing mechanism will be needed)

  19. This means that processes will run to a point where they can be moved to an I/O queue and the next process will not be scheduled until that happens • This solves the problem of concurrency in preemptive scheduling in a mindless way • This solution basically means backing off to non-preemptive scheduling

  20. Other solutions to the problem • 1. Only allow switching after a system call runs to completion. • In other words, make kernel processes uninterruptible. • If the code that moves PCB’s around can’t be interrupted, inconsistent state can’t result. • This solution also assumes a queuing system for interrupts.

  21. 2. Make certain code segments in the O/S uninterruptible. • This is the same idea as the previous one, but with finer granularity. • It increases concurrency because interrupts can at least occur in parts of kernel code, not just at the ends of kernel code calls.
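
A sketch of this finer-grained solution, where cli()/sti() stand in for whatever interrupt-masking primitive the hardware provides (for example, the x86 CLI/STI instructions) and the queue helpers are again assumed: only the short critical segment is uninterruptible, not the whole system call.

```c
struct pcb;
struct queue;
void queue_remove(struct queue *q, struct pcb *p);  /* assumed helpers */
void queue_add(struct queue *q, struct pcb *p);
void cli(void);   /* stand-in: mask interrupts   */
void sti(void);   /* stand-in: unmask interrupts */

void move_pcb_safe(struct queue *from, struct queue *to, struct pcb *p) {
    cli();                  /* close the window: no interrupt can run */
    queue_remove(from, p);  /* p is briefly on no queue...            */
    queue_add(to, p);       /* ...but nothing can observe that state  */
    sti();                  /* reopen; pending interrupts fire now    */
}
```

The shorter the masked segment, the less it delays interrupt handling, which is exactly the real-time concern raised on the next two slides.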

  22. Note that interruptibility of the kernel is related to the problem of real time operating systems • If certain code blocks are not interruptible, you are not guaranteed a fixed, maximum response time to any particular system request or interrupt that you generate

  23. You may have to wait an indeterminate amount of time while the uninterruptible code finishes processing • This violates the requirement for a hard real-time system

  24. Scheduling and the dispatcher • The dispatcher = the module called by the short term scheduler which • Switches context • Switches to user mode • Jumps to the proper location in the user program to resume it • Speed is desirable. • Dispatch latency refers to the time lost in the switching process
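
The dispatcher's three duties can be sketched as follows; the helper routines are assumptions standing in for hardware-specific context-switch code, not any particular kernel's API.

```c
struct pcb;
void save_context(struct pcb *p);           /* assumed helpers: the real */
void load_context(struct pcb *p);           /* work is hardware-specific */
void return_to_user_mode(struct pcb *p);

void dispatch(struct pcb *current, struct pcb *next) {
    save_context(current);       /* 1. switch context                 */
    load_context(next);
    return_to_user_mode(next);   /* 2. switch to user mode, and       */
                                 /* 3. resume at next's saved program */
                                 /*    counter, part of its context   */
}
```

Dispatch latency is the time spent inside dispatch(); it is pure overhead from the processes' point of view, which is why speed matters.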

  25. Scheduling criteria • There are various algorithms for scheduling • There are also various criteria for evaluating them • Performance is always a trade-off • You can never maximize all of the criteria with one scheduling algorithm

  26. Criteria • CPU utilization. The higher, the better. 40%-90% is realistic • Throughput = processes completed / unit time • Turnaround time = total time for any single process to complete

  27. Waiting time = total time spent waiting in O/S queues • Response time = time between submission and first visible sign of response to the request • This is important in interactive systems
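
The time-based criteria can be made concrete with a small worked example; the bookkeeping fields and the sample numbers are invented purely for illustration. For CPU-only examples like the ones later in this chapter, waiting time is simply turnaround time minus the CPU time consumed.

```c
#include <stdio.h>

/* One completed process, with assumed bookkeeping fields. */
struct proc_stats {
    double arrival;      /* time of submission          */
    double first_run;    /* first time it got the CPU   */
    double completion;   /* time it finished            */
    double service;      /* total CPU time it consumed  */
};

int main(void) {
    /* Sample numbers, invented for illustration. */
    struct proc_stats p = { 0.0, 2.0, 10.0, 5.0 };

    double turnaround = p.completion - p.arrival;  /* total time in system */
    double waiting    = turnaround - p.service;    /* time spent in queues */
    double response   = p.first_run - p.arrival;   /* until first response */

    printf("turnaround=%.1f waiting=%.1f response=%.1f\n",
           turnaround, waiting, response);
    return 0;
}
```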

  28. Depending on the criterion, you may want to • Strive to attain an absolute maximum or minimum (utilization, throughput) • Minimize or maximize the average (turnaround, waiting) • Minimize or maximize the variance (for time-sharing, minimize the variance, for example)

  29. 5.3 Scheduling Algorithms • 5.3.1 First-Come, First-Served (FCFS) • 5.3.2 Shortest-Job-First (SJF) • 5.3.3 Priority • 5.3.4 Round Robin (RR) • 5.3.5 Multilevel Queue • 5.3.6 Multilevel Feedback Queue

  30. Reality involves a steady stream of many, many CPU bursts • Reality involves balancing a number of different performance criteria or measures • It is worth keeping these facts in the back of your mind • Examples of the different scheduling algorithms will be given below, but these examples will be based on only a few processes and a limited number of bursts

  31. The examples will be given as a short list of processes and their burst times. • This information will be turned into simple Gantt charts which make it possible to visualize the situation. • The scheduling algorithms will be evaluated and compared based on a simple measure of average waiting time. • It is relatively simple to see the values needed for the calculation using a Gantt chart.

  32. FCFS Scheduling • The name, first-come, first-served, should be self-explanatory • This is an older, simpler scheduling algorithm • It is non-preemptive • It is not suitable for interactive time sharing • It can be implemented with a simple FIFO (ready, or scheduling) queue of PCB’s

  33. Consider the following scenario • Process: burst length • P1: 24 ms. • P2: 3 ms. • P3: 3 ms.

  34. With arrival order P1, P2, P3, the Gantt chart runs P1 from 0 to 24, P2 from 24 to 27, and P3 from 27 to 30 • Avg. wait time = (0 + 24 + 27) / 3 = 17 ms.

  35. Compare with a different arrival order: • P2, P3, P1

  36. Now the Gantt chart runs P2 from 0 to 3, P3 from 3 to 6, and P1 from 6 to 30 • Avg. wait time = (0 + 3 + 6) / 3 = 3 ms.
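
Both results can be checked with a small simulation. Under FCFS with every burst available at t = 0, each process simply waits for the sum of the bursts ahead of it; the code below is a sketch reproducing the two averages.

```c
#include <stdio.h>

/* FCFS, all bursts ready at t = 0: process i waits for the sum of
   the bursts scheduled ahead of it. */
static double fcfs_avg_wait(const int burst[], int n) {
    int t = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += t;   /* process i has waited until time t */
        t += burst[i];     /* then it runs for its full burst   */
    }
    return (double)total_wait / n;
}

int main(void) {
    int order1[] = { 24, 3, 3 };   /* arrival order P1, P2, P3 */
    int order2[] = { 3, 3, 24 };   /* arrival order P2, P3, P1 */
    printf("P1,P2,P3: %.0f ms\n", fcfs_avg_wait(order1, 3));  /* 17 ms */
    printf("P2,P3,P1: %.0f ms\n", fcfs_avg_wait(order2, 3));  /* 3 ms  */
    return 0;
}
```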

  37. Additional comments on performance analysis • It is clear that average wait time varies greatly depending on the arrival order of processes and their varying burst lengths • As a consequence, it is also possible to conclude that for any given set of processes and burst lengths, arbitrary FCFS scheduling does not result in a minimal or optimal average wait time

  38. FCFS scheduling is subject to the convoy effect • There is the initial arrival order of process bursts • After that, the processes enter the ready queue after I/O waits, etc. • Let there be one CPU bound job (long CPU burst) • Let there be many I/O bound jobs (short CPU bursts)

  39. Scenario: • The CPU bound job holds the CPU • The other jobs finish their I/O waits and enter the ready queue • Each of the other jobs is scheduled, FCFS, and is quickly finished with the CPU due to an I/O request • The CPU bound job then takes the CPU again

  40. CPU utilization may be high (good) under this scheme • The CPU bound job is a hog • The I/O bound jobs spend a lot of their time waiting • Therefore, the average wait time will tend to be high • Recall that FCFS is not preemptive, so once the jobs have entered, scheduling only occurs when a job voluntarily enters a wait state due to an I/O request or some other condition

  41. SJF Scheduling • The name, shortest-job-first, is not quite self-explanatory • Various ideas involved deserve explanation • Recall that these thumbnail examples of scheduling are based on bursts, not the overall job time • For scheduling purposes, it is the length of the next burst that is important

  42. In a sense, SJF is really shortest-next-CPU-burst-first scheduling • There is no perfect way of predicting the length of the next burst • Implementing SJF in practice involves devising formulas for predicting the next burst length based on past performance
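
The formula most operating systems textbooks give for this is exponential averaging: predict the next burst as tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n), where t(n) is the measured length of the last burst, tau(n) was its prediction, and 0 <= alpha <= 1 weights recent history against the past. A minimal sketch:

```c
/* Exponential averaging: alpha near 1 trusts the most recent burst,
   alpha near 0 trusts accumulated history. alpha = 0.5 is a common
   middle-of-the-road choice. */
static double predict_next_burst(double last_actual, double last_guess,
                                 double alpha) {
    return alpha * last_actual + (1.0 - alpha) * last_guess;
}
```

For example, with alpha = 0.5, a previous guess of 10 ms and an actual burst of 6 ms yield a next prediction of 8 ms.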

  43. SJF can be a non-preemptive algorithm. • When one process relinquishes the CPU for an I/O wait, for example, all other processes are available for scheduling at that moment, which the examples treat as time t = 0 • The job with the shortest predicted next CPU burst would be chosen for scheduling

  44. SJF can also be implemented as a preemptive algorithm. • Jobs enter the ready queue at different times. • These may be new jobs that have just entered the system or jobs that have finished waiting because the system has handled their I/O request

  45. Let a job enter the ready queue while another job is running. • Let the newly ready job have a shorter predicted CPU burst time than the predicted remaining CPU burst time of the currently running job. • Then the newly ready job preempts the current job. • Under the preemptive scenario a more descriptive name for the algorithm would be “shortest remaining time first” scheduling.
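
The whole preemption rule fits in one comparison. This sketch assumes the predicted times are tracked elsewhere; the names are illustrative.

```c
/* Shortest-remaining-time-first in one decision: when a job becomes
   ready, it preempts iff its predicted next burst is shorter than the
   running job's predicted remaining time. */
int should_preempt(double newly_ready_predicted_burst,
                   double running_predicted_remaining) {
    return newly_ready_predicted_burst < running_predicted_remaining;
}
```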

  46. A non-preemptive example will be given first • Its performance characteristics will be compared with FCFS scheduling • Then there will be a discussion of how to predict burst times • This will be followed by a preemptive example

  47. Non-preemptive Example • Consider the following scenario • Process: burst length • P1: 6 ms. • P2: 8 ms. • P3: 7 ms. • P4: 3 ms. • Recall that for a miniature scenario like this, the assumption is that all jobs (bursts) are available for scheduling at time t = 0.

  48. SJF order: P4, P1, P3, P2 • The Gantt chart runs P4 from 0 to 3, P1 from 3 to 9, P3 from 9 to 16, and P2 from 16 to 24 • Average wait time = (0 + 3 + 9 + 16) / 4 = 7 ms.
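
Since every burst is available at t = 0, non-preemptive SJF is just FCFS applied to the bursts sorted in ascending order; the sketch below reproduces the 7 ms average.

```c
#include <stdio.h>
#include <stdlib.h>

static int ascending(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int burst[] = { 6, 8, 7, 3 };                    /* P1, P2, P3, P4 */
    qsort(burst, 4, sizeof burst[0], ascending);     /* -> 3, 6, 7, 8  */

    int t = 0, total_wait = 0;
    for (int i = 0; i < 4; i++) {
        total_wait += t;   /* waits: 0, 3, 9, 16 */
        t += burst[i];
    }
    printf("avg wait = %.0f ms\n", (double)total_wait / 4);  /* 7 ms */
    return 0;
}
```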

  49. In general, the average wait time for SJF scheduling is lower than the average wait time for FCFS scheduling of the same processes • This is illustrated by scheduling the jobs of this example in FCFS order and comparing the average wait time • Although the assumption is that all bursts are available at time t = 0, for the comparison with FCFS, the arrival order in the ready queue makes a difference • Let the arrival order be represented by the subscripts • The example using FCFS scheduling is given next
