Lecture Topics: 11/15 • CPU scheduling: • Scheduling goals and algorithms
Review: Process State • A process can be in one of several states (new, ready, running, waiting, terminated). • The OS keeps track of process state by maintaining a queue of PCBs for each state. • The ready queue contains PCBs of processes that are waiting to be assigned to the CPU.
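A minimal sketch of how these per-state queues of PCBs might look in C (the field names are illustrative, not taken from any particular kernel):

```c
/* Sketch only: a minimal PCB with a state field and a link for the
   per-state queue it currently sits on. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int           pid;
    proc_state_t  state;
    struct pcb   *next;    /* next PCB in the same state queue */
} pcb_t;

pcb_t *ready_queue;        /* PCBs of processes waiting to be assigned the CPU */
pcb_t *waiting_queue;      /* PCBs of processes blocked on I/O or other events */
```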
The Scheduling Problem • Need to share the CPU among the multiple processes in the ready queue. • The OS needs to decide which process gets the CPU next. • Once a process is selected, the OS must do the work of dispatching it onto the CPU (a context switch).
How Scheduling Works • The scheduler is responsible for choosing a process from the ready queue. The scheduling algorithm implemented by this module determines how process selection is done. • The scheduler hands the selected process off to the dispatcher which gives the process control of the CPU.
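A structural sketch of that split, building on the PCB sketch above (select_next and context_switch are hypothetical stand-ins for the real routines, not actual kernel APIs):

```c
/* Sketch of the scheduler/dispatcher split; assumes the pcb_t type
   sketched after the process-state slide. */
extern pcb_t *ready_queue;
extern pcb_t *current;

pcb_t *select_next(pcb_t **queue);               /* scheduling algorithm: pick and dequeue a PCB */
void   context_switch(pcb_t *from, pcb_t *to);   /* low-level switch of CPU state */

void dispatch(pcb_t *next) {
    pcb_t *prev = current;
    next->state = RUNNING;
    current = next;
    context_switch(prev, next);                  /* dispatcher hands the CPU to the chosen process */
}

void schedule(void) {
    pcb_t *next = select_next(&ready_queue);     /* scheduler chooses from the ready queue */
    if (next != NULL)
        dispatch(next);
}
```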
When Does The OS Make Scheduling Decisions? • Scheduling decisions are always made: • when a process terminates, and • when a process switches from running to waiting. • Scheduling decisions are also made when an interrupt occurs, but only in a preemptive system.
How is the CPU Shared? • Non-preemptive scheduling: • The scheduler must wait for a running process to voluntarily relinquish the CPU (process either terminates or blocks). • Why would you choose this approach? • What are the disadvantages of using this scheme?
How is the CPU Shared? • Preemptive scheduling: • The OS can force a running process to give up control of the CPU, allowing the scheduler to pick another process. • What are the advantages of using this scheme? • What are the implementation challenges presented by this approach?
Scheduling Goals • Maximize throughput and resource utilization. • Need to overlap CPU and I/O activities. • Minimize response time, waiting time, and turnaround time. • Share the CPU fairly. • It may be difficult to meet all these goals at once; sometimes tradeoffs are needed.
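For reference, these time metrics are conventionally defined as follows (a small sketch; the field names are illustrative, not from the lecture):

```c
/* Standard definitions of the metrics listed above. */
typedef struct {
    int arrival;       /* time the process entered the ready queue */
    int first_run;     /* time it first got the CPU */
    int completion;    /* time it finished */
    int total_burst;   /* total CPU time it consumed */
} proc_times_t;

int turnaround_time(proc_times_t p) { return p.completion - p.arrival; }           /* arrival to completion */
int waiting_time(proc_times_t p)    { return turnaround_time(p) - p.total_burst; } /* time spent ready but not running */
int response_time(proc_times_t p)   { return p.first_run - p.arrival; }            /* delay until first CPU access */
```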
CPU and I/O Bursts • Typical process execution pattern: use the CPU for a while (CPU burst), then do some I/O operations (I/O burst). • CPU-bound processes perform I/O operations infrequently and tend to have long CPU bursts. • I/O-bound processes spend less time doing computation and tend to have short CPU bursts.
Scheduling Algorithms • First Come First Served (FCFS) • Scheduler selects the process at the head of the ready queue; typically non-preemptive. • Example: 3 processes arrive at the ready queue in the following order: P1 (CPU burst = 24 ms), P2 (CPU burst = 3 ms), P3 (CPU burst = 3 ms)
Scheduling Algorithms • First Come First Served (FCFS) • Example: What’s the average waiting time? • FCFS pros & cons: • Simple to implement. • Average waiting time can be large if processes with short CPU bursts get stuck behind processes with long CPU bursts.
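A worked version of the FCFS example, assuming the three processes above (P1 = 24 ms, P2 = 3 ms, P3 = 3 ms) all arrive at time 0 and are served in order:

```c
#include <stdio.h>

/* FCFS waiting-time calculation for the example: P1 = 24 ms, P2 = 3 ms,
   P3 = 3 ms, all arriving at time 0 and served in arrival order. */
int main(void) {
    int burst[] = {24, 3, 3};
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        total_wait += wait;       /* each process waits for all earlier bursts */
        wait += burst[i];
    }
    /* Prints: average waiting time = 17.00 ms  ((0 + 24 + 27) / 3) */
    printf("average waiting time = %.2f ms\n", (double)total_wait / n);
    return 0;
}
```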
Scheduling Algorithms • Round Robin (RR) • FCFS + preemptive scheduling. • Ready queue is treated as a circular queue. • Each process gets the CPU for a time quantum (or time slice), typically 10 - 100 ms. • A process runs until it blocks, terminates, or uses up its time slice.
Scheduling Algorithms • Round Robin (RR) • Example: Assuming a time quantum of 4 ms, what’s the average waiting time? • Issue: What’s the right value for the time quantum? • Too long => poor response time • Too short => poor throughput
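A small simulation sketch, assuming the same three processes as in the FCFS example (all arriving at time 0) and a 4 ms quantum:

```c
#include <stdio.h>

/* Round-robin simulation for P1 = 24 ms, P2 = 3 ms, P3 = 3 ms with a
   4 ms time quantum; all processes assumed to arrive at time 0. */
int main(void) {
    int burst[]       = {24, 3, 3};
    int remaining[]   = {24, 3, 3};
    int completion[3] = {0};
    int n = 3, quantum = 4, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {               /* cycle through the ready queue */
            if (remaining[i] == 0) continue;
            int run = remaining[i] < quantum ? remaining[i] : quantum;
            time += run;
            remaining[i] -= run;
            if (remaining[i] == 0) { completion[i] = time; done++; }
        }
    }

    double total_wait = 0;
    for (int i = 0; i < n; i++)                     /* waiting = turnaround - burst */
        total_wait += completion[i] - burst[i];
    /* Prints: average waiting time = 5.67 ms  (P1 waits 6, P2 waits 4, P3 waits 7) */
    printf("average waiting time = %.2f ms\n", total_wait / n);
    return 0;
}
```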
Scheduling Algorithms • Round Robin (RR) • RR pros & cons: • Works well for short jobs; typically used in timesharing systems. • High overhead due to frequent context switches. • Can increase average waiting time, especially when CPU bursts are similar in length and each needs more than one time quantum.
Scheduling Algorithms • Priority Scheduling • Select the process with the highest priority. • Priority is based on some attribute of the process (e.g., memory requirements, owner of the process). • Issue: • Starvation: low-priority jobs may wait indefinitely. Starvation can be prevented by aging (gradually increasing a process's priority as a function of its waiting time).
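A sketch of one possible aging rule (the interval and data layout are assumptions, not values from the lecture):

```c
/* Aging sketch: on every clock tick, boost the priority of processes that
   have been waiting too long, so a low-priority process cannot starve. */
#define AGING_INTERVAL 100     /* assumed number of ticks before a boost */

typedef struct {
    int priority;              /* lower number = higher priority */
    int ticks_waiting;         /* time spent in the ready queue since last run */
} proc_t;

void age_ready_queue(proc_t *ready[], int n) {
    for (int i = 0; i < n; i++) {
        ready[i]->ticks_waiting++;
        if (ready[i]->ticks_waiting >= AGING_INTERVAL && ready[i]->priority > 0) {
            ready[i]->priority--;          /* bump toward higher priority */
            ready[i]->ticks_waiting = 0;
        }
    }
}
```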
Scheduling Algorithms • Shortest Job First (SJF) • Special case of priority scheduling (priority = expected length of the next CPU burst). • Non-preemptive version: scheduler chooses the process with the least amount of work to do (shortest CPU burst). • Preemptive version (shortest-remaining-time-first): scheduler chooses the process with the shortest remaining time to completion; a running process can be preempted when a shorter job arrives.
Scheduling Algorithms • Shortest Job First (SJF) • Example: What’s the average waiting time? • Issue: How do you predict the future? • In practice systems use past process behavior to predict the length of the next CPU burst.
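One common approach described in standard OS texts is exponential averaging of past bursts; a sketch (not necessarily the exact scheme intended in the lecture):

```c
/* Exponential averaging: the next burst estimate is a weighted mix of the
   most recently measured burst and the previous estimate. */
double predict_next_burst(double last_burst, double last_estimate, double alpha) {
    /* estimate(n+1) = alpha * measured(n) + (1 - alpha) * estimate(n),
       with 0 <= alpha <= 1; alpha = 0.5 weights recent behavior and history equally. */
    return alpha * last_burst + (1.0 - alpha) * last_estimate;
}
```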
Scheduling Algorithms • Shortest Job First (SJF) • SJF pros & cons: • Better average waiting time. • It is provably optimal: no algorithm can achieve a lower average waiting time for a given set of processes. • Difficult to predict the future. • Unfair: possible starvation (a steady stream of short jobs can keep long jobs from making any progress).
Scheduling Algorithms • Multi-level Queues • Maintain multiple ready queues based on process “type” (e.g., interactive queue, CPU bound queue, etc.). • Each queue has a priority; a process is permanently assigned to a queue. • May use a different scheduling algorithm in each queue. • Need to decide how to schedule between queues.
Scheduling Algorithms • Multi-level Feedback Queues • Adaptive algorithm: processes can move between queues based on past behavior. • Round-robin scheduling within each queue; the time slice increases at lower priority levels. • Queue assignment: • Put a new process in the highest priority queue. • If its CPU burst takes longer than one time quantum, drop it one level. • If its CPU burst completes before the timer expires, move it up one level.
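A sketch of the promotion/demotion rule just described (the level count and quanta are illustrative assumptions):

```c
#include <stdbool.h>

/* Multi-level feedback queue sketch: 3 assumed levels with a growing quantum. */
#define LEVELS 3
static const int quantum_ms[LEVELS] = {8, 16, 32};   /* time slice grows at lower levels */

typedef struct {
    int level;   /* 0 = highest priority; new processes start here */
} mlfq_proc_t;

int time_slice(const mlfq_proc_t *p) {
    return quantum_ms[p->level];                      /* quantum for the process's current queue */
}

/* Called when a process gives up the CPU: adjust its queue for next time. */
void reassign_queue(mlfq_proc_t *p, bool used_full_quantum) {
    if (used_full_quantum && p->level < LEVELS - 1)
        p->level++;   /* burst exceeded the quantum: drop one level */
    else if (!used_full_quantum && p->level > 0)
        p->level--;   /* burst finished early (likely I/O-bound): move up one level */
}
```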