  1. NETW 3005 Scheduling

  2. Reading • For this lecture, you should have read Chapter 5 (Sections 1-3, 7). Lecture 04 - Scheduling

  3. Last Lecture • Cooperating processes and data-sharing

  4. This lecture • Criteria for scheduling algorithms • Some scheduling algorithms: • first-come-first-served, • shortest-job-first, • priority scheduling, • round-robin scheduling, • multilevel queue scheduling.

  5. Important Notice • There is a TUTORIAL TEST on CPU scheduling in tutorial 10B (8-9 May). • It’s worth 9.5%. • The questions in Tutorial 10A are practice for this test.

  6. CPU scheduling: a recap • [Diagram: processes move from the ready queue to the CPU; an I/O request sends a process to an I/O device queue, and the interrupt mechanism returns it to the ready queue when the I/O completes; fork adds new processes to the ready queue.]

  7. Context switching • [Diagram: process P0 is executing when an interrupt or system call occurs; the operating system saves its state into PCB0 and reloads P1’s state from PCB1, after which P1 executes while P0 is idle. A later interrupt or system call reverses the switch: state is saved into PCB1 and reloaded from PCB0.]

  8. Scheduler and dispatcher • The scheduler decides which process to give the CPU to next, and when to give it. • Its decisions are carried out by the dispatcher. • Dispatching involves: • Switching context • Switching to ‘user mode’ • Jumping to the proper location in the new process.

  9. Why do we want a scheduler? (1) • Simple answer – to keep the CPU busy. • This means removing processes from the CPU while they’re waiting. • If processes never had to wait, then scheduling wouldn’t increase CPU utilisation. • However, it’s a fact that processes tend to exhibit a CPU–I/O burst cycle: a stretch of CPU activity followed by a wait for I/O.

  10. How long is a CPU burst? • Simple answer – it varies depending on the machine and the job mix. • Typically, the majority are short – around 4–6 ms.

  11. Why do we want a scheduler? (2) • Another reason for having a scheduler is so that processes don’t have to spend too much time waiting for the CPU. • Even if the CPU is always busy, executing processes in different orders can change the average amount of time a process spends queuing for the CPU.

  12. [Diagram: processes P1–P4 executed in different orders, illustrating the effect of ordering on waiting time.]

  13. Why do we want a scheduler? (3) • One question is how long a process spends waiting for the CPU in total. • A different question is how long, on average, it waits in between visits to the CPU (important for interactive processes).

  14. Metrics for scheduling algorithms (1) • CPU utilisation: the percentage of time that the CPU is usefully busy. • Throughput: the number of processes that are completed per time unit. • Turnaround time (per process): the elapsed time between submission time and completion time.

  15. Metrics for scheduling algorithms (2) • Waiting time (per process): the total amount of time the process spends waiting for the CPU. • Response time (per process): the average time from the submission of a request to a process until the first response is produced.
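The first three per-process metrics follow directly from the arrival, first-run, and completion times. A minimal sketch in Python (the function and variable names are my own, not from the lecture; response time is simplified here to the delay before the process first runs):

```python
def process_metrics(arrival, first_run, completion, burst):
    """Per-process scheduling metrics, all times in ms."""
    turnaround = completion - arrival   # submission to completion
    waiting = turnaround - burst        # time spent in the ready queue
    response = first_run - arrival      # submission until first CPU access
    return turnaround, waiting, response

# A process submitted at t=0 that first runs at t=3 and finishes at t=10
# after 5 ms of CPU time:
print(process_metrics(arrival=0, first_run=3, completion=10, burst=5))
# → (10, 5, 3)
```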

  16. The ready queue • Remember: there are two kinds of waiting: • Waiting for the CPU (in the ready queue) • Waiting for an I/O device (in a device queue). • Don’t be confused when you hear about processes waiting in the ready queue!

  17. Preemption • In a non-preemptive scheduling system, a process keeps the CPU until it switches to the waiting state (pending I/O completion or termination of a child process), or terminates. • In a preemptive scheduling system, the currently executing process can be removed from the CPU at any time.

  18. Interrupts • A possible context switch is triggered • when a new process arrives in the ready queue; • when a waiting process finishes waiting; • when a certain amount of time has elapsed.

  19. Problems • What if a process is preempted while a system call is being executed? Kernel data (e.g. I/O queues) might be left in an inconsistent state. • Earlier versions of UNIX dealt with this problem by waiting until system calls were completed before switching context.

  20. Preemptive systems • MS Windows 3.1 and below were non-preemptive. • Windows 95, NT, XP etc. are preemptive. • Linux is fully preemptive as of 2.6. • Mac OS 9 and earlier versions weren’t (although they claimed to be). Mac OS X is.

  21. Gantt charts • The operation of a scheduling algorithm is commonly represented in a Gantt chart.

      Process:       P1        P2   P3
      Arrival time:  t1 (= 0)  t2   t3
      Burst time:    b1        b2   b3

      (N.B. We’re just looking at the initial CPU burst for each process.)

  22. First-come-first-served scheduling • The simplest method is to execute the processes in the ready queue on a first-come-first-served (FCFS) basis. • When a process becomes ready, it is put at the tail of the queue. • When the currently executing process terminates, or waits for I/O, the process at the front of the queue is selected next. • This algorithm is non-preemptive.

  23. Gantt Charts for FCFS

      Process:       P1   P2   P3
      Arrival time:  0    1    2
      Burst time:    24   3    3

      Gantt chart:  | P1 (0–24) | P2 (24–27) | P3 (27–30) |

  24. Waiting times? • P1? • P2? • P3? • Average?

  28. Waiting times? • P1? 0 ms • P2? 24 – 1 = 23 ms • P3? 27 – 2 = 25 ms • Average? (0 + 23 + 25) / 3 = 16 ms
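These waiting times can be checked with a short simulation. A sketch, assuming the process list is already sorted by arrival time (function and variable names are mine, not from the slides):

```python
def fcfs_waiting_times(procs):
    """procs: list of (name, arrival, burst) tuples in arrival order."""
    time = 0
    waits = {}
    for name, arrival, burst in procs:
        start = max(time, arrival)     # wait until the CPU is free
        waits[name] = start - arrival  # time spent in the ready queue
        time = start + burst
    return waits

waits = fcfs_waiting_times([("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)])
print(waits, sum(waits.values()) / 3)
# → {'P1': 0, 'P2': 23, 'P3': 25} 16.0
```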

  29. Gantt Charts for FCFS (2)

      Process:       P1   P2   P3
      Arrival time:  2    0    1
      Burst time:    24   3    3

      Gantt chart:  | P2 (0–3) | P3 (3–6) | P1 (6–30) |

  30. Waiting times? • P1? • P2? • P3? • Average?

  34. Waiting times? • P1? 6 – 2 = 4 ms • P2? 0 ms • P3? 3 – 1 = 2 ms • Average? (4 + 0 + 2) / 3 = 2 ms

  35. Advantages of FCFS • Easy to implement. • Easy to understand.

  36. Disadvantages of FCFS • Waiting time not likely to be minimal. • Convoy effect: lots of small processes can get stuck behind one big one. • Poor response time (bad for time-sharing systems). • Q: could throughput (the number of processes passing through the system per unit time) be improved?

  37. Shortest-job-first scheduling • If we knew in advance which process on the list had the shortest burst time, we could choose to execute that process next. • Processes with equal burst times are executed in FCFS order. • This method is called shortest-job-first (SJF) scheduling.

  38. Example

      Process:     P1   P2   P3   P4
      Burst time:  6    8    7    3

      Gantt chart:  | P4 (0–3) | P1 (3–9) | P3 (9–16) | P2 (16–24) |
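With all four jobs ready at t = 0, the SJF schedule in this example can be reproduced by sorting on burst time. A sketch (names are mine; Python’s sort is stable, so equal bursts keep their FCFS order, matching the tie-break rule above):

```python
def sjf_schedule(bursts):
    """bursts: dict name -> burst time, in arrival (insertion) order.
    Returns (name, start, finish) tuples in execution order."""
    schedule, t = [], 0
    for name in sorted(bursts, key=lambda n: bursts[n]):
        schedule.append((name, t, t + bursts[name]))
        t += bursts[name]
    return schedule

print(sjf_schedule({"P1": 6, "P2": 8, "P3": 7, "P4": 3}))
# → [('P4', 0, 3), ('P1', 3, 9), ('P3', 9, 16), ('P2', 16, 24)]
```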

  39. Advantages of SJF • Provably optimal: it gives the minimum average waiting time for a given set of processes.

  40. Disadvantages of SJF • You never know in advance what the length of the next CPU burst is going to be. • Possibility of long processes never getting executed?

  41. Predicting CPU burst length • It’s likely to be similar in length to the previous CPU bursts. • A commonly-used formula: the exponential average of previous CPU burst lengths. τ(n+1) = α·t(n) + (1 − α)·τ(n) • τ(n): the predicted length of CPU burst n. • t(n): the actual length of CPU burst n. • α: a value between 0 and 1.
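The formula can be applied iteratively as bursts are measured. A sketch, assuming an initial guess τ(0) and the common textbook choice α = 0.5 (both values are my assumptions, not from the slides):

```python
def predict_burst(measured, tau0=10.0, alpha=0.5):
    """Exponential average: fold each measured burst into the prediction."""
    tau = tau0                        # initial guess for the first burst
    for t in measured:
        tau = alpha * t + (1 - alpha) * tau
    return tau

# The prediction converges towards recent behaviour:
print(predict_burst([6, 4, 6, 4]))   # → 5.0
```

With α = 0.5 each new burst counts as much as the whole history before it; α near 0 makes the predictor sluggish, α near 1 makes it track only the last burst.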

  42. Preemption and SJF scheduling • Scenario: • A process P1 is currently executing. • A new process P2 arrives before P1 is finished. • P2’s (predicted) burst time is shorter than the (expected) remaining burst time of P1. • Non-preemptive SJF: P1 keeps the CPU. • Preemptive SJF (also called shortest-remaining-time-first): P2 takes the CPU.

  43. Tiny example (with preemption):

      Process:       P1   P2
      Arrival time:  0    1
      Burst time:    8    4

      Gantt chart:  | P1 (0–1) | P2 (1–5) | P1 (5–12) |
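This tiny example can be replayed with a unit-time simulation of preemptive SJF, assuming the exact burst times are known in advance (a textbook idealisation; the code is my own sketch):

```python
def srtf(procs):
    """Shortest-remaining-time-first. procs: list of (name, arrival, burst).
    Returns the Gantt chart as (name, start, end) runs."""
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: a for name, a, _ in procs}
    time, runs = 0, []
    while any(remaining.values()):
        ready = [n for n in remaining if arrival[n] <= time and remaining[n] > 0]
        if not ready:
            time += 1                 # CPU idles until the next arrival
            continue
        current = min(ready, key=lambda n: remaining[n])
        if runs and runs[-1][0] == current and runs[-1][2] == time:
            runs[-1] = (current, runs[-1][1], time + 1)   # extend current run
        else:
            runs.append((current, time, time + 1))        # start a new run
        remaining[current] -= 1
        time += 1
    return runs

print(srtf([("P1", 0, 8), ("P2", 1, 4)]))
# → [('P1', 0, 1), ('P2', 1, 5), ('P1', 5, 12)]
```

At t = 1, P2’s burst (4) is shorter than P1’s remaining time (7), so P2 preempts P1, exactly as in the chart above.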

  44. Priority scheduling • In priority scheduling: • each process is allocated a priority when it arrives; • the CPU is allocated to the process with highest priority. • Priorities are represented by numbers, with low numbers being highest priority.

  45. Questions • Can priority scheduling be preemptive?

  48. Questions • Can priority scheduling be preemptive? Yes, no reason why not. • What’s the relation of SJF scheduling to priority scheduling? • SJF is a type of priority scheduling, specifically one where the priority of a process is set to be the estimated next CPU burst time.

  49. Starvation and aging • Starvation occurs when a process waits indefinitely to be allocated the CPU. • Priority scheduling algorithms are susceptible to starvation. • Imagine a process P1 is waiting for the CPU, and a stream of higher-priority processes is arriving. • If these processes arrive sufficiently fast, P1 will never get a chance to execute. Lecture 04 - Scheduling

  50. Starvation and aging • A solution to the starvation problem is to increase the priority of processes as a function of how long they’ve been waiting for the CPU. (Called aging.)
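One way to sketch aging — the policy details (when and by how much priorities change) are my assumptions here; real schedulers vary:

```python
def pick_with_aging(ready):
    """ready: dict name -> priority number (lower = higher priority).
    Picks the best process and ages every process left waiting."""
    chosen = min(ready, key=lambda n: ready[n])
    for name in ready:
        if name != chosen:
            ready[name] = max(0, ready[name] - 1)   # waiting raises priority
    return chosen

ready = {"P_old": 5, "P_new": 1}
print(pick_with_aging(ready), ready)
# → P_new {'P_old': 4, 'P_new': 1}
```

After enough scheduling rounds, a long-waiting process’s number falls to match (and then beat) the stream of higher-priority arrivals, so it cannot starve indefinitely.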
