Chapter 5: CPU Scheduling



  1. Chapter 5: CPU Scheduling

  2. Chapter 5: CPU Scheduling • Basic Concepts • Scheduling Criteria • Scheduling Algorithms • Thread Scheduling • Multiple-Processor Scheduling • Operating Systems Examples • Algorithm Evaluation (time permitting)

  3. Objectives • To introduce CPU scheduling, which is the basis for multiprogrammed operating systems • To describe various CPU-scheduling algorithms • To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a particular system

  4. Basic Concepts • Maximum CPU utilization is obtained with multiprogramming: several processes are kept in memory at one time, and when one has to wait, the O.S. gives the CPU to another process • An observed property of processes: process execution consists of a cycle of CPU execution and I/O wait: CPU burst → I/O burst → CPU burst → I/O burst → … • CPU burst distribution (next slide)

  5. Alternating Sequence of CPU And I/O Bursts

  6. Histogram of CPU-burst Times • A large number of short CPU bursts and a small number of long CPU bursts • The distribution is generally characterized as exponential or hyper-exponential

  7. CPU Scheduler • Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them (note that the ready queue is not necessarily a FIFO queue) • CPU-scheduling decisions may take place when a process: 1. Switches from running to waiting state (result of an I/O request or an invocation of wait()) 2. Switches from running to ready state (occurrence of an interrupt) 3. Switches from waiting to ready state (completion of I/O; one of the waiting processes is selected to become ready) 4. Terminates • Scheduling under 1 and 4 is non-preemptive, or cooperative (once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state) • All other scheduling is preemptive • See pp. 103 for the state-transition diagram

  8. Dispatcher • Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves: • switching context • switching to user mode • jumping to the proper location in the user program to restart that program • Dispatch latency – time it takes for the dispatcher to stop one process and start another running

  9. Scheduling Criteria • CPU utilization – keep the CPU as busy as possible • Throughput – # of processes that complete their execution per time unit • Turnaround time – amount of time to execute a particular process • Waiting time – amount of time a process has been waiting in the ready queue • Response time – amount of time it takes from the time a request was submitted until the first response is produced, not output (for time-sharing environment)

  10. Scheduling Algorithm Optimization Criteria • Max CPU utilization • Max throughput • Min turnaround time • Min waiting time • Min response time

  11. First-Come, First-Served (FCFS) Scheduling (non-preemptive) • Process / burst time: P1 = 24, P2 = 3, P3 = 3 • Suppose that the processes arrive in the order P1, P2, P3, all at time 0 (approximately) • Gantt chart: P1 [0-24], P2 [24-27], P3 [27-30] • Waiting time: P1 = 0, P2 = 24, P3 = 27 • Average waiting time: (0 + 24 + 27) / 3 = 17

  12. FCFS Scheduling (cont.) • Suppose that the processes arrive in the order P2, P3, P1 • Gantt chart: P2 [0-3], P3 [3-6], P1 [6-30] • Waiting time: P1 = 6, P2 = 0, P3 = 3 • Average waiting time: (6 + 0 + 3) / 3 = 3 • Much better than the previous case • Convoy effect: due to short processes stuck behind a long process (short jobs escort the long one to finish); see the sketch below
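
A minimal C sketch (not from the slides) that replays FCFS for both arrival orders above; the process names and burst times come from the example, everything else is illustrative:

    #include <stdio.h>

    /* FCFS: serve the processes in the given order; a process's waiting
       time equals the finish time of everything scheduled before it. */
    static double avg_wait(const char **order, const int *burst, int n) {
        int start = 0;
        double total = 0;
        for (int i = 0; i < n; i++) {
            printf("  %s waits %d\n", order[i], start);
            total += start;
            start += burst[i];   /* next process starts when this one ends */
        }
        return total / n;
    }

    int main(void) {
        const char *o1[] = {"P1", "P2", "P3"}; int b1[] = {24, 3, 3};
        const char *o2[] = {"P2", "P3", "P1"}; int b2[] = {3, 3, 24};
        printf("order P1, P2, P3:\n");
        printf("  average = %.0f\n", avg_wait(o1, b1, 3));   /* prints 17 */
        printf("order P2, P3, P1:\n");
        printf("  average = %.0f\n", avg_wait(o2, b2, 3));   /* prints 3 */
        return 0;
    }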

  13. Shortest-Job-First (SJF) Scheduling • Associate with each process the length of its next CPU burst; use these lengths to schedule the process with the shortest time (a more appropriate term: shortest-next-CPU-burst algorithm) • SJF is optimal: it gives the minimum average waiting time for a given set of processes • The difficulty: how to know the length of the next CPU request • For long-term scheduling, we can use the time specified by the user • For short-term scheduling, exact SJF cannot be implemented → approximate it by predicting the next CPU burst from previous ones

  14. Example of Non-preemptive SJF • Process / burst time: P1 = 6, P2 = 8, P3 = 7, P4 = 3 • Gantt chart: P4 [0-3], P1 [3-9], P3 [9-16], P2 [16-24] • Average waiting time = (3 + 16 + 9 + 0) / 4 = 7

  15. Example of Preemptive SJF (Shortest-Remaining-Time-First) • Process / arrival time / burst time: P1 0.0 8, P2 1.0 4, P3 2.0 9, P4 3.0 5 • Gantt chart: P1 [0-1], P2 [1-5], P4 [5-10], P1 [10-17], P3 [17-26] • Average waiting time = [(10 − 1) + (1 − 1) + (17 − 2) + (5 − 3)] / 4 = 6.5
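
A minimal C sketch (not from the slides) of shortest-remaining-time-first: simulate one time unit at a time and always run the arrived process with the least remaining time. The arrival and burst times are the example's; the simulation itself is illustrative:

    #include <stdio.h>

    int main(void) {
        const char *name[] = {"P1", "P2", "P3", "P4"};
        int arrival[] = {0, 1, 2, 3};
        int burst[]   = {8, 4, 9, 5};
        int remaining[4], finish[4];
        int n = 4, done = 0, t = 0;

        for (int i = 0; i < n; i++) remaining[i] = burst[i];
        while (done < n) {
            /* pick the arrived, unfinished process with least remaining time */
            int pick = -1;
            for (int i = 0; i < n; i++)
                if (arrival[i] <= t && remaining[i] > 0 &&
                    (pick < 0 || remaining[i] < remaining[pick]))
                    pick = i;
            if (pick < 0) { t++; continue; }   /* CPU idle until next arrival */
            remaining[pick]--;                 /* run it for one time unit */
            t++;
            if (remaining[pick] == 0) { finish[pick] = t; done++; }
        }

        double total = 0;
        for (int i = 0; i < n; i++) {
            int wait = finish[i] - arrival[i] - burst[i];
            printf("%s: waiting time = %d\n", name[i], wait);
            total += wait;
        }
        printf("average waiting time = %.1f\n", total / n);  /* prints 6.5 */
        return 0;
    }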

  16. Determining Length of Next CPU Burst • Can only estimate the length • Can be done by using the lengths of previous CPU bursts, with exponential averaging: τ(n+1) = α·t(n) + (1 − α)·τ(n), where t(n) is the actual length of the nth CPU burst, τ(n+1) is the predicted value of the next CPU burst, and 0 ≤ α ≤ 1 • τ(0) can be defined as a constant or as an overall system average • See the textbook for the full expansion of the equation

  17. Prediction of the Length of the Next CPU Burst (for α = 1/2)

  18. Examples of Exponential Averaging • α = 0: τ(n+1) = τ(n), i.e., recent history does not count • α = 1: τ(n+1) = t(n), i.e., only the actual last CPU burst counts • If we expand the formula, we get: τ(n+1) = α·t(n) + (1 − α)·α·t(n−1) + (1 − α)²·α·t(n−2) + … + (1 − α)^j·α·t(n−j) + … + (1 − α)^(n+1)·τ(0) • Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor
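
A minimal C sketch (not from the slides) of the exponential-averaging predictor; α = 1/2 and τ(0) = 10 are chosen to match the figure referenced on slide 17, and the burst history is illustrative:

    #include <stdio.h>

    int main(void) {
        double alpha = 0.5;                  /* weighting factor, 0 <= alpha <= 1 */
        double tau = 10.0;                   /* tau(0): initial guess */
        int t[] = {6, 4, 6, 4, 13, 13, 13};  /* observed CPU burst lengths */
        int n = sizeof t / sizeof t[0];

        for (int i = 0; i < n; i++) {
            printf("predicted tau(%d) = %5.2f, actual t(%d) = %2d\n", i, tau, i, t[i]);
            tau = alpha * t[i] + (1 - alpha) * tau;  /* tau(n+1) = a*t(n) + (1-a)*tau(n) */
        }
        printf("next prediction tau(%d) = %.2f\n", n, tau);
        return 0;
    }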

  19. Priority Scheduling • A priority number (integer) is associated with each process • The CPU is allocated to the process with the highest priority (systems differ on whether the smallest integer means the highest or the lowest priority; here, smallest = highest) • Equal-priority processes are scheduled in FCFS order • Scheduling can be preemptive or non-preemptive • SJF is a special case of priority scheduling where the priority is the inverse of the (predicted) next CPU burst time • Problem → starvation: low-priority processes may never execute • Solution → aging: as time progresses, increase the priority of processes that have waited a long time (for example, if priorities run from 127 (low) to 0 (high) and a waiting process's priority is raised by 1 every 15 minutes, even a priority-127 process reaches the highest priority in under 32 hours)

  20. Round Robin (RR) • Similar to FCFS, but preemption is added to switch between processes • Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds; after this time has elapsed, the process is preempted and added to the end of the ready queue • If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once; no process waits more than (n − 1)·q time units • Performance: q extremely large → behaves like FIFO; q too small → too much overhead due to context switches • We want q to be large with respect to the context-switch time, but not too large

  21. Example of RR with Time Quantum = 4 • Process / burst time: P1 = 24, P2 = 3, P3 = 3 • Gantt chart: P1 [0-4], P2 [4-7], P3 [7-10], P1 [10-14], P1 [14-18], P1 [18-22], P1 [22-26], P1 [26-30] • Typically, RR gives a higher average turnaround time than SJF, but better response
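
A minimal C sketch (not from the slides) that reproduces the Gantt chart above. Because all three processes arrive at time 0 and no new ones appear, cycling over the array in order behaves exactly like the FIFO ready queue of a real RR scheduler:

    #include <stdio.h>

    int main(void) {
        const char *name[] = {"P1", "P2", "P3"};
        int remaining[] = {24, 3, 3};
        int n = 3, q = 4, t = 0, done = 0;

        while (done < n) {
            for (int i = 0; i < n; i++) {       /* one pass over the ready "queue" */
                if (remaining[i] == 0) continue;
                int slice = remaining[i] < q ? remaining[i] : q;
                printf("%s runs [%d-%d]\n", name[i], t, t + slice);
                t += slice;
                remaining[i] -= slice;
                if (remaining[i] == 0) done++;
            }
        }
        return 0;
    }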

  22. Time Quantum and Context Switch Time

  23. Turnaround Time Depends on the Time Quantum • For q = 4, with burst times P1 = 6, P2 = 3, P3 = 1, P4 = 7 • Gantt chart: P1 [0-4], P2 [4-7], P3 [7-8], P4 [8-12], P1 [12-14], P4 [14-17] • Turnaround time: P1 = 14, P2 = 7, P3 = 8, P4 = 17; total = 46; average = 46 / 4 = 11.5

  24. Multilevel Queue • The ready queue is partitioned into separate queues, for example foreground (interactive, higher priority) and background (batch) • Each queue has its own scheduling algorithm: foreground RR, background FCFS • Scheduling must also be done between the queues: • Fixed-priority scheduling (e.g., serve all from foreground, then from background); possibility of starvation • Time slice: each queue gets a certain amount of CPU time which it can schedule amongst its processes; for example, 80% to foreground in RR and 20% to background in FCFS

  25. Multilevel Queue Scheduling • An example of a multilevel queue scheduling algorithm • A running process will be preempted if a higher-priority process enters its ready queue

  26. Multilevel Feedback Queue • Allows a process to move between the various queues; aging can be implemented this way • Multilevel-feedback-queue scheduler is defined by the following parameters: • number of queues • scheduling algorithms for each queue • method used to determine when to upgrade a process to a higher priority queue • method used to determine when to demote a process to a lower priority queue • method used to determine which queue a process will enter when that process needs service • Most general, yet most complex scheduling algorithm

  27. Example of Multilevel Feedback Queue • Three queues: Q0 (RR with time quantum 8 milliseconds), Q1 (RR with time quantum 16 milliseconds), Q2 (FCFS) • Scheduling: a new job enters queue Q0; when it gains the CPU, it receives 8 milliseconds, and if it does not finish within them, it is moved to the tail of queue Q1 • When Q0 is empty, the head of Q1 is given a quantum of 16 milliseconds; if it still does not complete, it is preempted and moved to queue Q2 • This scheduling gives highest priority to processes with a CPU burst of 8 milliseconds or less
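
A minimal C sketch (not from the slides) of the demotion rule above, traced for three hypothetical burst lengths: a 5 ms burst finishes in Q0, a 20 ms burst spills into Q1, and a 40 ms burst ends up in Q2:

    #include <stdio.h>

    /* Serve one CPU burst under the three-queue policy:
       Q0 (quantum 8 ms), Q1 (quantum 16 ms), then Q2 (FCFS, to completion). */
    static void serve(int burst) {
        int quanta[] = {8, 16};
        int level = 0;
        while (burst > 0) {
            if (level < 2) {
                int used = burst < quanta[level] ? burst : quanta[level];
                printf("  Q%d: runs %d ms\n", level, used);
                burst -= used;
                if (burst > 0) level++;      /* did not finish: demote */
            } else {
                printf("  Q2: runs %d ms to completion\n", burst);
                burst = 0;
            }
        }
    }

    int main(void) {
        int bursts[] = {5, 20, 40};          /* hypothetical CPU bursts in ms */
        for (int i = 0; i < 3; i++) {
            printf("job with a %d ms burst:\n", bursts[i]);
            serve(bursts[i]);
        }
        return 0;
    }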

  28. Multilevel Feedback Queues

  29. Thread Scheduling • Distinction between user-level and kernel-level threads: on operating systems that support threads, it is kernel-level threads, not processes, that are scheduled by the O.S. • For the many-to-one and many-to-many models, the thread library schedules user-level threads to run on an LWP • Known as process-contention scope (PCS), since the scheduling competition is among threads within the same process • Scheduling a kernel thread onto an available CPU is system-contention scope (SCS): competition among all threads in the system • For the one-to-one model, the O.S. schedules threads using only SCS (Windows XP, Solaris, Linux)

  30. Pthread Scheduling • The Pthread API allows specifying either PCS or SCS during thread creation • PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling • PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling • Can be limited by the OS: Linux and Mac OS X only allow PTHREAD_SCOPE_SYSTEM

  31. Pthread Scheduling API (pp. 201, 8E; see also pp. 161)

    #include <pthread.h>
    #include <stdio.h>
    #define NUM_THREADS 5

    void *runner(void *param); /* thread start routine, defined on slide 32 */

    int main(int argc, char *argv[])
    {
        int i;
        pthread_t tid[NUM_THREADS];
        pthread_attr_t attr;

        /* get the default attributes */
        pthread_attr_init(&attr);

        /* set the contention scope to PROCESS (PCS) or SYSTEM (SCS) */
        pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

        /* set the scheduling policy: FIFO, RR, or OTHER */
        /* FIFO & RR: for real-time processes (static priority between 1 and 99) */
        /* OTHER: default Linux time-sharing scheduler */
        /* (for processes with static priority 0, based on dynamic priority) */
        pthread_attr_setschedpolicy(&attr, SCHED_OTHER);

  32. Pthread Scheduling API (cont.)

        /* create the threads */
        for (i = 0; i < NUM_THREADS; i++)
            pthread_create(&tid[i], &attr, runner, NULL);

        /* now join on each thread */
        for (i = 0; i < NUM_THREADS; i++)
            pthread_join(tid[i], NULL);
    }

    /* Each thread will begin control in this function */
    void *runner(void *param)
    {
        printf("I am a thread\n");
        pthread_exit(0);
    }

  33. Pthread Scheduling API (from 9E)

    #include <pthread.h>
    #include <stdio.h>
    #define NUM_THREADS 5

    void *runner(void *param); /* thread start routine, defined on slide 34 */

    int main(int argc, char *argv[])
    {
        int i, scope;
        pthread_t tid[NUM_THREADS];
        pthread_attr_t attr;

        /* get the default attributes */
        pthread_attr_init(&attr);

        /* first inquire on the current scope */
        if (pthread_attr_getscope(&attr, &scope) != 0)
            fprintf(stderr, "Unable to get scheduling scope\n");
        else {
            if (scope == PTHREAD_SCOPE_PROCESS)
                printf("PTHREAD_SCOPE_PROCESS");
            else if (scope == PTHREAD_SCOPE_SYSTEM)
                printf("PTHREAD_SCOPE_SYSTEM");
            else
                fprintf(stderr, "Illegal scope value.\n");
        }

  34. Pthread Scheduling API (from 9E, cont.)

        /* set the contention scope to PCS or SCS */
        pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

        /* create the threads */
        for (i = 0; i < NUM_THREADS; i++)
            pthread_create(&tid[i], &attr, runner, NULL);

        /* now join on each thread */
        for (i = 0; i < NUM_THREADS; i++)
            pthread_join(tid[i], NULL);
    }

    /* Each thread will begin control in this function */
    void *runner(void *param)
    {
        /* do some work ... */
        pthread_exit(0);
    }

  35. Multiple-Processor Scheduling • CPU scheduling is more complex when multiple CPUs are available • Homogeneous processors within the multiprocessor are assumed • Approach 1: asymmetric multiprocessing, in which only one processor accesses the system data structures, alleviating the need for data sharing; the other processors execute only user code • Approach 2: symmetric multiprocessing (SMP), in which each processor is self-scheduling; all processes may be in a common ready queue, or each processor may have its own private queue of ready processes → must ensure that two processors do not choose the same process and that processes are not lost from the queue • Processor affinity: a process has an affinity for the processor on which it is currently running → avoid migrating processes from one processor to another • Soft affinity (the OS makes a best effort, with no guarantee) • Hard affinity (the OS allows a process to specify that it is not to migrate to another processor; see the sketch below)
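
The slides do not name an API for hard affinity, but on Linux (an assumption beyond the slides) a process can pin itself to one CPU with sched_setaffinity(2); a minimal sketch:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        cpu_set_t mask;
        CPU_ZERO(&mask);
        CPU_SET(0, &mask);                   /* allow this process on CPU 0 only */
        /* pid 0 means "the calling process" */
        if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pinned to CPU 0\n");
        return 0;
    }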

  36. NUMA (Non-Uniform Memory Access) and CPU Scheduling • The main-memory architecture can affect processor-affinity issues • Example: on a NUMA system, a CPU has faster access to some parts of main memory than to other parts, such as the memory on its own combined CPU-and-memory board • A process that is assigned affinity to a particular CPU can be allocated memory from the board where that CPU resides, if the CPU scheduler and memory-placement algorithms work together

  37. Multicore Processors • Recent trend: place multiple processor cores on the same physical chip • Faster and consumes less power than systems in which each processor has its own physical chip • Multiple threads per core is also a growing trend • Takes advantage of memory stalls to make progress on another thread while a memory retrieval happens • Memory stall: when a processor accesses memory, it may spend a significant amount of time waiting for the data to become available

  38. Multithreaded Multicore System

  39. Operating System Examples • Solaris scheduling • Windows XP scheduling • Linux scheduling

  40. Solaris Scheduling • Notes on the figure: the kernel maintains 10 threads for servicing interrupts; they do not belong to any scheduling class • Every other thread belongs to one of six scheduling classes • System class: for running kernel threads (scheduler, paging daemon, …) • Fair-share class: CPU shares are allocated to a set of processes (a "project") • Time sharing (TS): the default class • Interactive (IA): like TS, but for windowing applications

  41. Solaris Dispatch Table for TS and IA Threads • Notes on the table: priority is the class-dependent priority of the thread (a higher number means a higher priority) • Time quantum (in milliseconds): the lower the priority, the longer the time quantum • Time quantum expired: the new, lower priority assigned to a thread that has used its entire time quantum without blocking (considered CPU-intensive) • Return from sleep: the new, higher priority assigned to a thread returning from sleep (such as waiting for I/O)

  42. Windows XP Priorities (Win32 API) • Windows XP uses priority-based, preemptive scheduling, with a queue for each scheduling priority • Notes on the figure: the priority classes are the real-time class and several variable classes (the priority of a thread belonging to a variable class may change) • Within each class there is a relative priority; "base" is the default relative priority • Processes are normally in the normal priority class by default

  43. Linux Scheduling • Constant-order O(1) scheduling time: the algorithm runs in constant time, regardless of the number of tasks in the system • Preemptive, priority-based algorithm with two priority ranges: real-time and time-sharing • Real-time tasks: priorities range from 0 to 99 (static priorities); other tasks: nice values range from 100 to 140 (dynamic priorities) • Lower number → higher priority • Higher-priority tasks are assigned longer time quanta
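
The time-sharing half of this range is driven by nice values; as a minimal Linux-specific sketch (not from the slides), a process can lower its own priority with nice(2):

    #include <unistd.h>
    #include <stdio.h>
    #include <errno.h>

    int main(void) {
        errno = 0;
        int nv = nice(10);              /* add 10 to our nice value: lower priority */
        if (nv == -1 && errno != 0) {   /* -1 can be a legal return, so check errno */
            perror("nice");
            return 1;
        }
        printf("new nice value: %d\n", nv);
        return 0;
    }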

  44. Priorities and Time-slice length

  45. List of Tasks Indexed According to Priorities • Each runqueue contains two priority arrays: active and expired • The active array contains tasks with time remaining in their time slices; the expired array contains all expired tasks • Each array is indexed according to priority (list 0, list 1, …, list 140)

  46. Algorithm Evaluation • Deterministic modeling: analytic; takes a particular predetermined workload and determines the performance of each algorithm for that workload (example: see p. 214) • Queueing models: use arrival rates and service rates to compute the utilization, average queue length, average wait time, etc. Little's formula: N = λ·W, where N is the average queue length, λ the average process arrival rate, and W the average waiting time in the queue; for example, if processes arrive at λ = 7 per second and there are N = 14 processes in the queue on average, then the average wait is W = N / λ = 2 seconds • Simulations (next slide) • Implementation: code the algorithm, put it in the OS, and see how it works; high cost, users may not like it, and the environment may change

  47. Evaluation of CPU schedulers by Simulation

  48. End of Chapter 5

