
Scheduling: Chapter 3

Presentation Transcript


  1. Scheduling: Chapter 3 • Process: Entity competing for resources • Process states: New, running, waiting, ready, terminated, zombie (and perhaps more). • See also Fig. 3.2 on page 87.

  2. [Figure: partial state diagram with states Hold, Ready, Running, Waiting, Zombie, Terminated and transitions numbered 1-8]

  3. States • Hold: waiting for its time to gain entry to the system • Ready: can compute if the OS allows • Running: currently executing • Waiting: waiting for an event (e.g. an I/O to complete) • Zombie: process has finished • Terminated: process finished and its parent has waited on it.

  4. State Transitions • gains entry to the system at the specified time (1) • OS gives the process the right to use the CPU (2) • OS removes the process's right to use the CPU (time's up) (3) • process makes a blocking request (e.g. issues an I/O request, a wait, or a pause) (4)

  5. the awaited event has occurred (e.g. I/O completed) (5) • process has finished (6) • parent has waited on the process (7) • OS suspends the process (e.g. to reduce the load when there is too much activity) (8)
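The states and numbered transitions above can be sketched as a small table-driven state machine in C. The state names follow the slides; the from/to pairing for each transition number is one reading of the partial state diagram, so treat this as an illustration rather than the book's figure.

```c
#include <assert.h>

/* Process states from the partial state diagram (slide 2). */
typedef enum { HOLD, READY, RUNNING, WAITING, ZOMBIE, TERMINATED } state_t;

/* transition[i] = {from, to} for transition number i+1 in the slides. */
static const state_t transition[8][2] = {
    { HOLD,    READY      },  /* 1: gains entry at the specified time   */
    { READY,   RUNNING    },  /* 2: OS gives the process the CPU        */
    { RUNNING, READY      },  /* 3: time's up, OS preempts              */
    { RUNNING, WAITING    },  /* 4: process issues I/O request or wait  */
    { WAITING, READY      },  /* 5: awaited event occurs                */
    { RUNNING, ZOMBIE     },  /* 6: process finishes                    */
    { ZOMBIE,  TERMINATED },  /* 7: parent waits on the process         */
    { READY,   HOLD       },  /* 8: OS suspends process to reduce load  */
};

/* Apply transition n (1-8) if legal from state s; otherwise stay put. */
state_t step(state_t s, int n) {
    if (n >= 1 && n <= 8 && transition[n - 1][0] == s)
        return transition[n - 1][1];
    return s;
}
```

A typical life cycle would then be 1 → 2 → 4 → 5 → 2 → 6 → 7.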

  6. High-level scheduling (Also long-term): Which programs gain entry to the system or exit it. Essentially 1, 6, and 7 above. • Intermediate-level scheduling: Which processes can compete for CPU. Essentially 4, 5, and 8 above. • Low-level scheduling (Also short-term): Who gets the CPU. Essentially 2 and 3 above.

  7. Goals and thoughts • fairness • maximize throughput • minimize turnaround time • minimize response time • consistency • these goals may be mutually incompatible

  8. avoid loads that degrade system • keep resources busy (e.g. I/O controllers) to maximize concurrent activities. • high priority to interactive users • deal with compute-bound vs. I/O bound processes. • keep CPU busy

  9. should all processes have the same priority? • should OS distinguish between processes that have done a lot so far from those that have done little? • Consider limits.

  10. Non-preemptive scheduling: when a process gets the CPU, it keeps it until done • Preemptive scheduling: what the OS giveth, the OS taketh away

  11. PCB (Process Control Block): • Every process has one and it contains: • state • program counter • CPU register values • accounting information

  12. in general, the process context • Saving and restoring this information in a PCB is called a context switch and usually happens when a process gets or loses CPU control.

  13. See also task_struct, the Linux PCB (page 90 of the text) • located at /usr/src/kernels/2.6.18-128.4.1.el5-xen-i686/include/linux/sched.h (line 834). You can paste the path at the command-line prompt by copying it from this document and right-clicking at the prompt.

  14. Process lists are really PCB lists.

  15. Can skip 3.4 (forks and IPC). Some of this we did (shared memory); some we'll do later (message passing). • Can skip 3.5 (more message passing) • Can skip 3.6 (sockets and RPCs). Some of that’s done in the networks course

  16. Chapter 4 deals largely with threads. I will postpone that until a little later when I introduce java threads and synchronization.

  17. Chapter 5: CPU Scheduling • Typically programs alternate: CPU burst-I/O burst, CPU burst-I/O burst, CPU burst-I/O burst, etc. Fig 5.1 on p. 168. • Compute bound: mostly CPU bursts (e.g. simulations, graphics) • I/O bound: Mostly I/O bursts (e.g. interactions, database)

  18. Scheduling algorithms: • First-come, first-served (FCFS, also called FIFO): • the process that asks first gets the CPU first and keeps it until done or until it requests something it must wait for. • Show Gantt chart on p. 173; it shows avg wait time and turnaround time. • avg times vary according to which process is at the front of the queue

  19. inappropriate for many environments • many processes could wait a long time behind a compute-bound process (bad if they're interactive or need to issue their own I/O). • Might be OK in specialized environments where most processes are compute bound (primarily simulations) • Sometimes used in conjunction with other methods.
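The FCFS average-wait computation behind the Gantt chart can be reproduced in a few lines. The burst times in the usage note below are hypothetical stand-ins, not the textbook's actual figures from p. 173.

```c
#include <assert.h>

/* FCFS: processes run in arrival order; each one waits for every burst
 * ahead of it.  Returns the average waiting time over n CPU bursts. */
double fcfs_avg_wait(const int burst[], int n) {
    int wait = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += wait;   /* process i waits for all earlier bursts */
        wait += burst[i];
    }
    return (double)total_wait / n;
}
```

With hypothetical bursts {24, 3, 3} the average wait is (0 + 24 + 27) / 3 = 17; reordering them as {3, 3, 24} drops it to (0 + 3 + 6) / 3 = 3, which is why the average depends so heavily on who is at the front of the queue.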

  20. SJF (Shortest Job First) • Orders processes by the length of their next CPU burst. • Preemptive version: if a new process enters, it may replace the currently running process. • Can be useful if the OS wants to give high priority to a task likely to have a short CPU burst and thus keep the I/O controllers busy.

  21. may not know the length of the next CPU burst • Can estimate, based on time limits in JCL (Job Control Language), or… • Can predict burst length based on previous burst lengths and predictions.

  22. Possible option: use an exponential average defined by τn+1 = a·tn + (1 − a)·τn. Here τn+1 is the predicted length of the next burst, tn is the measured length of the nth burst, τn is the previous prediction, and a is a constant with 0 ≤ a ≤ 1.

  23. Expanding the recurrence: τn+1 = a·tn + (1 − a)·a·tn−1 + … + (1 − a)^j·a·tn−j + … + (1 − a)^(n+1)·τ0 • If a = 0, recent history has no effect (the prediction never changes) • If a = 1, only the most recent burst matters. • See Figure 5.3 for an example. • See Gantt chart on page 176
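The exponential-average predictor is one line of arithmetic; a sketch, with the same symbols as the formula above:

```c
#include <assert.h>

/* Exponential average: tau_next = a * t_n + (1 - a) * tau_n,
 * where t_n is the measured length of the last CPU burst and
 * tau_n is the previous prediction.  Requires 0 <= a <= 1. */
double predict_next(double a, double t_n, double tau_n) {
    return a * t_n + (1.0 - a) * tau_n;
}
```

With a = 0 the prediction never moves (only initial history counts); with a = 1 it is just the last burst; a = 1/2 weights the last burst and the accumulated history equally.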

  24. Priority scheduling • Priority associated with each process and scheduled accordingly. • See Gantt chart on page 177 • Indefinite postponement, indefinite blocking, starvation: These are all terms that apply to a process that may wait indefinitely due to low priority.

  25. NOTE: textbook cites a rumor that when the IBM 7094 at MIT was shut down in 1973, they found a low priority process that had been there since 1967. • Can deal with this by periodically increasing priorities of processes that are waiting • This is called aging.
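Aging can be sketched as a periodic priority boost applied to waiting processes. The numeric scheme below (lower number = higher priority, priorities bottoming out at 0) is an illustrative assumption, not the book's algorithm.

```c
#include <assert.h>

/* Aging sketch: periodically raise the priority of every waiting
 * process (here: lower number = higher priority) so that no process
 * starves at the bottom of the priority range forever. */
void age(int prio[], const int waiting[], int n, int boost) {
    for (int i = 0; i < n; i++)
        if (waiting[i] && prio[i] > 0)
            prio[i] -= (boost < prio[i]) ? boost : prio[i];
}
```

Run often enough, even the lowest-priority process eventually reaches priority 0 and gets scheduled, avoiding the MIT-style indefinite postponement described above.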

  26. Round Robin • Processes just take turns. • Gantt chart on page 178 • Process at front of Q runs until • it finishes • it issues a request for which it must wait (e.g. I/O) • time quantum (maximum length of uninterrupted execution time) expires

  27. Quantum size is an issue. • A large quantum makes Round Robin look more like FCFS • a process waits longer for "its turn" • A small quantum generates frequent context switches (OS intervention). • Since the OS then uses the CPU a higher percentage of the time, the processes get less of it.
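A minimal Round Robin simulation makes the quantum trade-off concrete. It assumes all processes arrive at time 0 and ignores I/O and context-switch cost, so it is a sketch of the policy, not a faithful scheduler; the burst times in the test are hypothetical.

```c
#include <assert.h>

/* Round Robin sketch: each ready process runs for at most `quantum`
 * time units, then goes to the back of the queue.  Assumes all n
 * processes (n <= 16 for this sketch) arrive at time 0; returns the
 * total waiting time, where waiting = completion time - own burst. */
int rr_total_wait(const int burst[], int n, int quantum) {
    int remaining[16], total_wait = 0, clock = 0, left = n;
    for (int i = 0; i < n; i++) remaining[i] = burst[i];
    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] <= 0) continue;
            int run = (remaining[i] < quantum) ? remaining[i] : quantum;
            clock += run;
            remaining[i] -= run;
            if (remaining[i] == 0) {
                total_wait += clock - burst[i];
                left--;
            }
        }
    }
    return total_wait;
}
```

With hypothetical bursts {24, 3, 3}, a quantum of 4 gives a total wait of 17 units, while a quantum of 30 (large enough that RR degenerates into FCFS) gives 51: exactly the "large quantum looks like FCFS" effect noted above.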

  28. [Figure: response time as a function of quantum size]

  29. Round Robin does not react much to a changing environment – for example more or fewer I/O requests • Treats all processes the same, which may or may not be appropriate. • I/O bound tasks have same priority as CPU bound ones. Does that make sense?

  30. Multilevel Feedback Queue Scheduling • Multiple Qs • highest priority Q has shortest quantum • Lowest priority Q has longest quantum • quanta range from small to large over all Qs • Schedule from highest priority Q that has a ready process

  31. A process runs until • it finishes, or • it issues a request for which it must wait (e.g. I/O); when ready again, it enters the next higher priority queue (if there is one), or • its time quantum (for that Q) expires; it then goes to the next lower priority queue (if there is one).

  32. Interactive processes: keep high priority • Compute bound processes typically have low priority. • In the presence of mostly compute bound processes, acts more like FIFO because of the longer quantum • In the presence of mostly I/O bound processes, acts like Round Robin. • Can react to a changing environment!
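The promotion/demotion rule of a multilevel feedback queue can be sketched as a single decision function. The three-queue setup and the quantum values are illustrative assumptions; the rule itself (use the whole quantum: drop a level; block early: rise a level) follows the slides.

```c
#include <assert.h>

#define NQUEUES 3

/* Quantum grows as priority drops: queue 0 is the highest priority. */
static const int quantum[NQUEUES] = { 8, 16, 32 };

/* Decide a process's next queue.  `used` is how long it just ran;
 * `blocked_early` is nonzero if it gave up the CPU before its quantum
 * expired (I/O-bound behavior). */
int next_queue(int q, int used, int blocked_early) {
    if (blocked_early && q > 0)
        return q - 1;                       /* I/O-bound: promote   */
    if (!blocked_early && used >= quantum[q] && q < NQUEUES - 1)
        return q + 1;                       /* CPU-bound: demote    */
    return q;                               /* stay where it is     */
}
```

Interactive processes keep bouncing back toward queue 0 (short quantum, high priority), while compute-bound processes sink toward the long-quantum queues, which is how the scheme adapts to a changing mix of work.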

  33. Real-time Scheduling • Hard real-time: MUST complete a task in a specified amount of time. • Usually requires special hardware (since virtual memory, paging, and secondary storage can make timing unpredictable).

  34. Soft real-time: • Critical processes receive priority over non-critical ones. • Can be implemented using Multilevel Feedback Queues where the highest queues are reserved for the real-time processes.

  35. Threads (just a couple of highlights from Chapter 4) • Thread: Lightweight process • Threads in the same process share code, data, files, etc, but have different stacks and registers. • Note the examples on the web site (thread.c and process.c)

  36. Kernel threads: managed by the OS kernel. • User threads: managed by a thread library (no kernel support) • Less kernel overhead

  37. User-kernel thread relationship • Many-to-one: many user threads map to one kernel thread • if one thread blocks, the entire process blocks • cannot run multiple threads in parallel • examples: Green threads (from Solaris), GNU Portable Threads • One-to-one: each user thread maps to its own kernel thread • user threads can operate more independently • more flexible, but more burden on the kernel • typical of Windows and Linux

  38. Three main thread libraries • POSIX (Portable Operating System Interface) threads (Pthreads): an interface standard with worldwide acceptance; an IEEE standard [http://standards.ieee.org/regauth/posix/index.html]. Also see [https://computing.llnl.gov/tutorials/pthreads/] • Win32 threads • Java threads (covered later)
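A minimal Pthreads example, since POSIX threads come up again later: two threads increment a shared counter, serialized by a mutex. The worker function and counts are made up for illustration; the pthread_create/pthread_join/pthread_mutex calls are the standard POSIX API.

```c
#include <assert.h>
#include <pthread.h>

/* Two threads share the process's data (here, `counter`), which is
 * exactly why the increment must be protected by a mutex. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Create two threads, wait for both, and return the final count. */
long run_two_threads(void) {
    pthread_t a, b;
    counter = 0;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return counter;
}
```

Without the mutex the two increments can interleave and lose updates, which previews the synchronization material covered with Java threads later.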

  39. Multiple processor scheduling • Asymmetric multiprocessing: All scheduling routines run on the master processor. • Symmetric multiprocessing (SMP): each processor is self-scheduling. • Common queue for all processors • One queue for each processor • We’ll consider SMP

  40. If a common queue for all processors then there are issues of multiple processors accessing and updating a common data structure. • There are many issues associated with this type of concurrency, which we cover in the next chapter.

  41. Processor Affinity • May want to keep a process associated with the same processor. • If a process moves to another processor, current processor’s cache is invalidated and new processor’s cache must be updated • Soft affinity: try but no guarantee • Hard affinity: guarantee

  42. Load balancing • Keep workload balanced – i.e. avoid idle processors if there are ready processes. • More difficult if each processor has its own ready queue (typical of most OS’s). • May also run counter to processor affinity.

  43. Push migration • An OS task periodically checks processor queues and may redistribute tasks to balance the load. • Pull migration • An idle processor pulls a task from another processor's queue • Linux, for example, does push migration several times per second and pull migration whenever a queue is empty.

  44. Multicore processors (not in text) • One chip, multiple processor cores, each with its own register set. • One thread per core seems logical but presents problems. • Memory stall: the processor waits for data to become available, as may happen on a cache or TLB miss. • A waiting processor means no work is being done.

  45. Multithreaded processor core: two or more hardware threads assigned to a single core. • Could interleave threads, i.e. when one thread is waiting, the other executes instructions. • If one thread stalls, the processor can switch to the other thread

  46. From the OS point of view, each hardware thread is a separate logical processor capable of running a software thread. • e.g. the OS may see 4 logical processors on a dual-core chip with two hardware threads per core

  47. Windows XP: Read through this: • Some stuff on pages 833-834 and 191-192 • uses a 32-level priority scheme (the top half are soft real-time). • Basically a Multilevel Feedback Queue. • Each thread has a base priority, and its priority cannot fall below that. • CTRL-ALT-Del to get the task manager; right-click on a task to see its priority and affinity.

  48. Threads get a priority boost when a wait is over. • The amount of boost depends on what the wait was for. • Threads waiting for keyboard (or mouse) I/O get a larger boost than threads waiting for disk I/O. • A boost will NOT put a thread into the real-time range

  49. Linux • Processes have credits (priority) • High number is low priority • Some stuff on page 796 and page 193-194 • Enter the Linux top command to see processes and their priorities.

  50. Processes also have a nice value, which can affect scheduling. See info nice. • nice values range from -20 (least nice) to 19 (nicest) • There’s also a nice command which runs a process with a specific nice value. • There’s also a renice command which can change nice values of a running process. Its format is renice n pid where n is the nice value. • Usually need to be root to get more favorable treatment.
