
Threads and Scheduling



1. Threads and Scheduling (Operating Systems: A Modern Perspective, Chapter 6)

2. Announcements
• Homework Set #2 due Friday at 11 am (extension)
• Program Assignment #1 due Thursday, Feb. 10 at 11 am
• Read chapters 6 and 7

3. Program #1: Threads - addenda
• Draw the picture of user-space threads versus kernel-space threads
• User-space threads yield voluntarily to switch between threads
• Because they live in user space, the CPU and kernel know nothing about these threads: your program looks like a single-threaded process to the OS
• Inside that process, a library is used to create and delete threads, wait on a thread, and yield the CPU to another thread; this is what you're building (a sketch follows below)
• Advantages: threads can be implemented on any OS, and switching is fast, with no trap to the kernel and no full context switch
• Disadvantages: only voluntary scheduling (no preemption), and blocking I/O in one user thread blocks all threads in the process
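
As a rough, runnable illustration of voluntary user-space switching, here is a minimal sketch using the legacy POSIX ucontext calls; PA #1's library may well do this differently (e.g. by saving the stack and frame pointers itself), so treat the structure, not the specific API, as the point:

  /* Two contexts sharing one CPU purely by voluntary yields. */
  #include <stdio.h>
  #include <ucontext.h>

  static ucontext_t main_ctx, t_ctx;
  static char t_stack[64 * 1024];

  static void worker(void) {
      for (int i = 0; i < 3; i++) {
          printf("worker: step %d\n", i);
          swapcontext(&t_ctx, &main_ctx);   /* voluntary yield back to main */
      }
  }

  int main(void) {
      getcontext(&t_ctx);                   /* initialize, then repoint...  */
      t_ctx.uc_stack.ss_sp = t_stack;       /* ...at a private stack        */
      t_ctx.uc_stack.ss_size = sizeof t_stack;
      t_ctx.uc_link = &main_ctx;            /* resume here if worker ends   */
      makecontext(&t_ctx, worker, 0);
      for (int i = 0; i < 3; i++) {
          printf("main: resuming worker\n");
          swapcontext(&main_ctx, &t_ctx);   /* no kernel trap involved      */
      }
      return 0;
  }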

4. Program #1: Threads - addenda
• Each process keeps a thread table, analogous to the process table of PCBs kept by the OS kernel for each process
• Key question: how do we switch between threads? We need to save the thread's state and change the PC
• PA #1 does it like this: the scheduler is a global user thread, while your threads a and b are user threads whose state is local (hence kept on the stack)
• The saved state centers on the stack pointer and frame pointer
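
A minimal sketch of what such a thread table and switch might look like, again built on ucontext; the field layout is illustrative and is not PA #1's actual structures:

  #include <ucontext.h>

  #define MAX_THREADS 8

  struct tcb {              /* per-thread entry: a PCB's role, for threads */
      ucontext_t ctx;       /* saved registers, PC, stack/frame pointers   */
      int in_use;
  };

  static struct tcb thread_table[MAX_THREADS];
  static int current = 0;

  /* Save the running thread's state in its table entry and restore the
   * next runnable thread's state - this is the whole "context switch". */
  void thread_yield(void) {
      int prev = current;
      do {
          current = (current + 1) % MAX_THREADS;
      } while (!thread_table[current].in_use);
      if (current != prev)
          swapcontext(&thread_table[prev].ctx, &thread_table[current].ctx);
  }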

5. (figure-only slide)

6. What is a Process?
(figure: CPU with program counter, registers, and ALU fetching code and data from main memory, which holds program P1's binary, data, heap, and stack)
• A process is a program actively executing from main memory
• It has a Program Counter (PC) and execution state associated with it; CPU registers keep state while it runs
• The OS keeps process state in memory - it's alive!
• It has an address space associated with it: a limited set of (virtual) addresses that the executing code can access

7. How is a Process Structured in Memory?
(figure: run-time memory image from address 0 up to the max address: read-only .init, .text, .rodata; read/write .data, .bss; heap; unallocated space; user stack at the top)
• The run-time memory image is essentially code, data, stack, and heap
• Code and data are loaded from the executable file
• The stack grows downward; the heap grows upward
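
To see this layout on a real system, a small C program can print one address from each region; exact values vary by platform and ASLR, but the relative ordering matches the picture:

  #include <stdio.h>
  #include <stdlib.h>

  int initialized_global = 42;     /* .data */
  int uninitialized_global;        /* .bss  */

  int main(void) {
      int local;                           /* stack */
      int *dynamic = malloc(sizeof(int));  /* heap  */
      printf("code:  %p\n", (void *)main);
      printf("data:  %p\n", (void *)&initialized_global);
      printf("bss:   %p\n", (void *)&uninitialized_global);
      printf("heap:  %p\n", (void *)dynamic);
      printf("stack: %p\n", (void *)&local);
      free(dynamic);
      return 0;
  }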

8. Multiple Processes
(figure: main memory holding the OS with a PCB for each process, plus the code, data, heap, and stack of processes P1 and P2)
The PCB holds, for each process:
• Process state, e.g. ready, running, or waiting
• Accounting info, e.g. process ID
• Program Counter
• CPU registers
• CPU-scheduling info, e.g. priority
• Memory-management info, e.g. base and limit registers, page tables
• I/O status info, e.g. list of open files
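
A sketch of a PCB struct holding the fields listed above; real kernels keep far more (Linux's task_struct runs to hundreds of fields), and the types here are purely illustrative:

  struct pcb {
      int pid;                      /* accounting: process ID     */
      enum { READY, RUNNING, WAITING } state;
      unsigned long pc;             /* saved program counter      */
      unsigned long registers[16];  /* saved CPU registers        */
      int priority;                 /* CPU-scheduling info        */
      void *page_table;             /* memory-management info     */
      int open_files[16];           /* I/O status info            */
  };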

9. Multiple Processes
(figure: the CPU's program counter, registers, and ALU executing one process while the OS keeps PCBs for P1 and P2 in main memory)

10. Context Switching
(figure: numbered steps 1-9 tracing an interrupt from a running process through the interrupt handler and process manager to the next process's executable memory image, P1 ... Pn)
• Each time a process is switched out, its context must be saved, e.g. in the PCB
• Each time a process is switched in, its context is restored
• This usually requires copying of registers

11. Threads
• A thread is a logical flow of execution that runs within the context of a process
• It has its own program counter (PC), register state, and stack
• It shares the memory address space with the other threads in the same process
• Threads in a process share the same code, data, and resources (e.g. open files)

12. Threads
• Why would you want multithreaded processes?
• Reduced context-switch overhead: in Solaris, context switching between processes is about 5x slower than switching between threads
• Shared resources mean less memory consumption, so more threads can be supported - important for scalable systems, e.g. a Web server that must handle thousands of connections
• Inter-thread communication is easier and faster than inter-process communication
• A thread is also called a lightweight process

13. Threads
(figure: process P1's address space with threads 1-3, each having its own PC, register state, and stack but sharing one code, data, and heap; single-threaded process P2 alongside)
• Process P1 is multithreaded
• Process P2 is single-threaded
• The OS is multiprogrammed
• If there is preemptive timeslicing, the system is multitasked

14. Processes & Threads
(figure: a process's address space, program, static data, and resources, with a separate state, stack, and map per thread)

15. Thread-Safe/Reentrant Code
• If two threads share and execute the same code, then the code needs to be thread-safe
• Use of global variables is not thread-safe; use of static variables is not thread-safe; use of local variables is thread-safe
• Access to persistent data like global/static variables must be governed by locking and synchronization mechanisms
• Reentrant is a special case of thread-safe: reentrant code makes no references to global variables at all, while thread-safe code protects and synchronizes its accesses to them (see the sketch below)
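
A small pthreads illustration of the distinction; the function and variable names are made up for the example:

  #include <pthread.h>

  static int hits = 0;                          /* shared global state */
  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

  void count_unsafe(void) {
      hits++;       /* read-modify-write race if two threads run this */
  }

  void count_safe(void) {                       /* thread-safe        */
      pthread_mutex_lock(&lock);                /* serialize access   */
      hits++;
      pthread_mutex_unlock(&lock);
  }

  int count_reentrant(int local_hits) {         /* reentrant: touches */
      return local_hits + 1;                    /* no globals at all  */
  }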

16. User-Space and Kernel Threads
• pthreads is a POSIX user-space threading API
• It provides an interface to create and delete threads within the same process
• Threads synchronize with each other via this package, with no need to involve the OS
• Implementations of the pthreads API differ underneath the API
• Kernel threads are supported by the OS: the kernel must be involved in switching them
• The mapping of user-level threads to kernel threads is usually one-to-one
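
A minimal pthreads usage sketch (compile with -lpthread; error checking omitted for brevity):

  #include <pthread.h>
  #include <stdio.h>

  void *worker(void *arg) {
      printf("thread %ld running\n", (long)arg);
      return NULL;
  }

  int main(void) {
      pthread_t t1, t2;
      pthread_create(&t1, NULL, worker, (void *)1L);
      pthread_create(&t2, NULL, worker, (void *)2L);
      pthread_join(t1, NULL);       /* wait for both threads to finish */
      pthread_join(t2, NULL);
      return 0;
  }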

17. Model of Process Execution
(figure: new processes enter the Ready List; the scheduler moves a "ready" job to the CPU, where it is "running" until done, preempted, or voluntarily yielding; jobs that request resources become "blocked" until the resource manager allocates the resources and returns them to the Ready List)

18. The Scheduler
(figure: a ready process arrives from other states; the enqueuer places its process descriptor on the Ready List; the dispatcher, aided by the context switcher, selects a process and gives it the CPU to run)

19. Invoking the Scheduler
• Need a mechanism to call the scheduler
• Voluntary call: the process blocks itself and calls the scheduler
• Involuntary call: an external force (interrupt) blocks the process and calls the scheduler

20. Voluntary CPU Sharing

  yield(pi.pc, pj.pc) {
      memory[pi.pc] = PC;
      PC = memory[pj.pc];
  }

• pi can be "automatically" determined from the processor status registers, giving:

  yield(*, pj.pc) {
      memory[pi.pc] = PC;
      PC = memory[pj.pc];
  }

21. More on Yield
• pi and pj can resume one another's execution:

  yield(*, pj.pc);
  . . .
  yield(*, pi.pc);
  . . .
  yield(*, pj.pc);
  . . .

• Suppose pj is the scheduler:

  // pi yields to scheduler
  yield(*, pj.pc);
  // scheduler chooses pk
  yield(*, pk.pc);
  // pk yields to scheduler
  yield(*, pj.pc);
  // scheduler chooses ...

22. Voluntary Sharing
• Every process periodically yields to the scheduler
• This relies on correct process behavior
• A process can fail to yield, whether maliciously (an intentional infinite loop, while(1)) or accidentally (a logical error, while(!DONE))
• A process can yield too soon: unfair to the "nice" processes that give up the CPU while others do not
• A process can fail to yield in time: another process urgently needs the CPU to read incoming data flowing into a bounded buffer, but doesn't get the CPU in time to prevent the buffer from overflowing and dropping information
• Hence we need a mechanism to override the running process

23. Involuntary CPU Sharing
• Interval timer: a device that produces a periodic interrupt, with a programmable period

  IntervalTimer() {
      InterruptCount--;
      if (InterruptCount <= 0) {
          InterruptRequest = TRUE;
          InterruptCount = K;
      }
  }

  SetInterval(programmableValue) {
      K = programmableValue;
      InterruptCount = K;
  }

24. Involuntary CPU Sharing (cont)
• Interval timer device handler: keeps an in-memory clock up to date (see the Chapter 4 lab exercise) and invokes the scheduler

  IntervalTimerHandler() {
      Time++;                // update the clock
      TimeToSchedule--;
      if (TimeToSchedule <= 0) {
          <invoke scheduler>;
          TimeToSchedule = TimeSlice;
      }
  }

25. Contemporary Scheduling
• Involuntary CPU sharing via timer interrupts
• The time quantum is determined by the interval timer and is usually a fixed size for every process using the system
• Sometimes called the time slice length

26. Choosing a Process to Run
(figure: as before, the enqueuer adds process descriptors to the Ready List and the dispatcher context-switches the chosen process onto the CPU)
• The mechanism never changes
• Strategy = the policy the dispatcher uses to select a process from the ready list
• Different policies suit different requirements

27. Policy Considerations
• Policy can control/influence:
  • CPU utilization
  • Average time a process waits for service
  • Average amount of time to complete a job
• A policy could strive for any of:
  • Equitability
  • Favoring very short or long jobs
  • Meeting priority requirements
  • Meeting deadlines

28. Optimal Scheduling
• Suppose the scheduler knows each process pi's service time t(pi), or can estimate each t(pi)
• Policy can then optimize on any criterion, e.g. CPU utilization, waiting time, or deadlines
• To find an optimal schedule:
  • Have a finite, fixed number of pi
  • Know t(pi) for each pi
  • Enumerate all schedules, then choose the best

29. However ...
• The t(pi) are almost certainly just estimates
• A general algorithm to choose the optimal schedule is O(n²)
• Other processes may arrive while these processes are being serviced
• So the optimal schedule is usually only a theoretical benchmark; practical scheduling policies try to approximate it

30. Model of Process Execution (repeated)
(figure: the same state diagram as slide 17 - Ready List, Scheduler, CPU, and Resource Manager with "ready", "running", and "blocked" jobs)

31. Talking About Scheduling ...
• Let P = {pi | 0 ≤ i < n} = the set of processes
• Let S(pi) ∈ {running, ready, blocked}
• Let t(pi) = the time process pi needs to be in the running state (the service time)
• Let W(pi) = the time pi is in the ready state before its first transition to running (the wait time)
• Let TTRnd(pi) = the time from pi first entering ready to its last exit from ready (the turnaround time)
• Batch throughput rate = inverse of the average TTRnd
• Timesharing response time = W(pi)

32. Simplified Model
(figure: the same pipeline, with preemption/voluntary yield and the blocked path ignored)
• Simplified, but still provides analysis results
• Easy to analyze performance
• No issue of voluntary/involuntary sharing

33. Estimating CPU Utilization
(figure: new processes enter the Ready List, the scheduler feeds the CPU, and finished processes leave)
• Let λ = the average rate at which processes are placed in the Ready List (the arrival rate)
• Let μ = the average service rate, so 1/μ = the average t(pi)
• λ processes per second enter the system, and each pi uses 1/μ units of the CPU

34. Estimating CPU Utilization
• Let λ = the average arrival rate and μ = the average service rate, as before
• Let ρ = the fraction of the time that the CPU is expected to be busy
• ρ = (number of pi that arrive per unit time) × (average time each spends on the CPU)
• ρ = λ × 1/μ = λ/μ
• Notice we must have λ < μ (i.e., ρ < 1)
• What if ρ approaches 1?
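
For instance, suppose processes arrive at λ = 3 per second and the average service time is 1/μ = 0.25 seconds (so μ = 4 per second). Then ρ = λ/μ = 3/4: the CPU is expected to be busy about 75% of the time. As λ approaches μ, ρ approaches 1 and the ready list, and hence the waiting time, grows without bound.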

35. Nonpreemptive Schedulers
(figure: the simplified pipeline, with blocked or preempted processes re-entering the Ready List)
• Try to use the simplified scheduling model
• Only consider the running and ready states; ignore time in the blocked state:
  • "A new process is created when it enters the ready state"
  • "A process is destroyed when it enters the blocked state"
• Really we are just looking at "small phases" of a process

36. First-Come-First-Served
  i   t(pi)
  0    350
  1    125
  2    475
  3    250
  4     75
(timeline: p0 runs from 0 to 350)
TTRnd(p0) = t(p0) = 350
W(p0) = 0

37. First-Come-First-Served (service-time table as above)
(timeline: p1 runs from 350 to 475)
TTRnd(p0) = t(p0) = 350
TTRnd(p1) = t(p1) + TTRnd(p0) = 125 + 350 = 475
W(p0) = 0
W(p1) = TTRnd(p0) = 350

38. First-Come-First-Served (service-time table as above)
(timeline: p2 runs from 475 to 950)
TTRnd(p2) = t(p2) + TTRnd(p1) = 475 + 475 = 950
TTRnd(p0) = 350, TTRnd(p1) = 475
W(p0) = 0, W(p1) = 350
W(p2) = TTRnd(p1) = 475

39. First-Come-First-Served (service-time table as above)
(timeline: p3 runs from 950 to 1200)
TTRnd(p3) = t(p3) + TTRnd(p2) = 250 + 950 = 1200
TTRnd(p0) = 350, TTRnd(p1) = 475, TTRnd(p2) = 950
W(p0) = 0, W(p1) = 350, W(p2) = 475
W(p3) = TTRnd(p2) = 950

40. First-Come-First-Served (service-time table as above)
(timeline: p4 runs from 1200 to 1275)
TTRnd(p4) = t(p4) + TTRnd(p3) = 75 + 1200 = 1275
TTRnd(p0) = 350, TTRnd(p1) = 475, TTRnd(p2) = 950, TTRnd(p3) = 1200
W(p0) = 0, W(p1) = 350, W(p2) = 475, W(p3) = 950
W(p4) = TTRnd(p3) = 1200

41. FCFS Average Wait Time (service-time table as above)
• Easy to implement
• Ignores service time, etc.
• Not a great performer
(timeline: p0 0-350, p1 350-475, p2 475-950, p3 950-1200, p4 1200-1275)
TTRnd(p0) = 350, TTRnd(p1) = 475, TTRnd(p2) = 950, TTRnd(p3) = 1200, TTRnd(p4) = 1275
W(p0) = 0, W(p1) = 350, W(p2) = 475, W(p3) = 950, W(p4) = 1200
Wavg = (0 + 350 + 475 + 950 + 1200)/5 = 2975/5 = 595
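
The FCFS arithmetic above is easy to mechanize; a small C sketch that reproduces the numbers (the service times come from the table, everything else is illustrative):

  #include <stdio.h>

  int main(void) {
      int t[] = {350, 125, 475, 250, 75};   /* service times t(pi)     */
      int n = 5, clock = 0;
      double wsum = 0;
      for (int i = 0; i < n; i++) {         /* serve in arrival order  */
          int W = clock;                    /* wait = time already used */
          clock += t[i];                    /* TTRnd = wait + service   */
          wsum += W;
          printf("p%d: W=%4d TTRnd=%4d\n", i, W, clock);
      }
      printf("Wavg = %.0f\n", wsum / n);    /* prints 595 */
      return 0;
  }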

42. Predicting Wait Time in FCFS
• In FCFS, when a process arrives, everything already on the ready list will be processed before this job
• Let μ be the service rate and L be the ready-list length
• Wavg(p) = L × 1/μ + 0.5 × 1/μ = L/μ + 1/(2μ)
• (the arriving job waits for the L queued jobs, plus, on average, the remaining half of the job currently in service)
• Compare the predicted wait with the actual waits in the earlier examples
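
As a quick check against the FCFS example: the average service time there is 1/μ = (350 + 125 + 475 + 250 + 75)/5 = 255. A job arriving to find L = 4 jobs queued would be predicted to wait roughly 4 × 255 + 255/2 ≈ 1147.5 time units, in the same ballpark as p4's actual wait of 1200 (the estimate assumes the job in service is, on average, half finished).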

43. Shortest Job Next (service-time table as above)
(timeline: p4 runs from 0 to 75)
TTRnd(p4) = t(p4) = 75
W(p4) = 0

44. Shortest Job Next (service-time table as above)
(timeline: p4 0-75, p1 75-200)
TTRnd(p1) = t(p1) + t(p4) = 125 + 75 = 200
TTRnd(p4) = 75
W(p1) = 75
W(p4) = 0

45. Shortest Job Next (service-time table as above)
(timeline: p4 0-75, p1 75-200, p3 200-450)
TTRnd(p3) = t(p3) + t(p1) + t(p4) = 250 + 125 + 75 = 450
TTRnd(p1) = 200, TTRnd(p4) = 75
W(p1) = 75, W(p3) = 200, W(p4) = 0

46. Shortest Job Next (service-time table as above)
(timeline: p4 0-75, p1 75-200, p3 200-450, p0 450-800)
TTRnd(p0) = t(p0) + t(p3) + t(p1) + t(p4) = 350 + 250 + 125 + 75 = 800
TTRnd(p1) = 200, TTRnd(p3) = 450, TTRnd(p4) = 75
W(p0) = 450, W(p1) = 75, W(p3) = 200, W(p4) = 0

47. Shortest Job Next (service-time table as above)
(timeline: p4 0-75, p1 75-200, p3 200-450, p0 450-800, p2 800-1275)
TTRnd(p2) = t(p2) + t(p0) + t(p3) + t(p1) + t(p4) = 475 + 350 + 250 + 125 + 75 = 1275
TTRnd(p0) = 800, TTRnd(p1) = 200, TTRnd(p3) = 450, TTRnd(p4) = 75
W(p0) = 450, W(p1) = 75, W(p2) = 800, W(p3) = 200, W(p4) = 0

48. Shortest Job Next (service-time table as above)
• Minimizes wait time
• May starve large jobs
• Must know service times
(timeline: p4 0-75, p1 75-200, p3 200-450, p0 450-800, p2 800-1275)
W(p0) = 450, W(p1) = 75, W(p2) = 800, W(p3) = 200, W(p4) = 0
Wavg = (450 + 75 + 800 + 200 + 0)/5 = 1525/5 = 305
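
SJN is just FCFS applied to the service times sorted ascending, so the earlier FCFS sketch needs only a qsort; again a rough illustration, not a production scheduler:

  #include <stdio.h>
  #include <stdlib.h>

  static int cmp(const void *a, const void *b) {
      return *(const int *)a - *(const int *)b;   /* shortest first */
  }

  int main(void) {
      int t[] = {350, 125, 475, 250, 75};
      int n = 5, clock = 0;
      double wsum = 0;
      qsort(t, n, sizeof t[0], cmp);
      for (int i = 0; i < n; i++) {
          wsum += clock;                           /* wait before running */
          clock += t[i];
      }
      printf("Wavg = %.0f\n", wsum / n);           /* prints 305 */
      return 0;
  }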

49. Priority Scheduling
  i   t(pi)  Pri
  0    350    5
  1    125    2
  2    475    3
  3    250    1
  4     75    4
• Reflects the importance of external use
• May cause starvation
• Starvation can be addressed with aging
(timeline, highest priority (lowest number) first: p3 0-250, p1 250-375, p2 375-850, p4 850-925, p0 925-1275)
TTRnd(p0) = t(p0) + t(p4) + t(p2) + t(p1) + t(p3) = 350 + 75 + 475 + 125 + 250 = 1275
TTRnd(p1) = t(p1) + t(p3) = 125 + 250 = 375
TTRnd(p2) = t(p2) + t(p1) + t(p3) = 475 + 125 + 250 = 850
TTRnd(p3) = t(p3) = 250
TTRnd(p4) = t(p4) + t(p2) + t(p1) + t(p3) = 75 + 475 + 125 + 250 = 925
W(p0) = 925, W(p1) = 250, W(p2) = 375, W(p3) = 0, W(p4) = 850
Wavg = (925 + 250 + 375 + 0 + 850)/5 = 2400/5 = 480

50. Deadline Scheduling
  i   t(pi)  Deadline
  0    350     575
  1    125     550
  2    475    1050
  3    250    (none)
  4     75     200
• Allocates service by deadline
• May not be feasible
(figure: deadlines marked at 200, 550, 575, and 1050 on a 0-1275 timeline; three alternative orderings are shown that meet every deadline: p1 p4 p0 p2 p3, p4 p1 p0 p2 p3, and p4 p0 p1 p2 p3)
