
Processes, Schedulers, Threads



  1. Processes, Schedulers, Threads Sorin Manolache sorma@ida.liu.se

  2. Last on TTIT61 • The OS consists of • A user interface for controlling programs (starting, interrupting) • A set of device drivers for accessing the hardware • A set of system calls as a program interface to hardware (and not only, we’ll see later) • Process scheduler that schedules process execution and manages the process state • Memory management • File system • Others

  3. Lecture Plan • What is an operating system? What are its functions? Basics of computer architectures. (Part I of the textbook) • Processes, threads, schedulers (Part II , chap. IV-VI) • Synchronisation (Part II, chap. VII) • Primary memory management. (Part III, chap. IX, X) • File systems and secondary memory management (Part III, chap. XI, XII, Part IV) • Security (Part VI)

  4. Outline • The concept of “process” • Memory layout of a process • Process state and state transition diagram • Process Control Blocks • Context switches • Operations on processes (create, terminate, etc.) • Threads • Motivation, user vs. kernel • Multi-threading models, threading issues • CPU scheduling • Criteria, algorithms

  5. What Is a Process? • In the previous lecture, I’ve used “processes” and “programs” interchangeably hoping you will not notice • A program is a passive entity, we use it to refer to the executable file on the disk (or memory stick, etc., from where it is eventually loaded in the main memory) • Definition: • A process is an active entity, it is an executing program, it is an instance of the program • We can have several processes of the same program

  6. Memory Layout of a Process • The text segment contains the code of the program • The data segment contains the global variables • The stack segment contains the return addresses of function calls and, in most OSs, the local variables • The heap contains the dynamically allocated memory. It typically sits at the opposite end of the address space from the stack, and the two grow towards each other. [Figure: memory layout with text (code), data, heap, and stack segments]

  7. The Data Segment(s) • A process may contain several data segments: • Read-only data: printf("%d\n", i) • "%d\n" is read-only • Initialised data: int a = 20; • The value 20 goes into the executable file on the disk. When the program is loaded into memory, the memory location corresponding to 'a' is initialised with 20 • Uninitialised data: int b; • No space for 'b' is reserved in the executable file on the disk. The executable file header just specifies that the uninitialised data segment is X bytes long. When the program is loaded into memory, a segment of X bytes is reserved (and typically zero-filled) for uninitialised data.

  8. The Stack Segment

int fact(int n) {
  int xp;
1000:  if (n == 0)
1001:    return 1;
1002:  xp = fact(n - 1);
1003:  xp *= n;
1004:  return xp;
}

main() {
  int rlt;
1005:  rlt = fact(3);
1006:  printf("%d\n", rlt);
}

[Figure: stack snapshot during fact(3) — one frame per active call, each holding the argument n (3, 2, 1, 0), the return address (1006 for the frame pushed by main, 1003 for each recursive call), and the local variable xp; after the calls unwind, rlt = 6]

  9. Process Execution • A register of the CPU, the program counter (PC), contains the address of the next instruction to execute (it points into the code segment) • Another register of the CPU, the stack pointer (SP), contains the address of the top of the stack (it points into the stack segment)

  10. Segment Sharing • Can two processes of the same program share the code segment? What would we gain/lose if yes? • Can two processes, not necessarily of the same program, share the data segment? Why would we (not) want that? • Can two processes, not necessarily of the same program, share the stack segment?

  11. Outline • The concept of “process” • Memory layout of a process • Process state and state transition diagram • Process Control Blocks • Context switches • Operations on processes (create, terminate, etc.) • Threads • Motivation, user vs. kernel • Multi-threading models, threading issues • CPU scheduling • Criteria, algorithms

  12. Process States [State transition diagram] • New → Ready: admitted • Ready → Running: dispatch • Running → Ready: preemption • Running → Waiting: I/O request, wait for event • Waiting → Ready: I/O done, event completion • Running → Terminated: exit

  13. Outline • The concept of “process” • Memory layout of a process • Process state and state transition diagram • Process Control Blocks • Context switches • Operations on processes (create, terminate, etc.) • Threads • Motivation, user vs. kernel • Multi-threading models, threading issues • CPU scheduling • Criteria, algorithms

  14. Process Control Block (PCB) • Is a memory area in the OS kernel memory • One for each process • Contains the data needed by the OS in order to manage the process to which the PCB corresponds • It is also called the context of the process • When the OS switches the process that runs on the CPU, we say that it performs a context switch

  15. Contents of the PCB • Program counter value, stack pointer value, and the value of all other registers • Memory management information (base + limit registers, translation tables) • CPU scheduling information (process priority, pointers to scheduling queues) • Accounting information (CPU time used, real time used, etc) • I/O status (devices allocated to the process, list of open files, etc)
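The PCB contents listed above can be pictured as a C struct. The following is a hypothetical, simplified sketch — field names and sizes are invented; a real kernel's PCB (e.g. Linux's struct task_struct) holds far more:

```c
#include <stdint.h>

/* Process states from the state transition diagram. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* Illustrative Process Control Block: one per process, kept in
   kernel memory; saved/restored on every context switch. */
struct pcb {
    int             pid;
    enum proc_state state;
    /* CPU context saved on a context switch */
    uintptr_t       pc;            /* program counter */
    uintptr_t       sp;            /* stack pointer */
    uintptr_t       regs[16];      /* other general-purpose registers */
    /* memory management information */
    uintptr_t       base, limit;   /* base + limit registers */
    /* CPU scheduling information */
    int             priority;
    struct pcb     *next_in_queue; /* link in a scheduling queue */
    /* accounting information */
    unsigned long   cpu_time_used;
    /* I/O status */
    int             open_files[16];
};
```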

  16. Context Switch • Process A is running • Context switch: save the state of A into PCB_A, load the state of B from PCB_B • Process B is running • Context switch: save the state of B into PCB_B, load the state of A from PCB_A • Process A is running again

  17. Ready and I/O Queues [Queueing diagram] • The ready queue feeds the CPU • On an I/O request, the process joins an I/O queue until the device completes • On time-slice expiry, the process returns to the tail of the ready queue • A process waiting for an interrupt rejoins the ready queue when the interrupt occurs

  18. Scheduling • The scheduler is the routine that selects a process from the ready queue. This is the short-term scheduler. It typically runs at least once every 100 ms.

  19. Long-Term Scheduler • Degree of multiprogramming: the number of processes in memory at the same time • If this degree is stable → the number of newly created processes over a time interval roughly equals the number of processes that terminated in the same interval • The long-term scheduler runs whenever a process terminates, in order to decide which new process to bring into memory • It has to select an appropriate process mix • Too many CPU-bound processes → devices under-utilised, process execution times much longer than with a more balanced mix • Too many device-bound (I/O-bound) processes → under-utilised CPU

  20. Outline • The concept of “process” • Memory layout of a process • Process state and state transition diagram • Process Control Blocks • Context switches • Operations on processes (create, terminate, etc.) • Threads • Motivation, user vs. kernel • Multi-threading models, threading issues • CPU scheduling • Criteria, algorithms

  21. Operations on Processes • Creation: system call called fork in Unix • Termination: system call called exit • Loading of a new process image (code, data, stack segments): system call called exec in Unix • Waiting for the termination of a child: system call called wait or waitpid in Unix • man -s 2 fork/exit/exec/wait/waitpid

  22. Fork • Fork creates a "clone" of the invoking process. The invoking process becomes the parent, and the "clone" is the child. Each child has exactly one parent; a parent may have zero or more children • The child inherits the resources of the parent (set of open files, scheduling priority, etc.)

  23. CoW • However, the child has its own memory space. Its memory space contains the same data as the memory space of the parent, just that it is a copy. • Parent and child do not share data and stack segments. Each has its own copy that it can modify independently. • Should parent and child have different copies of read-only segments? Do they share the code segment? • Actually, in modern Unixes, data segments are copied in a lazy manner, i.e. only when one of the two processes writes to the data is it actually copied. • This lazy copying technique is called "copy-on-write" (CoW)

  24. Code Example pid = fork(); if (pid == 0) { printf(“I am the child. My ID is %d and my parent’s ID is %d\n”, getpid(), getppid()); execlp(“/bin/ls”, “ls”, “-l”, “/home/TTIT61”, 0); exit(0); } else { printf(“I am the parent. My child’s ID is %d\n”, pid); waitpid(pid, &status, 0); }

  25. Co-operating Processes • Parent and child processes have separate memory spaces; it is as if each is unaware that the other process exists. • Sometimes this is not desirable; we would like to pass data from one process to the other • E.g.: • gzip -dc nachos-3.4.tar.gz | tar xf - • Mail composer + spell checker

  26. Inter-Process Communication Mechanisms • Pipes (gzip –dc nachos-3.4.tar.gz | tar xf -) • Signals (kill -9 pid) • Message queues • Semaphores, condition variables, locks, etc. • Shared memory segments • Network sockets (http, ftp, X windows, etc.)

  27. Outline • The concept of “process” • Memory layout of a process • Process state and state transition diagram • Process Control Blocks • Context switches • Operations on processes (create, terminate, etc.) • Threads • Motivation, user vs. kernel • Multi-threading models, threading issues • CPU scheduling • Criteria, algorithms

  28. Context Switch [Figure: process A running, context switch, process B running, context switch, process A running again — the context-switch overhead is a performance bottleneck]

  29. Threads • Also known as lightweight processes • They do share the data segment • Do they share the stack segment?

  30. Single vs. Multi-Threaded Processes [Figure: a single-threaded process has one set of registers and one stack alongside its code, data, and files; in a multi-threaded process the threads share code, data, and files, but each thread has its own registers and its own stack]

  31. Advantages of Threads • Resource sharing (memory segments) • Faster creation and destruction (about 30 times faster on Solaris 2) → the application is much more responsive • Faster context switch (about 5 times faster on Solaris 2)

  32. User Threads and Kernel Threads • Kernel threads: threads that are visible to the OS • They are a scheduling unit • Thread creation, scheduling, and management are done in kernel space → slightly slower than user threads • If a kernel thread blocks (on I/O, for example), the kernel is able to schedule a different kernel thread or process → rather efficient • User threads: implemented by a thread library at the user level • They are not a scheduling unit • Creation, scheduling, and management are done in user space (by the library) → faster than kernel threads • If a user thread blocks, all user threads belonging to the same scheduling unit (the encapsulating process) block

  33. Multi-Threading Models • Many-to-one: many user threads are multiplexed onto a single kernel thread [Figure: several user threads mapped onto one kernel thread]

  34. Multi-Threading Models • One-to-one: each user thread is mapped onto its own kernel thread [Figure: one kernel thread per user thread]

  35. Multi-Threading Models • Many-to-many: many user threads are multiplexed onto a smaller or equal number of kernel threads [Figure: several user threads multiplexed onto several kernel threads]

  36. Threading Issues • Fork and exec? • Should the child process be multi-threaded, or should only the calling thread be cloned in a new process? • exec invoked by one thread replaces the entire process • Signals? Which thread should get the signal?

  37. Outline • The concept of “process” • Memory layout of a process • Process state and state transition diagram • Process Control Blocks • Context switches • Operations on processes (create, terminate, etc.) • Threads • Motivation, user vs. kernel • Multi-threading models, threading issues • CPU scheduling • Criteria, algorithms

  38. CPU Scheduling • Why scheduling? • For using resources efficiently • I/O is very much slower than the CPU (CPUs run billions of instructions per second; hard disk and network accesses take milliseconds) • When a process makes an I/O request, it has to wait. During this time, the CPU would be idle if the OS did not schedule a ready process on it.

  39. Non-Preemptive vs. Preemptive • If a scheduling decision is taken only when a process terminates or moves to the waiting state because of the unavailability of a resource, the scheduling is non-preemptive (Windows 3.1, Apple Macintosh) • If a scheduling decision is taken also when a process becomes ready to execute (moves from waiting or running state to ready state), the scheduling is preemptive

  40. Non-Preemptive vs. Preemptive • Non-preemptive scheduling requires no hardware support (timer). The OS is also less complex. • Preemptive scheduling leads to shorter response times. • However, operations by the kernel on some data have to be performed atomically (i.e. without being preempted while in the middle of managing that data) in order to avoid data inconsistencies • A common Unix solution is preemptive scheduling of processes and non-preemptable system calls. • However, the problem persists because of interrupts from the hardware, which may not be ignored. • Either disable interrupts or, better, use fine-grained locking

  41. Dispatcher • Once the new process to run is selected by the scheduler, the dispatcher • Stops the currently running process (if any) • Switches context • Switches to user mode • Jumps to the proper location in the user program to restart it • The time it takes the dispatcher to do that is the dispatch latency

  42. Scheduling Criteria • CPU utilisation – keep the CPU as busy as possible • The load can be between 0 and 100%. 'uptime' in Unix, however, reports the average number of processes in the ready queue, which is why its value may be greater than 1 • Throughput – number of finished processes per time unit • Turnaround time – length of the time interval between the submission time and the finishing time of a process • Waiting time – length of time spent waiting in the ready queue • Response time – length of the time interval between the process submission time and the production of the first results

  43. First-Come First-Served • Simple • Non-preemptive • Non-minimal waiting times • Convoy effect

  44. Shortest Job First • Optimal w.r.t. waiting time • How could we know the length of the next CPU burst? • Take the user-specified maximum CPU time • Cannot be implemented as the short-term scheduling algorithm. We cannot know the length of the next CPU burst. We can predict it with various methods (see exponential average, textbook, section 6.3.2)

  45. Priority Scheduling • Processes are given priorities offline, not by the OS • More flexible in the sense that priorities capture aspects such as importance of the job, (financial) reward, etc. • Starvation – low-priority processes never get to the CPU • Can be countered by aging, i.e. slowly raising the priorities of processes that have waited for a long time

  46. Round-Robin Scheduling • Used in time-sharing systems • Every time quantum (10–100 ms), a new process from the ready queue is dispatched • The old one is put at the tail of the ready queue • If the time quantum is very small, we get processor sharing, i.e. each of the n processes has the impression that it runs alone on an n-times slower processor → but too many context switches • Average waiting time is rather long

  47. Multi-Level Queue Scheduling • Processes are assigned to different queues, based on some properties (interactive or not, memory size, etc.) • There is a scheduling policy between queues, and a scheduling policy for each queue • Example, from high priority to low: system processes, interactive processes, interactive editing processes, batch processes, student processes

  48. Further Reading • Operations on processes (man pages, http://www.linuxhq.com/guides/LPG/node7.html) • Signals in Unix (man signal, man sigaction) • Pthreads (man pthreads) • Multi-processor scheduling (section 6.4) • Real-time scheduling (section 6.5)

  49. Summary • Processes are executing programs • Kernel manages them, their state, associated data (open files, address translation tables, register values, etc.) • Threads are lightweight processes, i.e. they share data segments, are cheaper to spawn and terminate • Scheduler selects next process to run among the ready ones • Various scheduling algorithms and criteria to evaluate them
