
Processes




Presentation Transcript


  1. Processes • CPU activities - running programs/jobs/processes • Program - passive entity (code, possibly data) • Process - an active entity - a program in the course of execution, including • register values (program counter, flags, etc.) • stack contents (temporary variables, procedure calls) • data section (values of current local and global variables) • Even though two processes may be instances of the same program, their active values can differ

  2. Process Control Block • OS represents each process by this information: • Process state (new, running, waiting, ready, halted) • Program counter (address of next instruction) • CPU registers (accumulator, index, stack ptr, flags) • CPU scheduling info (priority, position in queue) • Memory-management info (base, limit reg, page table) • Accounting info (CPU time, real time, limits, acct #, etc) • I/O status info (I/O devices needed, open files, etc) • See figures 4.1 for process state example and 4.2 for PCB example (pages 90-91)

  3. Scheduling • Objective in multiprogramming is to keep CPU busy • always have a process running • If more than 1 process • OS must schedule which process to execute • Scheduling Queues (see fig 4.5 p. 95) • Job queue (all processes to be executed) • Ready queue (processes waiting for the CPU) • I/O queue(s) (processes requiring particular I/O device) • Schedulers - different algorithms to schedule processes - we will look at these in chapter 5

  4. Context Switch • CPU switching from one process to another • Requires saving old process state somewhere (usually with the process in a queue) and loading new process state • Context switching is pure overhead (no process execution occurs during the switch) • Often uses hardware support to speed it up (especially if it only requires register switching) • Could take between 1 and 1000 microseconds and is often a system bottleneck • See example in figure 4.3 p. 92

  5. Process Creation • OS requires a method of dynamic process creation • Creating a new user process requires a system call • The creating process is the parent, the created process is the child -- this creates a tree of processes (see figure 4.7, page 98) • Child processes may acquire resources directly from the OS or may be constrained to the resources of the parent process • Parent may continue concurrently or wait until the child terminates -- the child may be a duplicate of the parent • In Unix, fork() is a system call to create a copy of a process (with a unique process id) • Some OS's use specific address locations for a given type of process, others are more flexible

  6. Process Termination • When a process ends, it issues an exit system call asking the OS to delete it • The process may then transmit data (its exit status) to its parent process • All resources of the terminating process are then deallocated • Parent may terminate a child process in certain situations (child has exceeded its resources, is no longer required, or the parent itself is terminating) • In VMS, children cannot exist if the parent has terminated

  7. Cooperating Processes • Processes that affect or are affected by other processes of the system • Reasons for cooperating processes: • information sharing • computational speedup (parallelize a task) • modularity • convenience (a user may have several processes pertaining to one project in several different states) • Cooperating processes must be synchronized

  8. Consumer-Producer problem • A process that uses something created by another process is a consumer • A process that produces something for another process is a producer • Examples • compiler produces assembly code, assembler uses this and produces object code, loader uses this and produces executable code • a print program creates a PostScript file for the print driver • Consumers and producers must be synchronized so that information is available for the consumer when it is needed

  9. Bounded and Unbounded Buffers • Consider a consumer who retrieves info from a buffer and a producer that places info in the same buffer • Unbounded buffer • no limit on how much can be placed there (e.g. a linked list), but the consumer must still wait if the buffer is empty • Bounded buffer • limit on the amount that can be placed there, restricting both the consumer (can't read from an empty buffer) and the producer (can't add to a full buffer) • 0-capacity buffer • buffer which cannot store anything -- in this case, the consumer and producer communicate directly via a message

  10. Implementing Bounded Buffer
  Producer:
      repeat
          ...
          produce an item in nextp
          ...
          while in+1 mod n = out do no-op;
          buffer[in] := nextp;
          in := in+1 mod n;
      until false;
  Consumer:
      repeat
          while in = out do no-op;
          nextc := buffer[out];
          out := out+1 mod n;
          ...
          consume the item in nextc
          ...
      until false;

  11. Threads • A process is defined by the resources it uses and its current execution status (PC, registers) • There are situations where it is useful to share resources concurrently • A thread (lightweight process) is a process which shares its code section, data section and OS resources (like files) with other processes but has its own PC/register values, stack space

  12. Threads vs. Processes • A traditional process is now a heavyweight process, or a task with 1 thread • A task with several threads behaves like a set of processes whose state is partly shared (code, data, OS resources) and partly private (registers, stack) • Threads may be user-level threads or kernel-level threads (which are less efficient to switch but more flexible)

  13. Example • Consider a process, such as telnet • Open a telnet process to computer1 • After the connection is established, open a telnet process to computer2 • If these were processes • some systems (such as non-virtual-memory OS's) may not allow two instances of the same process running at the same time • If these were processes and one blocked • then the other might block • As threads • they would share the same code and data but use different registers, and would not block each other

  14. Context-Switching of Threads • Because threads share the same code and data and only differ in registers, a context switch between two threads is fast and simple • just switch register sets • this is often performed in hardware, not software • Context Switching of two processes may also require saving or moving data in memory and is slower

  15. User-level threads • potentially thousands; each needs only a small data structure and a stack • Intermediate-level threads are lightweight processes • switching between these is slow because of the need to move info in memory • Kernel-level threads • contain a small data structure and a small stack; switching between these is fast and easy • User tasks are composed of user-level threads (fast switching), which are realized as lightweight processes, which themselves invoke kernel-level threads (fast switching) • See figure 4.9 p. 107 • Example: Solaris 2

  16. Interprocess Communications • Processes may need to communicate with each other, especially if there is a need for synchronization • Processes can communicate via a buffer (i.e. all related processes share buffers) or • by some interprocess communications (IPC) facility supplied by the OS in the form of message passing

  17. Communications Mechanism • Most IPC facilities use a message-passing system where one process issues a send(message) and the other issues a receive(message) • This creates a communications link • How are links established? • Can a link be associated with more than 2 processes? • How many links can exist between 2 processes? • What is the capacity of the link? • Are messages fixed or variable sized? • Is the link unidirectional or bidirectional?

  18. Direct Communications • send(P, message) and receive(Q, message), where P and Q denote the process names • Link is established automatically by the OS -- the processes need only know each other's names • Link is associated with exactly 2 processes and there is exactly 1 link between the two • Link is usually bidirectional (but not in both directions at the same time)

  19. Asymmetric Direct Communication • In Symmetric Communication, both processes need to know both names • In Asymmetric, only the sending process need know the name of the receiver • The receiver specifies receive(id, message) where id is determined by the OS when the message is received • In either situation, the sender needs to know the name and this may require recompilation if the sender wants to send to a different process

  20. Indirect Communications • OS creates a shared mailbox used to link two processes together -- send(A, message) and receive(A, message), where A is the mailbox • Or, A might be a variable in shared memory • Mailbox sharing allows more than 2 processes to communicate • Owner of a mailbox is a process (usually explicitly declared) -- only the owner can receive messages, whereas anyone can send a message

  21. Buffering • Communications can be performed through a queue of links. These queues can have different lengths: • 0 capacity - this requires synchronization between the processes (called a rendezvous) where the sender will be blocked until synch • bounded - finite length queue where sender is blocked if queue is full and receiver is blocked if queue is empty • unbounded - unlimited length, sender is never blocked

  22. Message Responses • Rather than a plain send, a process might use a send that requires an acknowledgement (a reply) • In such a situation, the sender is blocked until the acknowledgement is received • This aids reliability, since lost messages can be detected and resent

  23. Exception Conditions • When a failure occurs, some special handling is needed -- this is true of any situation, including message passing • If process Q has terminated but P is still waiting for a message from it, P waits forever • P may send a message to a Q which has terminated -- no problem unless P needs a response • OS might be able to detect such situations using timeouts or other mechanisms • In such a situation, the OS might be responsible for notifying the waiting process

  24. Lost and Scrambled Messages • Lost message • IPC that was never received • detected by having sender time stamp the message and keep a copy • if, after some time, receiver has not responded, then message is lost and sender resends • Scrambled message • message received but error codes used to determine message was not received correctly • receiver responds with a request for resending, sender resends • See Mach and WinNT examples on p. 116-119
