
Process Management






Presentation Transcript


  1. Process Management

  2. Objectives • Explain the need for process management • Discuss several methods of dealing with deadlocks • Discuss starvation • Examine several configurations of processors • Review classical problems of concurrent processes • Discuss how concurrent processes are handled by multiprocessors

  3. Process Management • It is important to note that a process is not a program: a process is a program in execution. • In a single-user system, the processor is busy only when the user is executing a job; at all other times it is idle. Processor management in this environment is simple. • When there are many programs on the system (multiprogramming), the processor must be allocated to each job in a fair and efficient manner, which can be a complex task. • Multiprogramming requires that the processor be allocated to each job or process for a period of time and deallocated at an appropriate time. • If the processor is deallocated during a program’s execution, it must be done in such a way that the program can be restarted later as easily as possible.

  4. Process management is one of the most important tasks in operating system design. • In a multiprocessing environment, algorithms are needed to resolve conflicts between processors and to ensure that events occur in the proper order even when they are carried out by several processes. This is known as process synchronization. • A lack of process synchronization can result in two extreme conditions: deadlock or starvation.

  5. A Lack of Process Synchronization Causes Deadlock or Starvation • Deadlock (“deadly embrace”) -- the situation where a group of processes blocks forever because each process is waiting for resources held by another process in the group. • The problem builds when the resources needed by a job are held by other jobs that are also waiting to run but cannot, because they in turn are waiting for other unavailable resources. The jobs come to a standstill. • The deadlock is complete if the remainder of the system comes to a standstill as well.

  6. Deadlock is more serious than indefinite postponement or starvation because it affects more than one job. • Because resources are being tied up, the entire system (not just a few programs) is affected. • It requires outside intervention (e.g., operators or users must terminate a job).

  7. Deadlocks can happen when several processes request, and hold on to, dedicated devices while other processes act in a similar manner.

  8. Deadlock Characterisation: The Four Necessary Conditions ALL four of these conditions must hold simultaneously for a deadlock to occur (a minimal code sketch follows below): Mutual exclusion -- the act of allowing only one process to have access to a dedicated resource. Resource holding -- the act of holding a resource and not releasing it, waiting for the other job to retreat. No preemption -- the lack of temporary reallocation of resources; once a job gets a resource, it can hold on to it for as long as it needs. Circular wait -- each process involved in the impasse is waiting for another to voluntarily release a resource so that at least one can continue: Process A waits for Process B, which waits for Process C, ... which waits for Process A.
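
An illustrative sketch, not part of the original slides: two Python threads acquire the same pair of locks in opposite order, so each holds one resource while waiting for the other. All four conditions hold at once, and running the script normally hangs -- which is the deadlock itself.

import threading
import time

lock_a = threading.Lock()   # dedicated resource A (mutual exclusion)
lock_b = threading.Lock()   # dedicated resource B (mutual exclusion)

def worker_1():
    with lock_a:            # holds A (resource holding, no preemption) ...
        time.sleep(0.1)
        with lock_b:        # ... and waits for B
            print("worker_1 finished")

def worker_2():
    with lock_b:            # holds B ...
        time.sleep(0.1)
        with lock_a:        # ... and waits for A -> circular wait with worker_1
            print("worker_2 finished")

t1 = threading.Thread(target=worker_1)
t2 = threading.Thread(target=worker_2)
t1.start(); t2.start()
t1.join(); t2.join()        # normally never returns: both threads are blocked forever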

  9. Let’s review the staircase example and identify the four conditions required for a deadlock. • When two people meet between landings, they cannot pass, because each step can hold only one person at a time. • Mutual exclusion, the act of allowing one person (process) to have access to a step (a dedicated resource), is the first condition for deadlock. • When two people meet on the stairs and each holds their ground and waits for the other to retreat, that is an example of resource holding (as opposed to resource sharing), the second condition for deadlock. • In this example, each step is dedicated to the climber (or the descender); it is allocated to its holder for as long as needed. This is called no preemption, the lack of temporary reallocation of resources, and is the third condition for deadlock. • These three lead to the fourth condition, circular wait, in which each person (or process) involved in the standoff is waiting for another to voluntarily release the step (or resource) so that at least one can continue on and eventually arrive at the destination.

  10. Modeling Deadlocks Using Directed Graphs (Holt, 1972) • Processes represented by circles. • Resources represented by squares. • Solid line from a resource to a process means that process is holding that resource. • Solid line from a process to a resource means that process is waiting for that resource. • Direction of arrow indicates flow. • If there’s a cycle in the graph then there’s a deadlock involving the processes and the resources in the cycle.
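
The cycle rule above can be made concrete with a small Python sketch (not from the slides): the resource-allocation graph is stored as adjacency lists and a depth-first search looks for a cycle.

def has_cycle(graph):
    """graph maps each node to the list of nodes its arrows point to."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited, on the current path, finished
    colour = {node: WHITE for node in graph}

    def visit(node):
        colour[node] = GREY
        for succ in graph.get(node, []):
            state = colour.get(succ, WHITE)
            if state == GREY:             # back edge: we returned to the current path
                return True
            if state == WHITE and visit(succ):
                return True
        colour[node] = BLACK
        return False

    return any(colour[node] == WHITE and visit(node) for node in graph)

# Example: P1 holds R1 and waits for R2, while P2 holds R2 and waits for R1.
graph = {"R1": ["P1"], "P1": ["R2"], "R2": ["P2"], "P2": ["R1"]}
print(has_cycle(graph))                   # True -> the nodes on the cycle are deadlocked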

  11. Resource Allocation Modelling Using Graphs. Nodes represent resources and processes; arcs represent either a resource being requested or a resource being allocated.

  12. Directed Graph Examples: Figure 5.7 (a), (b), and (c).

  13. Further directed graph examples: Figures 5.8 and 5.9.

  14. Figure 5.11 (a) and (b).

  15. Strategies for Handling Deadlocks There are three methods: • Prevent one of the four conditions from occurring. • Avoid the deadlock if it becomes probable. • Detect the deadlock when it occurs and recover from it gracefully.

  16. Prevention of Deadlock • To prevent a deadlock, the OS must eliminate one of the four necessary conditions. • The same condition cannot be eliminated from every resource. • Mutual exclusion is necessary in any computer system because some resources (memory, CPU, dedicated devices) must be exclusively allocated to one user at a time; where a resource requires exclusive access, this condition cannot be eliminated.

  17. Prevention of Resource Holding or No Preemption • Hold-and-wait: resource holding can be avoided by forcing each job to request, at creation time, every resource it will need to run to completion (a sketch of such all-or-nothing allocation follows below). This is inefficient: • 1) a process may be held up for a long time waiting for all of its resource requests to be filled, when it could have started with only some of them; • 2) resources allocated in advance may sit unused for long periods, yet cannot be used by other processes. • No preemption can be bypassed by allowing the OS to deallocate resources from jobs. • This works well if the state of the job can easily be saved and restored; • it works poorly if the preempted resource is a dedicated I/O device or a file in the middle of being modified.
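
Below is a minimal sketch, assuming a toy allocator class of my own (AllOrNothingAllocator is not from the slides), of how hold-and-wait can be prevented: a job either receives every resource it asked for in one step or waits while holding nothing.

import threading

class AllOrNothingAllocator:
    def __init__(self, resources):
        self._free = set(resources)
        self._cond = threading.Condition()

    def acquire_all(self, needed):
        with self._cond:
            while not set(needed) <= self._free:   # wait while holding nothing
                self._cond.wait()
            self._free -= set(needed)              # grant everything at once

    def release_all(self, held):
        with self._cond:
            self._free |= set(held)
            self._cond.notify_all()

allocator = AllOrNothingAllocator({"printer", "tape", "plotter"})

def job(name, needed):
    allocator.acquire_all(needed)                  # no hold-and-wait: all or nothing
    print(name, "running with", needed)
    allocator.release_all(needed)

threads = [threading.Thread(target=job, args=("J1", ["printer", "tape"])),
           threading.Thread(target=job, args=("J2", ["tape", "plotter"]))]
for t in threads: t.start()
for t in threads: t.join()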

  18. Prevention of Circular Wait • Circular wait can be bypassed if the OS prevents the formation of a circle. • This requires that jobs anticipate the order in which they will request resources (a lock-ordering sketch follows below). • The best order is difficult to determine. • As with hold-and-wait prevention, circular-wait prevention may be inefficient, slowing down processes and denying resource access unnecessarily.
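
One common way to impose such an order is to rank every resource and always acquire in ascending rank. The Python sketch below is illustrative only; the resource names and ranks are made up for the example.

import threading

RANK = {"disk": 1, "tape": 2, "printer": 3}        # hypothetical global ordering
LOCKS = {name: threading.Lock() for name in RANK}

def acquire_in_order(names):
    ordered = sorted(names, key=lambda n: RANK[n]) # ascending rank: no cycle can form
    for name in ordered:
        LOCKS[name].acquire()
    return ordered

def release(names):
    for name in reversed(names):
        LOCKS[name].release()

held = acquire_in_order(["printer", "disk"])       # actually taken as disk, then printer
print("holding:", held)
release(held)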

  19. Deadlock Detection • Deadlock prevention strategies are conservative: they solve the deadlock problem by limiting access to resources and by imposing restrictions on processes. • Deadlock detection strategies do not limit resource access or restrict process actions; requests are granted whenever possible. The OS periodically runs an algorithm to detect the circular-wait condition. Once a deadlock has been detected, some strategy is needed for recovery. Possible approaches are: 1. Abort all deadlocked processes. 2. Back up each deadlocked process to some previously defined checkpoint, and restart all of them. 3. Successively abort deadlocked processes until the deadlock no longer exists. 4. Successively preempt resources until the deadlock no longer exists.

  20. Avoidance • Deadlock avoidance allows the first three necessary conditions but makes judicious choices to ensure that the deadlock point is never reached. Two approaches to deadlock avoidance: • do not start a process if its demands might lead to deadlock; • do not grant an incremental resource request to a process if the allocation might lead to deadlock. • Dijkstra’s Banker’s Algorithm (1965) is used to regulate resource allocation to avoid deadlock. • Safe state -- there exists a safe sequence of all processes in which each can obtain the resources it needs. • Unsafe state -- does not necessarily lead to deadlock, but it does indicate that the system is an excellent candidate for one.

  21. Banker’s Algorithm • Based on a bank with a fixed amount of capital that operates on the following principles: • no customer will be granted a loan exceeding the bank’s total capital; • all customers will be given a maximum credit limit when opening an account; • no customer will be allowed to borrow over that limit; • the sum of all loans won’t exceed the bank’s total capital. • The OS (the bank) must be sure never to satisfy a request that moves it from a safe state to an unsafe one. • A state is safe if the jobs can be served in some order in which, at each step, the job with the smallest remaining resource need requires no more than the number of available resources (a safety-check sketch follows below).
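
The sketch below is an assumed, simplified Python version of the safety check at the heart of the Banker's Algorithm, with made-up matrices for a single resource type; it is not the slides' own code.

def is_safe(available, max_claim, allocated):
    n = len(max_claim)
    need = [[m - a for m, a in zip(max_claim[i], allocated[i])] for i in range(n)]
    work = list(available)
    finished = [False] * n
    while True:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # Job i can run to completion and then return everything it holds.
                work = [w + a for w, a in zip(work, allocated[i])]
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)            # safe only if every job could finish

# One resource type (the bank's capital): 10 units in total, 2 currently available.
print(is_safe(available=[2],
              max_claim=[[4], [6], [8]],
              allocated=[[2], [3], [3]]))   # True: the sequence J1 -> J2 -> J3 is safe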

  22. A Bank’s Safe and Unsafe States.

  23. Starvation • Starvation -- the result of conservative allocation of resources, in which a single job is prevented from executing because it is kept waiting for resources that never become available. • Illustrated by “the dining philosophers”, Dijkstra (1968). • Starvation can be avoided with an algorithm designed to detect starving jobs by tracking how long each job has been waiting for resources (aging).

  24. Dining Philosophers Problem

  25. Five philosophers are sitting at a round table, each deep in thought, and in the center lies a bowl of spaghetti that is accessible to everyone. • There are forks on the table, one between each pair of philosophers, as illustrated in Fig. 6.11. Local custom dictates that each philosopher must use two forks, the forks on either side of the plate, to eat the spaghetti, but there are only five forks, not the ten it would take for all five thinkers to eat at once. • When they sit down, Philosopher 1 (P1) is the first to take the two forks on either side of his plate and begins to eat; P3 does likewise. Now P2 decides to begin the meal but is unable to start because neither of his forks is available. • Soon P3 finishes eating, puts down his two forks and resumes pondering.

  26. Should the fork beside him that is now free be allocated to the hungry philosopher? Although it is tempting, such a move would set a bad precedent: if the philosophers are allowed to tie up resources with only the hope that the other required fork will become available, the dinner could easily slip into an unsafe state; it would be only a matter of time before each philosopher held a single fork and nobody could eat. So the forks are allocated to a philosopher only when both are available at the same time. • P0 and P4 are quietly thinking and P1 is still eating when P3, who should be full, decides to eat some more; because the resources are free, he is able to take the forks again. Soon thereafter, P1 finishes and releases his forks, but P2 is still not able to eat because P3 is using one of them. This scenario could continue forever: as long as P1 and P3 alternate their use of the available resources, P2 must wait. P1 and P3 can eat any time they wish, while P2 starves, only inches from nourishment.

  27. In a computer environment, the resources are like the forks and the competing processes are like the dining philosophers. If the resource manager does not watch for starving processes and jobs and plan for their eventual completion, they could remain in the system forever, waiting for the right combination of resources. • To address this problem, an algorithm designed to detect starving jobs can be implemented; it tracks how long each job has been waiting for resources (see the aging sketch below). Once starvation has been detected, the system can block new jobs until the starving jobs have been satisfied. • This algorithm must be monitored closely: if the check is run too often, new jobs will be blocked too frequently and throughput will be diminished; if it is not run often enough, starving jobs will remain in the system for an unacceptably long time.
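
A minimal sketch of such an aging check, assuming an arbitrary threshold and a toy WaitQueue class of my own (neither is from the slides):

import time

STARVATION_THRESHOLD = 5.0                 # seconds; value chosen only for the example

class WaitQueue:
    def __init__(self):
        self._waiting = {}                 # job name -> moment it began waiting

    def enqueue(self, job):
        self._waiting[job] = time.monotonic()

    def dequeue(self, job):
        self._waiting.pop(job, None)

    def starving_jobs(self, now=None):
        now = time.monotonic() if now is None else now
        return [job for job, since in self._waiting.items()
                if now - since > STARVATION_THRESHOLD]

q = WaitQueue()
q.enqueue("P2")                            # the philosopher who never gets both forks
# Pretend ten seconds have passed; P2 is now flagged so new jobs can be held back.
print(q.starving_jobs(now=time.monotonic() + 10))   # ['P2']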

  28. Concurrent Processes

  29. Introduction - Concurrent Processes • Multiprocessing systems have more than one CPU. • The concurrency problems that occur in single-processor systems apply to multiple processes in general: • a single processor with two or more processes; • more than one processor, each with multiple processes.

  30. What Is Parallel Processing? • Parallel processing • Multiprocessing • Two or more processors operate in unison • Two or more CPUs execute instructions simultaneously • Processor Manager • Coordinates activity of each processor • Synchronizes interaction among CPUs

  31. Parallel processing development • Enhances throughput • Increases computing power • Benefits • Increased reliability: more than one CPU, so if one processor fails, the others take over (not simple to implement) • Faster processing: instructions processed in parallel, two or more at a time

  32. Faster instruction processing methods • CPU allocated to each program or job • CPU allocated to each working set or parts of it • Individual instructions subdivided, with each subdivision processed simultaneously (concurrent programming) • Two major challenges: connecting processors into configurations and orchestrating processor interaction • Example: a six-step information retrieval system, where synchronization is key

  33. Typical Multiprocessing Configurations • Much depends on how the multiple processors are configured within the system. • Three types of typical configurations: • Master/slave • Loosely coupled • Symmetric

  34. Master/Slave Configuration • An asymmetric multiprocessing system: essentially a single-processor system with additional slave processors, each managed by the primary master processor. • Master processor responsibilities: • manages the entire system • maintains the status of all processors • performs storage management activities • schedules work for the other processors • executes all control programs

  35. Advantage • Simplicity • Disadvantages • Reliability is no higher than in a single-processor system: if the master processor fails, the entire system fails. • Potentially poor resource usage: if a slave processor is free while the master processor is busy, the slave must wait until the master becomes free and can assign work to it. • Increased number of interrupts: the master processor is interrupted every time a slave processor needs OS intervention, which creates long queues at the master-processor level.

  36. Loosely Coupled Configuration • Several complete computer systems, each with its own processor that controls its own resources and maintains its own commands and I/O management tables. • The difference from independent single-processor systems: each processor communicates and cooperates with the others and has access to global tables, and several requirements and policies govern job scheduling. • If a single processor fails, the others continue to work independently, but the failure can be difficult to detect.

  37. Symmetric Configuration • Best implemented if the processors are of the same type. • Advantages (over the loosely coupled configuration): • more reliable • uses resources effectively • can balance loads well • can degrade gracefully in a failure situation • It is the most difficult to implement, because it requires well-synchronized processes to avoid races and deadlocks.

  38. Decentralized process scheduling • A single copy of the operating system and a global table listing each process and its status, which every processor can access. • Interrupt processing: the interrupted processor updates the corresponding process list and runs another process. • More conflicts arise because several processors may access the same resource at the same time. • Process synchronization: algorithms that resolve conflicts between processors.

  39. Process Cooperation • Several processes work together to complete a common task. • Each case requires both mutual exclusion and synchronization; the absence of either results in problems. • Examples: the producers and consumers problem and the readers and writers problem. • Each case can be implemented using semaphores (a producer-consumer sketch follows below).
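
Here is a minimal sketch, assuming a bounded buffer of five slots, of the producers and consumers problem coordinated with semaphores (the sizes and item counts are illustrative only):

import threading
from collections import deque

BUFFER_SIZE = 5
buffer = deque()
empty_slots = threading.Semaphore(BUFFER_SIZE)   # counts free slots
full_slots = threading.Semaphore(0)              # counts items ready to consume
mutex = threading.Lock()                         # mutual exclusion on the buffer

def producer():
    for item in range(10):
        empty_slots.acquire()        # wait for a free slot
        with mutex:
            buffer.append(item)
        full_slots.release()         # signal: one more item available

def consumer():
    for _ in range(10):
        full_slots.acquire()         # wait for an item
        with mutex:
            item = buffer.popleft()
        empty_slots.release()        # signal: one more free slot
        print("consumed", item)

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads: t.start()
for t in threads: t.join()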

  40. Classical Problems of Synchronization • Readers and Writers Problem • Dining-Philosophers Problem

  41. Readers and Writers • An object is shared among many threads, each belonging to one of two classes: • Readers: read the data, never modify it. • Writers: read the data and modify it. • The problem consists of readers and writers that share a data resource: the readers only want to read from the resource, while the writers want to write to it. • There is no problem if two or more readers access the resource simultaneously. • However, if a writer and a reader, or two writers, access the resource simultaneously, the result becomes indeterminate. • Therefore the writers must have exclusive access to the resource.

  42. Readers and Writers • A practical example of a readers and writers problem is an airline reservation system: a huge database with many processes that read and write the data. • Reading information from the database will not cause a problem, since no data is changed. • The problem lies in writing information to the database: if no constraints are put on access, the data may change at any moment. • By the time a reading process displays the result of a request to the user, the actual data in the database may have changed. Suppose, for instance, a process reads the number of available seats on a flight, finds a value of one, and reports it to the customer; before the customer has a chance to make the reservation, another process books a seat for another customer, changing the number of available seats to zero.
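
The sketch below is an illustrative readers-preference solution in Python (the names and seat counts are invented for the example, not taken from the slides): any number of readers may inspect the seat count together, while a writer gets exclusive access to change it.

import threading

resource = threading.Lock()      # held by a writer, or on behalf of all active readers
reader_count_lock = threading.Lock()
reader_count = 0
seats_available = 1              # the shared data, e.g. seats left on a flight

def reader(name):
    global reader_count
    with reader_count_lock:
        reader_count += 1
        if reader_count == 1:    # the first reader locks writers out
            resource.acquire()
    print(name, "sees", seats_available, "seat(s)")
    with reader_count_lock:
        reader_count -= 1
        if reader_count == 0:    # the last reader lets writers back in
            resource.release()

def writer(name):
    global seats_available
    with resource:               # exclusive access while booking the seat
        seats_available -= 1
        print(name, "booked a seat,", seats_available, "left")

threads = [threading.Thread(target=reader, args=("R1",)),
           threading.Thread(target=writer, args=("W1",)),
           threading.Thread(target=reader, args=("R2",))]
for t in threads: t.start()
for t in threads: t.join()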

  43. Concurrent Programming • Concurrent processing system • One job uses several processors • Executes sets of instructions in parallel • Requires programming language and computer system support

  44. Application of Concurrent Programming • In monoprogramming languages, instructions are executed one at a time; this is sufficient for most computational purposes, easy to implement, and fast enough for most users. • By using a language that allows concurrent processing, arithmetic expressions can be processed differently.

  45. Application of Concurrent Programming • When operations are performed at the same time, we increase computation speed but also increase the complexity of the programming language and the hardware. • Explicit parallelism: the programmer states which instructions can be executed in parallel (see the sketch below). • Implicit parallelism: automatic detection by the compiler of the instructions that can be performed in parallel.
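
A minimal Python sketch of explicit parallelism (the expression and values are assumed for illustration): the programmer marks the two independent subexpressions of A = 3*B*C + 4*(D+E) and evaluates them in parallel before combining the results.

from concurrent.futures import ThreadPoolExecutor

B, C, D, E = 2, 5, 1, 3

with ThreadPoolExecutor(max_workers=2) as pool:
    term1 = pool.submit(lambda: 3 * B * C)      # independent of term2
    term2 = pool.submit(lambda: 4 * (D + E))    # independent of term1
    A = term1.result() + term2.result()         # combining must wait for both

print(A)                                        # 46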

  46. Dining Philosopher Problem • The Dining Philosophers problem is intended to illustrate the complexities of managing shared state in a multithreaded environment. • Here's the problem: • At a round table sit five philosophers who alternate between thinking and eating from a large bowl of rice at random intervals.
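
To make the problem concrete, here is an illustrative Python sketch. It is not the slides' own solution (which grants a philosopher both forks only when both are free); it instead uses the classical fix of acquiring the lower-numbered fork first, a fixed resource ordering that prevents the circular wait.

import threading
import random
import time

N = 5
forks = [threading.Lock() for _ in range(N)]    # one fork between each pair of plates

def philosopher(i, meals=3):
    left, right = i, (i + 1) % N
    first, second = min(left, right), max(left, right)   # fixed global acquisition order
    for _ in range(meals):
        time.sleep(random.uniform(0, 0.01))              # thinking
        with forks[first]:
            with forks[second]:
                print(f"philosopher {i} is eating")      # holds both forks

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()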
