Chapter 6, Process Synchronization, Overheads, Part 1

  1. Chapter 6, Process Synchronization, Overheads, Part 1

  2. Fully covering Chapter 6 takes a lot of overheads. • Not all of the sections in the book are even covered. • Only the first sections are covered in these overheads, Part 1 • These sections are listed on the next overhead • The rest of the sections are covered in the second overheads file, Part 2

  3. 6.1 Background • 6.2 The Critical Section Problem • 6.3 Peterson’s Solution • 6.4 Synchronization Hardware • 6.5 Semaphores

  4. 6.1 Background • Cooperating processes can affect each other • This may result from message passing • It may result from shared memory space • The general case involves concurrent access to shared resources

  5. This section illustrates how uncontrolled access to a shared resource can result in inconsistent state • In other words, it shows what the concurrency control or synchronization problem is

  6. The following overheads show: • How producer and consumer threads may both change the value of a variable, count • How the single increment or decrement of count is not atomic in machine code • How the interleaving of machine instructions can give an incorrect result

  7. High level Producer Code • while(count == BUFFER_SIZE) • ; // no-op • buffer[in] = item; • in = (in + 1) % BUFFER_SIZE; • ++count;

  8. High Level Consumer Code • while(count == 0) • ; // no-op • item = buffer[out]; • out = (out + 1) % BUFFER_SIZE; • --count;
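
To make the pseudocode on the last two overheads concrete, here is a minimal Java sketch of the same producer/consumer pair, left deliberately unsynchronized. The class and method names (BoundedBuffer, insert, remove) are illustrative, not from the book; note too that in real Java the shared fields would also need to be volatile for the busy-wait loops to reliably see updates. This sketch simply mirrors the pseudocode.

    // Deliberately UNSYNCHRONIZED; mirrors the pseudocode above.
    public class BoundedBuffer {
        private static final int BUFFER_SIZE = 10;
        private final int[] buffer = new int[BUFFER_SIZE];
        private int in = 0;    // next free slot
        private int out = 0;   // next occupied slot
        private int count = 0; // shared and unprotected: the race lives here

        // Producer: busy-wait while the buffer is full, then insert.
        public void insert(int item) {
            while (count == BUFFER_SIZE)
                ; // no-op
            buffer[in] = item;
            in = (in + 1) % BUFFER_SIZE;
            ++count;           // not atomic: load, add, store
        }

        // Consumer: busy-wait while the buffer is empty, then remove.
        public int remove() {
            while (count == 0)
                ; // no-op
            int item = buffer[out];
            out = (out + 1) % BUFFER_SIZE;
            --count;           // not atomic: load, subtract, store
            return item;
        }
    }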

  9. Machine Code for Incrementing count • register1 = count; • register1 = register1 + 1; • count = register1;

  10. Machine Code for Decrementing count • register2 = count; • register2 = register2 - 1; • count = register2;

  11. The following overhead shows an interleaving of machine instructions which leads to a lost increment

  12. Let the initial value of count be 5 • S0: Producer executes register1 = count (register1 = 5) • S1: Producer executes register1 = register1 + 1 (register1 = 6) • Context switch • S2: Consumer executes register2 = count (register2 = 5) • S3: Consumer executes register2 = register2 - 1 (register2 = 4) • Context switch • S4: Producer executes count = register1 (count = 6) • Context switch • S5: Consumer executes count = register2 (final value of count = 4)

  13. The point is that you started with a value of 5. • Then two processes ran concurrently. • One attempted to increment the count. • The other attempted to decrement the count. • 5 + 1 - 1 should equal 5 • However, due to the synchronization problem, the final value of count was 4, not 5.
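
The lost update is easy to reproduce on a real machine. The following self-contained Java sketch (the names RaceDemo, producer, and consumer are mine, not the book's) runs one thread that increments a shared counter a million times and another that decrements it a million times. With no synchronization, a typical run prints something other than the expected 0.

    public class RaceDemo {
        static int count = 0; // shared, unprotected

        public static void main(String[] args) throws InterruptedException {
            final int N = 1_000_000;
            Thread producer = new Thread(() -> {
                for (int i = 0; i < N; i++) ++count; // load, add, store
            });
            Thread consumer = new Thread(() -> {
                for (int i = 0; i < N; i++) --count; // load, subtract, store
            });
            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
            System.out.println("final count = " + count); // expected 0; usually isn't
        }
    }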

  14. Term: • Race Condition. • Definition: • This is the general O/S term for any situation where the order of execution of various actions affects the outcome • For our purposes it refers specifically to the case where the outcome is incorrect, i.e., where inconsistent state results

  15. The derivation of the term “race” condition: • Execution is a “race”. • In interleaved actions, whichever sequence finishes first determines the final outcome. • Note in the concrete example that the one that finished first “lost”.

  16. Process synchronization refers to the tools that can be used with cooperating processes to make sure that during concurrent execution they access shared resources in such a way that a consistent state results • In other words, it’s a way of enforcing a desired interleaving of actions, or preventing an undesired interleaving of actions.

  17. Yet another way to think about this is that process synchronization reduces concurrency somewhat, because certain sequences that might otherwise happen are not allowed, and at least a partial sequential ordering of actions may be required

  18. 6.2 The Critical Section Problem • Term: • Critical section. • Definition: • A segment of code where resources common to a set of threads are being manipulated • Note that the definition is given in terms of threads because it will be possible to concretely illustrate it using threaded Java code

  19. Alternative definition of a critical section: • A segment of code where access is regulated. • Only one thread at a time is allowed to execute in the critical section • This makes it possible to avoid conflicting actions which result in inconsistent state

  20. Critical section definition using processes: • Let there be n processes, P0, …, Pn-1, that share access to common variables, data structures, or resources • Any segments of code where they access shared resources are critical sections • No two processes can be executing in their critical section at the same time

  21. The critical section problem is to design a protocol that allows processes to cooperate • In other words, it allows them to run concurrently, but it prevents breaking their critical sections into parts and interleaving the execution of those parts

  22. Once again recall that this will ultimately be illustrated using threads • In that situation, no two threads may be executing in the same critical section of code at the same time • For the purposes of thinking about the problem, the general structure and terminology of a thread with a critical section can be diagrammed in this way:

  23. while(true) • { • entry section • // The synchronization entry protocol is implemented here. • critical section • // This section is protected. • exit section • // The synchronization exit protocol is implemented here. • remainder section • // This section is not protected. • }
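
To see the four sections in running code, here is one possible Java rendering of the diagram, using java.util.concurrent.locks.ReentrantLock to stand in for the entry and exit protocols. The mapping is my illustration; the chapter develops its own protocols before reaching library locks like this one.

    import java.util.concurrent.locks.ReentrantLock;

    public class Worker implements Runnable {
        // One lock shared by every Worker thread.
        private static final ReentrantLock lock = new ReentrantLock();

        public void run() {
            while (true) {
                lock.lock();          // entry section
                try {
                    doCriticalWork(); // critical section: protected
                } finally {
                    lock.unlock();    // exit section
                }
                doRemainderWork();    // remainder section: not protected
            }
        }

        private void doCriticalWork()  { /* manipulate shared resources */ }
        private void doRemainderWork() { /* private, unshared work */ }
    }

The try/finally is the Java idiom that guarantees the exit protocol runs even if the critical section throws an exception.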

  24. Note the terminology: • Entry section • Critical section • Exit section • Remainder section • These terms for referring to the parts of a concurrent process will be used in the discussions which follow

  25. A correct solution to the critical section problem has to meet these three conditions: • Mutual exclusion • Progress • Bounded waiting • In other words, an implementation of a synchronization protocol has to have these three characteristics in order to be correct.

  26. Mutual exclusion • Definition of mutual exclusion: • If process Pi is executing in its critical section, no other process can be executing in its critical section • Mutual exclusion is the heart of concurrency control. • However, concurrency control is not correct if the protocol “locks up” and the program can’t produce results. • That’s what the additional requirements, progress and bounded waiting, are about.
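
As a preview of what mutual exclusion buys, the counter race shown earlier disappears if the increment and decrement are wrapped in Java's synchronized methods (my choice of tool here; the chapter's own mechanisms come later). The JVM admits at most one thread at a time into the synchronized methods of a given object, so no update is lost.

    public class SafeCounter {
        private int count = 0;

        // At most one thread at a time may be inside these methods
        // (per SafeCounter instance), so each update completes atomically.
        public synchronized void increment() { ++count; }
        public synchronized void decrement() { --count; }
        public synchronized int  value()     { return count; }
    }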

  27. Progress • Definition of progress: • If no process is in its critical section and some processes wish to enter, only those not executing in their remainder sections can participate in the decision • It should not be surprising if you find this statement of progress somewhat mystifying. • The idea requires a little explanation and may not really be clear until an example is shown.

  28. Progress explained • For the sake of discussion, let all processes be structured as infinite loops • If mutual exclusion has been implemented, a process may be at the top of the loop, waiting for the entry section to allow it into the critical section • The process may be involved in the entry or exit protocols • Barring these possibilities, the process can either be in its critical section or in its remainder section

  29. The first three possibilities, waiting to enter, entering, or exiting, are borderline cases. • Progress is most easily understood by focusing on the question of processes either in the critical section or in the remainder section • The premise of the progress condition is that no process is in its critical section

  30. Some processes may be in their remainder sections • Others may be waiting to enter the critical section • But the important point is that the critical section is available • The question is how to decide which process to allow in, assuming some processes do want to enter the critical section

  31. Progress states that a process that is happily running in its remainder section has no part in the decision of which process to allow into the critical section. • A process in the remainder section can’t stop another process from entering the critical section. • A process in the remainder section also cannot delay the decision.

  32. The decision on which enters can only take into account those processes that are currently waiting to get in. • This sounds simple enough, but it’s not really clear what practical effect it has on the entry protocol. • An example will be given later that violates the progress condition. • Hopefully this will make it clearer what the progress condition really means.
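
As a preview of that kind of example (this sketch is mine, and may not be the exact one the author has in mind): strict alternation on a shared turn variable achieves mutual exclusion but violates progress. If it is thread 0's turn while thread 0 is off in its remainder section, thread 1 is kept out of an empty critical section by a thread that isn't even asking to enter.

    // Two threads, ids 0 and 1, call enter(id) before their critical
    // sections and exit(id) after. Mutual exclusion holds; progress fails.
    public class StrictAlternation {
        private volatile int turn = 0; // whose turn it is to enter

        public void enter(int id) {
            while (turn != id)
                ; // busy-wait even if the other thread doesn't want in
        }

        public void exit(int id) {
            turn = 1 - id; // hand the turn to the other thread
        }
    }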

  33. Bounded waiting • Definition of bounded waiting: • There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections between the time a given process makes a request to enter its critical section and the time that request is granted

  34. Bounded waiting explained • Whatever algorithm or protocol is implemented for allowing processes into the critical section, it cannot allow starvation • Granting access to a critical section is reminiscent of scheduling. • Eventually, everybody has to get a chance

  35. For the purposes of this discussion, it is assumed that each process is executing at non-zero speed, although they may differ in their speeds • Bounded waiting is expressed in terms of “a number of times”. • No concrete time limit can be given, but the result is that allowing a thread into its critical section can’t be postponed indefinitely
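
For a library-level taste of the idea (my analogy, not the book's): a Java ReentrantLock constructed as fair hands the lock to waiting threads in roughly arrival order, so once a thread has requested entry, the number of threads that can get in ahead of it is bounded, which is the flavor of the bounded-waiting condition.

    import java.util.concurrent.locks.ReentrantLock;

    public class FairEntry {
        // true = fair: waiting threads acquire in approximately FIFO order.
        private static final ReentrantLock lock = new ReentrantLock(true);

        public static void criticalSection(Runnable work) {
            lock.lock();    // queued behind earlier requesters; no starvation
            try {
                work.run(); // at most one thread runs here at a time
            } finally {
                lock.unlock();
            }
        }
    }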

  36. More about the critical section problem • The critical section problem is unavoidable in operating systems • The underlying idea already came up in chapter 5 in the discussion of preemption and interrupt handling • There are pieces of operating system code that manipulate shared structures like waiting and ready queues. • No more than one process at a time can be executing such code because inconsistent O/S state could result

  37. You might try to avoid the critical section problem by disallowing cooperation among user processes (although this diminishes the usefulness of multi-programming) • Such a solution would be very limiting for application code. • It doesn’t work for system code. • System processes have to be able to cooperate in their access to shared structures.

  38. A more complete list of shared resources that multiple system processes contend for would include scheduling queues, I/O queues, lists of memory allocation, lists of processes, etc. • The bottom line is that code that manipulates these resources has to be in a critical section • Stated briefly: There has to be mutual exclusion between different processes that access the resources (with the additional assurances of progress and bounded waiting)

  39. The critical section problem in the O/S, elaborated • You might try to get rid of the critical section problem by making the kernel monolithic (although this goes against the grain of layering/modular design) • The motivation behind this would be the idea that there would only be one O/S process or thread, not many processes or threads.

  40. Even so, if the architecture is based on interrupts, whether the O/S is modular or not, one activation of the O/S can be interrupted and set aside, while another activation is started as a result of the interrupt. • The idea behind “activations” can be illustrated with interrupt handling code specifically.

  41. One activation of the O/S may be doing one thing. • When the interrupt arrives a second activation occurs, which will run a different part of the O/S—namely an interrupt handler

  42. The point is that any activation of the O/S, including an interrupt handler, has the potential to access and modify a shared resource like the ready queue • It doesn’t matter whether the O/S is modular or monolithic • Each activation would have access to shared resources

  43. Do you own your possessions, or do your possessions own you? • The situation can be framed in this way: • You think of the O/S as “owning” and “managing” the processes, whether system or user processes • It turns out that in a sense, the processes own the O/S.

  44. Even if the processes in question are user processes and don’t access O/S resources directly, user requests for service and the granting of those requests by the O/S cause changes to the O/S’s data structures • The O/S may create the processes, but the processes can then be viewed as causing shared access to common O/S resources

  45. This is the micro-level view of the idea expressed at the beginning of the course, that the O/S is the quintessential service program • Everything that it does can ultimately be traced to some application request • Applications don’t own the system resources, but they are responsible for O/S behavior which requires critical section protection

  46. A multiple-process O/S • The previous discussion was meant to emphasize that even a monolithic O/S has concurrency issues • As soon as multiple processes are allowed, whether an O/S is a microkernel or not, it is reasonable to implement some of the system functionality in different processes

  47. At that point, whether the O/S supports multi-programming or not, the O/S itself has concurrency issues with multiple processes of its own • Once you take the step of allowing >1 concurrent process, whether user or system processes, the concurrency problem, i.e., the critical section problem, arises

  48. What all this means to O/S code • O/S code can’t allow race conditions to arise • The O/S can’t be allowed to enter an inconsistent state • O/S code has to be written so that access to shared resources is done in critical sections • No two O/S processes can be in a critical section at the same time

  49. The critical section problem has to be solved in order to implement a correct O/S. • If the critical section problem can be solved in O/S code, then the solution tools can also be used when considering user processes or application code which exhibit the characteristics of concurrency and access to shared resources

  50. Dealing with critical sections in the O/S • Keep in mind: • A critical section is a sequence of instructions that has to be run atomically, without interleaving the executions of >1 process • Pre-emptive scheduling → a new process can be scheduled before the currently running one finishes
