
Inter Process Synchronization and Communication

Presentation Transcript


    1. Inter Process Synchronization and Communication CT213 – Computing Systems Organization

    2. Content Parallel computation Process interaction Processes unaware of each other Processes indirectly aware of each other Processes directly aware of each other Critical sections Mutual exclusion Software approaches Dekker Algorithm Peterson’s Algorithm Hardware approaches Interrupt disabling ISA modification Semaphores Mutual exclusion with semaphores Producer Consumer problems Semaphores implementation

    3. Parallel Computation Is made up of multiple, independent parts that are able to execute simultaneously, using one part per process or thread In some cases, the different parts are defined by one program, but in general they are defined by multiple programs; Cooperation is achieved through logically shared memory that is used to share information and to synchronize the operation of the constituent processes The operating system should provide at least a base-level mechanism to support sharing and synchronization among a set of processes

    4. Operating System Concerns Keep track of active processes – done using the process control block Allocate and de-allocate resources for each active process (processor time, memory, files, I/O devices) Protect data and resources against unwanted interference from other processes Programming errors made in one process should not affect the stability of other processes in the system The result of a process must be independent of the speed of execution of other concurrent processes This includes process interaction and synchronization and is the subject of this presentation

    5. Process Interaction Processes unaware of each other Independent processes that are not intended to work together; e.g. two independent applications want to access the same printer or the same file; the operating system must regulate these accesses; these processes exhibit competition Processes indirectly aware of each other Processes that are not aware of each other by knowing each other’s process ID, but they are sharing some object (such as an I/O buffer); such processes exhibit cooperation Processes directly aware of each other These are processes that are able to communicate with each other by means of process IDs; they are usually designed to work jointly on some activity; these processes exhibit cooperation In reality, the conditions are not as clear-cut as suggested by this slide; rather, several processes may exhibit aspects of both cooperation and competition.

    6. Competition Among Processes for Resources Two or more processes need to access a resource Each process is not aware of the existence of the others Each process should leave the state of the resource it uses unaffected (I/O devices, memory, processor, etc.) Execution of one process may affect the execution of the others When two processes wish to access a single resource, one process will be allocated the resource and the other one waits (therefore a slowdown); in the extreme case, the waiting process may never get access to the resource and therefore never terminate successfully Control problems: Mutual exclusion / critical sections Only one program at a time is allowed in its critical section, e.g. only one process at a time is allowed to send commands to the printer Deadlock P1 and P2 with R1 and R2; each process needs access to both resources to perform part of its function; R1 gets allocated to P2 and R2 gets allocated to P1 … both processes will wait indefinitely for the resources … deadlock Starvation P1, P2, P3 and resource R … P1 and P3 successively take access to resource R … P2 may never get access to it, even if there is no deadlock situation … starvation Control of competition involves the operating system.

    7. Cooperation Among Processes by Sharing Processes that interact with other processes without being explicitly aware of them, e.g. multiple processes that have access to global variables, shared files or databases The control mechanism must ensure the integrity of the shared data Control problems: Deadlock Starvation Mutual exclusion Data coherence Data items are accessed in two modes: read and write Only writing to the shared resources must be mutually exclusive Critical sections are used to provide data integrity The difference from the previous case is that the data can be accessed in two modes, read and write, and only the writing operations must be mutually exclusive.

    8. Cooperation Among Processes by Communication When processes cooperate by communication, the various processes participate in a common effort that links all of them Communication provides a way to synchronize or coordinate the various activities Communication can be characterized by sending/receiving messages of some sort Mutual exclusion is not a control requirement since nothing is shared between processes in the act of passing messages Control problems: Deadlock is possible Each process waiting for a message from the other process Starvation is possible Two processes sending messages to each other while another process waits for a message

    9. Example of Communication by Sharing problems Process 1, KEYBOARD: Services the keyboard interrupts The read characters are placed in a keyboard buffer Process 2, DISPLAY: Displays the content of the buffer on the monitor Shared resources: Input buffer A character counter of how many characters are in the buffer and not yet displayed A set of protocols and mechanisms is required for: Keeping the integrity and consistency of the resources Providing serial access to certain resources Serial access to a resource – one process and only one has control of the resource at a given moment in time

    10. KEYBOARD and DISPLAY processes If the KEYBOARD process is interrupted after instruction “1” (its registers (context) are saved and restored when it gains control of the CPU back) and control is passed to the DISPLAY process, the value of the counter variable will be altered, and the two processes will continue to function improperly from this time forward … Causes of the malfunction: The presence of two copies of the same counter variable, one in memory and one in AC, with different values Parallel execution of the two processes Situations where two or more processes are reading or writing some shared data and the final result depends on who runs precisely when are called race conditions
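
    The slide's code is not reproduced in this transcript, so the following is a minimal sketch of the race it describes; the names keyboard_isr(), display_task(), buffer and count are illustrative assumptions, not the original code. It shows why "count = count + 1" is not atomic: it is a load into a register (the AC on the slide), an add, and a store.

    #define BUF_SIZE 128

    static char buffer[BUF_SIZE];
    static volatile int count = 0;   /* characters typed but not yet displayed */
    static int in = 0, out = 0;

    void keyboard_isr(char c)        /* Process 1: runs on a keyboard interrupt */
    {
        buffer[in] = c;
        in = (in + 1) % BUF_SIZE;
        count = count + 1;           /* load count, add 1, store back: three steps */
    }

    void display_task(void)          /* Process 2: drains the buffer to the screen */
    {
        while (count > 0) {
            /* putchar(buffer[out]); */
            out = (out + 1) % BUF_SIZE;
            count = count - 1;       /* the same load/modify/store pattern */
        }
    }

    If Process 1 is preempted after loading count into a register and Process 2 then updates count in memory, Process 1 later stores back a stale value: the two copies of the counter diverge, which is exactly the race condition described above.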

    11. Critical Sections Are parts of the code (belonging to a process) that during execution have exclusive control over a system resource A critical section has a well-defined beginning and end During the execution of the critical section, the process can’t be interrupted with control of the processor given to another process that is using the same system resource At a given moment, there is just one active critical section corresponding to a given resource A critical section is executed in mutual exclusion mode Critical section examples: Updating the value of a global shared variable Modifying a shared database record Printing to a shared printer

    12. Mutual Exclusion If we could arrange matters so no two processes were ever in their critical regions at the same time, we could avoid race conditions This requirement avoids race conditions, but is not sufficient for having parallel processes cooperate correctly and efficiently using shared data. Four conditions to achieve good results: No two processes may be simultaneously inside their critical regions (have mutual exclusion) No assumption may be made about speeds and number of CPUs (independence of speed and number of processors) No process outside critical section may block other processes (correct functionality is to be guaranteed if one of the processes is abnormally terminated outside of the critical section) No process would have to wait forever to enter its critical section (the access in the critical section is guaranteed in a finite time)

    13. Mutual Exclusion While one process is busy updating shared memory (or modifying a shared resource) in its critical region, no other process will enter its critical region and cause trouble There are a number of different approaches: Software approaches Dekker Algorithm Peterson’s Algorithm Hardware approaches Interrupt disabling ISA modification

    14. Dekker’s Algorithm – First Attempt turn – a global memory location that both processes can access; each process examines the value of turn, and if it is equal to that process’s number, the process may proceed to its critical section This procedure is known as busy waiting, since the waiting process does nothing productive but still occupies the processor while waiting for access to the critical section Mutual exclusion conditions: No two processes may be simultaneously inside their critical regions (mutual exclusion) – Satisfied No assumption may be made about speeds and number of CPUs (independence of speed) – Unsatisfied: the processes can only execute alternately, so the rhythm of execution is dictated by the slower process No process outside its critical section may block other processes (correct functionality must be guaranteed if one of the processes is abnormally terminated) – Unsatisfied: abnormal termination of one process blocks the other No process should have to wait forever to enter its critical section (access to the critical section is guaranteed in finite time) – Satisfied
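
    The slide's listing is not included in the transcript; below is a minimal sketch of the usual first attempt with a single shared turn variable (identifiers are illustrative, and a real implementation would also need atomic accesses and memory barriers on modern hardware).

    static volatile int turn = 0;        /* whose turn it is: process 0 or process 1 */

    void proc0(void)
    {
        for (;;) {
            while (turn != 0)            /* busy wait until it is our turn */
                ;
            /* critical section */
            turn = 1;                    /* hand the turn to the other process */
            /* remainder section */
        }
    }

    void proc1(void)
    {
        for (;;) {
            while (turn != 1)
                ;
            /* critical section */
            turn = 0;
            /* remainder section */
        }
    }

    The strict alternation visible here is what makes condition 2 fail: proc0 cannot enter twice in a row, so it runs at the pace of proc1, and if proc1 terminates outside its critical section without flipping turn, proc0 blocks forever.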

    15. Dekker’s Algorithm – Second Attempt The problem with the first attempt is that it stores the name of the process that may enter its critical section, when in fact we need state information about both processes Each process has its own key to the critical section, so if one process fails, the other can still access the critical section Mutual exclusion conditions: No two processes may be simultaneously inside their critical regions (mutual exclusion) – Unsatisfied: if process proc1 is interrupted right before having set p1use = TRUE and the second process takes over the processor, at some stage both processes will be inside the critical section No assumption may be made about speeds and number of CPUs (independence of speed) – Satisfied No process outside its critical section may block other processes (correct functionality must be guaranteed if one of the processes is abnormally terminated) – Satisfied, provided the processes do not fail in the critical section or while setting the flags No process should have to wait forever to enter its critical section (access to the critical section is guaranteed in finite time) – Satisfied
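
    A sketch of the second attempt as described above (check the other process's flag, then set your own); p1use/p2use follow the slide's naming, the rest is an assumption.

    #include <stdbool.h>

    static volatile bool p1use = false, p2use = false;

    void proc1(void)
    {
        for (;;) {
            while (p2use)        /* check the other process's flag ... */
                ;
            p1use = true;        /* ... then claim the critical section */
            /* critical section */
            p1use = false;
            /* remainder section */
        }
    }

    proc2 is symmetric, testing p1use and setting p2use. If proc1 is preempted between the while test and "p1use = true", proc2 can pass its own test as well, and both processes end up inside the critical section.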

    16. Dekker’s Algorithm – Third Attempt Because one process could change its state after the other checked it, both processes could enter the critical section, so the second attempt failed Perhaps we can fix this by simply changing the order of two statements … Mutual exclusion conditions: No two processes may be simultaneously inside their critical regions (mutual exclusion) – Satisfied No assumption may be made about speeds and number of CPUs (independence of speed) – Satisfied No process outside its critical section may block other processes (correct functionality must be guaranteed if one of the processes is abnormally terminated) – Satisfied if the processes do not fail in the critical section or while setting the flags; Unsatisfied otherwise (the other process blocks) No process should have to wait forever to enter its critical section (access to the critical section is guaranteed in finite time) – Unsatisfied: if both processes set their flags to TRUE before either of them has executed the while statement, each will think the other has entered its critical section, causing deadlock
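
    A sketch of the third attempt, with the two statements swapped so each process sets its flag before testing the other's (names are illustrative).

    #include <stdbool.h>

    static volatile bool p1use = false, p2use = false;

    void proc1(void)
    {
        for (;;) {
            p1use = true;        /* claim first ... */
            while (p2use)        /* ... then wait for the other process */
                ;
            /* critical section */
            p1use = false;
            /* remainder section */
        }
    }

    proc2 is symmetric. Mutual exclusion now holds, but if both processes set their flags before either reaches its while loop, each waits for the other forever: deadlock.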

    17. Dekker’s Algorithm – Fourth Attempt The third attempt failed because deadlock occurred as a result of each process irreversibly changing its state to TRUE before actually having performed the test, with no way to back off from this position We try to fix this by having each process indicate its desire to enter the critical section while being prepared to defer to the other process Mutual exclusion conditions: No two processes may be simultaneously inside their critical regions (mutual exclusion) – Satisfied No assumption may be made about speeds and number of CPUs (independence of speed) – Unsatisfied: if the processes execute at exactly the same speed, neither of them will ever enter the critical section No process outside its critical section may block other processes (correct functionality must be guaranteed if one of the processes is abnormally terminated) – Satisfied if the processes do not fail in the critical section or while setting the flags; Unsatisfied otherwise (the other process blocks) No process should have to wait forever to enter its critical section (access to the critical section is guaranteed in finite time) – Satisfied
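
    A sketch of the fourth attempt: each process withdraws its claim when it sees that the other is also trying (names are illustrative).

    #include <stdbool.h>

    static volatile bool p1use = false, p2use = false;

    void proc1(void)
    {
        for (;;) {
            p1use = true;
            while (p2use) {          /* the other process wants in too */
                p1use = false;       /* back off for a moment ... */
                /* brief delay */
                p1use = true;        /* ... and then try again */
            }
            /* critical section */
            p1use = false;
            /* remainder section */
        }
    }

    proc2 is symmetric. If the two processes run in lock-step, they can keep setting and clearing their flags indefinitely with neither entering the critical section, which is why the speed-independence condition is marked unsatisfied above.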

    18. Dekker’s Algorithm – Correct Version This is an algorithm that respects the four conditions of mutual exclusion, but only for two concurrent processes: No two processes may be simultaneously inside their critical regions (mutual exclusion) – Satisfied No assumption may be made about speeds and number of CPUs (independence of speed) – Satisfied No process outside its critical section may block other processes (correct functionality is guaranteed if one of the processes is abnormally terminated) – Satisfied No process has to wait forever to enter its critical section (access to the critical section is guaranteed in finite time) – Satisfied When proc1 wants to enter its critical section, it sets its flag to true. It then checks the flag of proc2; if that is false, it enters the critical section straight away. Otherwise, proc1 consults turn to see whose turn it is. If it finds turn == proc2, it marks the critical section as not being requested by itself and waits until process proc2 leaves the critical section and sets the turn flag back to proc1.
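
    Since the transcript does not include the listing, here is a sketch of the algorithm exactly as the note above describes it, written for proc1 (proc2 mirrors it with the roles of the flags and of turn swapped); the enter/exit function names are illustrative.

    #include <stdbool.h>

    enum { PROC1, PROC2 };

    static volatile bool p1use = false, p2use = false;
    static volatile int  turn  = PROC1;

    void proc1_enter(void)
    {
        p1use = true;
        while (p2use) {              /* proc2 also wants the critical section */
            if (turn == PROC2) {     /* it is proc2's turn: defer to it */
                p1use = false;
                while (turn == PROC2)
                    ;                /* wait until proc2 leaves and flips turn */
                p1use = true;
            }
        }
        /* critical section */
    }

    void proc1_exit(void)
    {
        turn  = PROC2;               /* hand the turn to the other process */
        p1use = false;
    }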

    19. Peterson’s Algorithm Once proc1 has set its state flag to use the critical section, proc2 will not be able to enter the critical section. If proc2 is already in its critical section, then p2use is already TRUE and proc1 will block in the while condition. Peterson’s Algorithm can be generalized for n processes Mutual exclusion conditions: No two processes may be simultaneously inside their critical regions (mutual exclusion) – Satisfied No assumption may be made about speeds and number of CPUs (independence of speed) – Satisfied No process outside its critical section may block other processes (correct functionality is guaranteed if one of the processes is abnormally terminated) – Satisfied No process has to wait forever to enter its critical section (access to the critical section is guaranteed in finite time) – Satisfied
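
    A sketch of Peterson's algorithm for two processes, using the slide's p1use/p2use flag names plus a shared turn variable (the enter/exit function names are illustrative).

    #include <stdbool.h>

    enum { PROC1, PROC2 };

    static volatile bool p1use = false, p2use = false;
    static volatile int  turn  = PROC1;

    void proc1_enter(void)
    {
        p1use = true;                /* announce interest */
        turn  = PROC2;               /* give the other process priority */
        while (p2use && turn == PROC2)
            ;                        /* wait only while proc2 is interested AND has the turn */
        /* critical section */
    }

    void proc1_exit(void)
    {
        p1use = false;               /* leaving: proc2 may now enter */
    }

    proc2 is symmetric: it sets p2use, sets turn = PROC1 and waits while (p1use && turn == PROC1). Because each process gives the turn away before waiting, at most one of them can pass the while test, and neither can starve the other.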

    20. Hardware Solutions - Disabling Interrupts The simplest solution is to have the process disable interrupts right after entering the critical region and enable them just before leaving it With interrupts disabled, no interrupts will occur; the processor is switched from process to process only as a result of an interrupt occurring, so the processor will not be switched to another process It is unwise to give a user process the power to turn off interrupts Suppose it did so and never turned them on again … Suppose it is a multi-processor system … disabling interrupts on one processor will not help much It is a useful technique within the operating system itself, but it is not appropriate as a general mutual exclusion mechanism for user processes

    21. Special Machine Instructions In a multiprocessor configuration, several processors share access to a common main memory; the processors behave independently There is no interrupt mechanism between processors on which mutual exclusion can be based At the hardware level, an access to a memory location excludes any other access to the same memory location; based on this, several machine-level instructions that help with mutual exclusion have been added. The most common ones: Test & Set Instruction Exchange Instruction Because these operations are performed in a single instruction cycle, they are not subject to interference from other instructions

    22. Test & Set Instruction The shared variable bolt is initialized to 0 The only process that may enter its critical section is the one that finds the bolt variable equal to 0 All other processes that attempt to enter the critical section go into a busy-waiting mode When a process leaves the critical section, it resets bolt to 0; at this point only one of the waiting processes is granted access to the critical section
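
    A sketch of the protocol described above. The testset() helper models the atomic test-and-set instruction in plain C; on real hardware it is a single uninterruptible instruction (or a compiler builtin such as an atomic exchange), which is an assumption this sketch relies on.

    static volatile int bolt = 0;    /* 0 = critical section free */

    /* Returns 1 and sets *word to 1 if *word was 0; otherwise returns 0.
       Assumed to execute atomically. */
    static int testset(volatile int *word)
    {
        if (*word == 0) {
            *word = 1;
            return 1;
        }
        return 0;
    }

    void enter_critical(void)
    {
        while (!testset(&bolt))
            ;                        /* busy wait until we win the bolt */
        /* critical section */
    }

    void leave_critical(void)
    {
        bolt = 0;                    /* release: one of the waiters now gets in */
    }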

    23. Exchange Instruction The instruction exchanges the contents of a register with that of a memory location; during the execution of the instruction, access to that memory location is blocked for any other instruction The shared variable bolt is initialized to 0; each process holds a local variable key that is initialized to 1 The only process that enters the critical section is the one that finds the variable bolt equal to 0;
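
    A sketch of the exchange-based protocol described above; exchange() models the atomic swap of a register with a memory word, and the surrounding loop structure is an assumption based on the slide's description.

    static volatile int bolt = 0;        /* shared; 0 = critical section free */

    /* Assumed to swap *reg and *memory in one uninterruptible step. */
    static void exchange(volatile int *reg, volatile int *memory)
    {
        int tmp = *memory;
        *memory = *reg;
        *reg = tmp;
    }

    void process(void)
    {
        int key = 1;                     /* local to each process */
        for (;;) {
            do {
                exchange(&key, &bolt);
            } while (key != 0);          /* key becomes 0 only if bolt was 0 */
            /* critical section: here key == 0 and bolt == 1 */
            exchange(&key, &bolt);       /* swap back: bolt returns to 0, key to 1 */
            /* remainder section */
        }
    }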

    24. Properties of the Hardware Approach Advantages: Simple and easy to verify Applicable to any number of processes on either a single processor or multiple processors sharing main memory Can be used to support multiple critical sections, each defined by its own global variable Problems: Busy waiting is employed Starvation is possible – selection of a waiting process is arbitrary, thus some process could be denied access indefinitely Deadlock is possible – consider the following scenario: P1 executes the special instruction (test&set or exchange) and enters its critical section P1 is interrupted to give the processor to P2 (which has higher priority) P2 wants to use the same resource as P1, so it tries to enter the critical section; it tests the critical section variable and waits P1 will never be dispatched again because it is of lower priority than another ready process, P2

    25. Semaphores Because of the drawbacks of both the software and hardware solutions, we need to look into other solutions Dijkstra defined semaphores Two or more processes can cooperate by means of simple signals, such that a process can be forced to stop at a specific place until it has received a specific signal For signaling, special variables called semaphores are used

    26. Semaphores A special variable called a semaphore is used for signaling To transmit a signal, execute signal(s) – Dijkstra used V for signal (increment – verhogen in Dutch) To receive a signal, execute wait(s) – Dijkstra used P for wait (test – proberen in Dutch) If a process is waiting for a signal, it is suspended until that signal is sent The wait and signal operations cannot be interrupted A queue is used to hold processes waiting on the semaphore

    27. Semaphores – Simplified view We can view the semaphore as a variable that has an integer value; three operations are defined upon this value: A semaphore may be initialized to a non-negative value The wait operation decrements the semaphore value. If the value becomes negative, then the process executing wait is blocked The signal operation increments the semaphore value. If the value is not positive, then a process blocked by a wait operation is unblocked
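
    A sketch of the three operations in this simplified view. The queue handling and the block/wakeup scheduler services are only indicated in comments; they are assumptions, not a particular OS API, and wait/signal are assumed to execute atomically (how that is achieved is the topic of slides 40–42).

    typedef struct {
        int count;                   /* may go negative: -n means n processes are waiting */
        /* queue of blocked processes omitted in this sketch */
    } semaphore;

    void sem_init(semaphore *s, int value)   /* initialize to a non-negative value */
    {
        s->count = value;
    }

    void wait_op(semaphore *s)               /* Dijkstra's P (proberen) */
    {
        s->count--;
        if (s->count < 0) {
            /* enqueue the caller on s's queue and block it until a signal arrives */
        }
    }

    void signal_op(semaphore *s)             /* Dijkstra's V (verhogen) */
    {
        s->count++;
        if (s->count <= 0) {
            /* dequeue one waiting process and move it to the ready queue */
        }
    }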

    28. Semaphores – Formal view A queue is used to hold the processes waiting on a semaphore In what order should the processes be removed? The fairest policy is FIFO; a semaphore that implements this is called a strong semaphore A semaphore that doesn’t specify the order in which processes are removed from the queue is known as a weak semaphore

    29. Binary Semaphores A more restrictive version of semaphores A binary semaphore may take on only the values 0 and 1 In principle binary semaphores should be easier to implement, and they have the same expressive power as general semaphores Both general and binary semaphores have a queue to hold the waiting processes

    30. Example of Semaphore Mechanism Processes A, B and C depend on a result from process D Initially, process D has produced one item and it is available (the value of the semaphore is s=1)

    31. (1) A is running, B, D and C are ready; When A issues a wait(s) it will immediately pass the semaphore and will be able to continue its execution, so it rejoins the ready queue (2) B runs and will execute a wait(s) primitive and it is suspended (allowing D to run) (3) D completes a new result and issues a signal(s)

    32. (4) The signal(s) from (3) allows B to move to the ready queue; D rejoins the ready queue and C is allowed to run (5) C is suspended when it issues a wait(s); similarly A and B are suspended on the semaphore (6) D takes the processor again and produces one more result, calling signal(s) (7) C is removed from the semaphore queue and placed on the ready list Later cycles of D will release A and B from suspension

    33. Mutual Exclusion with Semaphores
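
    The slide's figure is not reproduced in the transcript; this is a minimal sketch of the standard pattern it illustrates: a semaphore initialized to 1 guards the critical section, and every process brackets the critical section with wait and signal. wait_op/signal_op are the operations sketched under slide 27.

    typedef struct { int count; /* plus a queue of blocked processes */ } semaphore;
    void wait_op(semaphore *s);      /* P: decrement, block if the result is negative */
    void signal_op(semaphore *s);    /* V: increment, wake one waiter if any */

    semaphore mutex;                 /* shared by all processes; sem_init(&mutex, 1) at startup */

    void process(void)
    {
        for (;;) {
            wait_op(&mutex);         /* entry protocol: blocks while another process holds it */
            /* critical section */
            signal_op(&mutex);       /* exit protocol: lets exactly one waiter in */
            /* remainder section */
        }
    }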

    34. Producer/Consumer Problem Problem formulation: One or more producers generate some type of data (records, characters, etc.) and place these in a buffer Consumers take items out of the buffer, one at a time; only one agent (either a producer or a consumer) can access the buffer at a time The system is to be constrained to prevent the overlap of buffer operations We examine a number of solutions to this problem

    35. Infinite linear array buffer

    36. One producer, One consumer, Infinite buffer product is a semaphore that counts the number of objects placed in the buffer and not yet consumed; product equals (in – out) (see the previous buffer) This solution works only if there is just one consumer, one producer and the intermediary buffer is infinite If there are multiple producers/consumers then append() and take() need to be serialized (as they increment/decrement the buffer pointers)
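
    A sketch of this single-producer/single-consumer solution under the stated assumptions; append()/take() and produce_item()/consume_item() are placeholders for the buffer operations named above, and the semaphore operations are those sketched under slide 27.

    typedef struct { int count; /* plus a queue of blocked processes */ } semaphore;
    void wait_op(semaphore *s);
    void signal_op(semaphore *s);

    semaphore product;               /* items produced and not yet consumed; sem_init(&product, 0) */

    void producer(void)
    {
        for (;;) {
            /* item = produce_item(); */
            /* append(item);            buffer[in] = item; in++; */
            signal_op(&product);     /* one more item is available */
        }
    }

    void consumer(void)
    {
        for (;;) {
            wait_op(&product);       /* block while the buffer is empty */
            /* item = take();           item = buffer[out]; out++; */
            /* consume_item(item); */
        }
    }

    The invariant is product == in - out. With more than one producer or consumer, append() and take() themselves must be serialized, since they update the shared in/out pointers; that is what the next slide adds.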

    37. Multiple producers, Multiple consumers, Infinite buffer buff_access_prod is a semaphore that synchronizes the access of the producers to the buffer buff_access_cons is a semaphore that synchronizes the access of the consumers to the buffer product is a semaphore that counts the produced objects not yet consumed Q. What is missing from this initialization code? A. The buff_access_prod and buff_access_cons semaphores need to be initialized to 1
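
    A sketch of this multi-producer/multi-consumer variant with an infinite buffer, using the three semaphores named above (the buffer operations remain placeholders).

    typedef struct { int count; /* plus a queue of blocked processes */ } semaphore;
    void wait_op(semaphore *s);
    void signal_op(semaphore *s);

    semaphore product;               /* produced, not yet consumed; init 0 */
    semaphore buff_access_prod;      /* serializes producers on append()/in; init 1 */
    semaphore buff_access_cons;      /* serializes consumers on take()/out;  init 1 */

    void producer(void)
    {
        for (;;) {
            /* item = produce_item(); */
            wait_op(&buff_access_prod);
            /* append(item); */
            signal_op(&buff_access_prod);
            signal_op(&product);     /* announce the new item */
        }
    }

    void consumer(void)
    {
        for (;;) {
            wait_op(&product);       /* wait until at least one item exists */
            wait_op(&buff_access_cons);
            /* item = take(); */
            signal_op(&buff_access_cons);
            /* consume_item(item); */
        }
    }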

    38. Finite (circular) buffer

    39. Multiple producers, Multiple consumers, Finite buffer buff_space – a semaphore that counts the free space in the buffer product – a semaphore that counts the items produced and not yet consumed buff_access_prod is a semaphore that synchronizes the access of the producers to the buffer buff_access_cons is a semaphore that synchronizes the access of the consumers to the buffer
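
    A sketch of the bounded-buffer solution with the four semaphores listed above; BUF_SIZE is an assumed capacity for the circular buffer and the buffer operations are placeholders.

    typedef struct { int count; /* plus a queue of blocked processes */ } semaphore;
    void wait_op(semaphore *s);
    void signal_op(semaphore *s);

    #define BUF_SIZE 16

    semaphore buff_space;            /* free slots; sem_init(&buff_space, BUF_SIZE) */
    semaphore product;               /* filled slots; sem_init(&product, 0) */
    semaphore buff_access_prod;      /* serializes producers; init 1 */
    semaphore buff_access_cons;      /* serializes consumers; init 1 */

    void producer(void)
    {
        for (;;) {
            /* item = produce_item(); */
            wait_op(&buff_space);    /* wait for a free slot */
            wait_op(&buff_access_prod);
            /* buffer[in] = item; in = (in + 1) % BUF_SIZE; */
            signal_op(&buff_access_prod);
            signal_op(&product);     /* one more filled slot */
        }
    }

    void consumer(void)
    {
        for (;;) {
            wait_op(&product);       /* wait for a filled slot */
            wait_op(&buff_access_cons);
            /* item = buffer[out]; out = (out + 1) % BUF_SIZE; */
            signal_op(&buff_access_cons);
            signal_op(&buff_space);  /* one more free slot */
            /* consume_item(item); */
        }
    }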

    40. Semaphores implementation As mentioned earlier, it is imperative that the wait and signal operations be atomic primitives The essence of the problem is mutual exclusion: only one process at a time may manipulate a semaphore with either a wait or a signal operation Any of the software schemes would do (Peterson’s or Dekker’s algorithm), but they carry a large amount of processing overhead The alternative is to use hardware-supported schemes for mutual exclusion: Semaphores implemented with the test&set instruction Semaphores implemented with disable/enable interrupts

    41. Semaphores implementation with test&set The semaphore s is a structure, as explained earlier, but now includes a new integer component, s.flag This involves a form of busy waiting, but the primitives wait and signal are short, so the amount of waiting time involved should be minor
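
    A sketch of wait and signal protected by a test-and-set spin flag, following the structure described above: s.flag guards the semaphore's own bookkeeping, so the busy wait lasts only for the few instructions inside. testset() is the atomic primitive sketched under slide 22 and is assumed to be provided; the queue handling stays in comments.

    int testset(volatile int *word);     /* atomic: returns 1 and sets *word if it was 0 */

    typedef struct {
        volatile int flag;               /* 0 = the semaphore structure is free */
        int count;
        /* queue of blocked processes omitted */
    } semaphore_ts;

    void wait_ts(semaphore_ts *s)
    {
        while (!testset(&s->flag))
            ;                            /* short busy wait for the structure */
        s->count--;
        if (s->count < 0) {
            /* enqueue the caller and block it, releasing s->flag as it blocks */
        }
        s->flag = 0;
    }

    void signal_ts(semaphore_ts *s)
    {
        while (!testset(&s->flag))
            ;
        s->count++;
        if (s->count <= 0) {
            /* dequeue one waiting process and move it to the ready queue */
        }
        s->flag = 0;
    }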

    42. Semaphores implemented using interrupts For a single-processor system it is possible to inhibit interrupts for the duration of a wait or signal operation The relatively short duration of these operations means that this approach is reasonable
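
    A sketch of the single-processor variant under the same assumptions: interrupts are inhibited only for the few instructions of wait or signal. disable_interrupts()/enable_interrupts() stand for machine-specific operations and are kept as comments, as is the queue handling.

    typedef struct { int count; /* plus a queue of blocked processes */ } semaphore;

    void wait_int(semaphore *s)
    {
        /* disable_interrupts(); */
        s->count--;
        if (s->count < 0) {
            /* enqueue and block the caller (interrupts are re-enabled as it blocks) */
        }
        /* enable_interrupts(); */
    }

    void signal_int(semaphore *s)
    {
        /* disable_interrupts(); */
        s->count++;
        if (s->count <= 0) {
            /* move one blocked process to the ready queue */
        }
        /* enable_interrupts(); */
    }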

    43. References “Operating Systems”, William Stallings, ISBN 0-13-032986-X “Operating Systems – A Modern Perspective”, Gary Nutt, ISBN 0-8053-1295-1
