
CS 471 - Lecture 3 Process Synchronization & Deadlock








  1. CS 471 - Lecture 3: Process Synchronization & Deadlock
  Ch. 6 (6.1-6.7), 7. Fall 2009

  2. Process Synchronization
  • Race Conditions
  • The Critical Section Problem
  • Synchronization Hardware
  • Classical Problems of Synchronization
  • Semaphores
  • Monitors
  • Deadlock
  GMU – CS 571

  3. Concurrent Access to Shared Data
  • Suppose that two processes A and B have access to a shared variable "Balance".
      PROCESS A: Balance = Balance - 100
      PROCESS B: Balance = Balance + 200
  • Further, assume that Process A and Process B are executing concurrently in a time-shared, multi-programmed system.

  4. Concurrent Access to Shared Data
  • The statement "Balance = Balance - 100" is implemented by several machine-level instructions, such as:
      A1. lw  $t0, BALANCE
      A2. sub $t0, $t0, 100
      A3. sw  $t0, BALANCE
  • Similarly, "Balance = Balance + 200" can be implemented by the following:
      B1. lw  $t1, BALANCE
      B2. add $t1, $t1, 200
      B3. sw  $t1, BALANCE

  5. Race Conditions
  • Observe: in a time-shared system the exact instruction execution order cannot be predicted.
  • Scenario 1:
      A1. lw  $t0, BALANCE
      A2. sub $t0, $t0, 100
      A3. sw  $t0, BALANCE
      --- context switch ---
      B1. lw  $t1, BALANCE
      B2. add $t1, $t1, 200
      B3. sw  $t1, BALANCE
    Balance is increased by 100.
  • Scenario 2:
      A1. lw  $t0, BALANCE
      A2. sub $t0, $t0, 100
      --- context switch ---
      B1. lw  $t1, BALANCE
      B2. add $t1, $t1, 200
      B3. sw  $t1, BALANCE
      --- context switch ---
      A3. sw  $t0, BALANCE
    Balance is decreased by 100.
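The lost update in Scenario 2 can be replayed deterministically. The sketch below (Python standing in for the machine instructions; the starting balance of 1000 is an assumption for the demo) gives each "process" its own register variable and interleaves the load/modify/store steps by hand:

```python
# Deterministic replay of Scenario 2: each process works on a private
# register copy, and the load/modify/store steps are interleaved by hand.
balance = 1000        # shared memory word BALANCE (initial value assumed)

# Process A begins: Balance = Balance - 100
t0 = balance          # A1: lw  $t0, BALANCE
t0 = t0 - 100         # A2: sub $t0, $t0, 100
# --- context switch to B before A3 ---
t1 = balance          # B1: lw  $t1, BALANCE  (still sees 1000)
t1 = t1 + 200         # B2: add $t1, $t1, 200
balance = t1          # B3: sw  $t1, BALANCE  (balance is now 1200)
# --- context switch back to A ---
balance = t0          # A3: sw  $t0, BALANCE  (B's update is lost)

print(balance)        # 900: net effect is -100 instead of +100
```

Because A's stale register value overwrites B's store, the final balance is 900 rather than the correct 1100.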

  6. Race Conditions
  • Situations where multiple processes read or write shared data and the final result depends on exactly who runs when are called race conditions.
  • They are a serious problem for most concurrent systems using shared variables! Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.
  • We must make sure that some high-level code sections are executed atomically.
  • An atomic operation completes in its entirety, without interruption by any other potentially conflicting process.

  7. The Critical-Section Problem
  • n processes all compete to use some shared data.
  • Each process has a code segment, called its critical section (critical region), in which the shared data is accessed.
  • Problem: ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.
  • The execution of the critical sections by the processes must be mutually exclusive in time.

  8. Mutual Exclusion

  9. Solving the Critical-Section Problem
  Any solution to the problem must satisfy three conditions.
  • Mutual Exclusion: no two processes may be simultaneously inside the same critical section.
  • Bounded Waiting: no process should have to wait forever to enter a critical section.
  • Progress: no process executing a code segment unrelated to a given critical section can block another process trying to enter the same critical section.
  • Arbitrary Speed: in addition, no assumption can be made about the relative speed of different processes (though all processes have a non-zero speed).

  10. Attempts to Solve Mutual Exclusion
      do {
        entry section
        critical section
        exit section
        remainder section
      } while (1);
  • General structure as above
  • Two processes, P1 and P2
  • Processes may share common variables
  • Assume each statement is executed atomically (is this realistic?)

  11. Algorithm 1
  • Shared variables:
      int turn; initially turn = 0
      turn == i ⇒ Pi can enter its critical section
  • Process Pi:
      do {
        while (turn != i) ; // busy waiting; it can be forever
        critical section
        turn = j;
        remainder section
      } while (1);
  • Satisfies mutual exclusion, but not progress.

  12. Algorithm 2
  • Shared variables:
      boolean flag[2]; initially flag[0] = flag[1] = false
      flag[i] == true ⇒ Pi wants to enter its critical region
  • Process Pi:
      do {
        flag[i] = true;
        while (flag[j]) ; // Pi and Pj may both end up waiting
        critical section
        flag[i] = false;
        remainder section
      } while (1);
  • Satisfies mutual exclusion, but not progress.

  13. Algorithm 3: Peterson's Solution
  • Combines the shared variables of the previous attempts.
  • Process Pi:
      do {
        flag[i] = true;
        turn = j;
        while (flag[j] && turn == j) ; // exits when flag[j] == false or turn == i
        critical section
        flag[i] = false;
        remainder section
      } while (1);
  • Meets our 3 requirements (assuming each individual statement is executed atomically).
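Peterson's algorithm can be exercised as a sketch in Python. Caveat: this relies on CPython's global interpreter lock making plain loads and stores effectively sequentially consistent; on real hardware the same code would need memory barriers. The read-modify-write of the shared counter is deliberately split into separate statements so that it is not atomic:

```python
import threading

# Sketch of Peterson's algorithm for two threads. CPython's GIL makes the
# flag/turn loads and stores behave sequentially consistently enough for a
# demo; this is NOT a portable implementation for real hardware.
flag = [False, False]
turn = 0
count = 0                        # shared data guarded by the critical section
N = 20_000

def worker(i):
    global turn, count
    j = 1 - i
    for _ in range(N):
        flag[i] = True           # entry section: I want to enter
        turn = j                 # but you go first if we both want in
        while flag[j] and turn == j:
            pass                 # busy-wait
        tmp = count              # critical section: a deliberately
        count = tmp + 1          # non-atomic read-modify-write
        flag[i] = False          # exit section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(count)                     # 40000 if mutual exclusion held
```

Without the entry/exit sections, some increments would be lost; with them, all 2N increments survive.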

  14. Synchronization Hardware
  • Many machines provide special hardware instructions that help to achieve mutual exclusion.
  • The TestAndSet (TAS) instruction tests and modifies the content of a memory word atomically.
  • TAS R1, LOCK reads the contents of the memory word LOCK into register R1, and stores a nonzero value (e.g. 1) at the memory word LOCK (atomically).
  • Assume LOCK = 0: calling TAS R1, LOCK will set R1 to 0, and set LOCK to 1.
  • Assume LOCK = 1: calling TAS R1, LOCK will set R1 to 1, and LOCK stays 1.

  15. Mutual Exclusion with Test-and-Set
  • Initially, the shared memory word LOCK = 0.
  • Process Pi:
      do {
      entry_section:
        TAS  R1, LOCK
        CMP  R1, #0      /* was LOCK = 0? */
        JNE  entry_section
        critical section
        MOVE LOCK, #0    /* exit section */
        remainder section
      } while (1);

  16. Classical Problems of Synchronization
  • Producer-Consumer Problem
  • Readers-Writers Problem
  • Dining-Philosophers Problem
  • We will develop solutions using semaphores and monitors as synchronization tools (no busy waiting).

  17. Producer/Consumer Problem
  Producer → bounded-size buffer (N) → Consumer
  • The producer and consumer execute concurrently, but only one should be accessing the buffer at a time.
  • The producer puts items into the buffer area but should not be allowed to put items into a full buffer.
  • The consumer process consumes items from the buffer but cannot remove information from an empty buffer.

  18. Readers-Writers Problem
  • A data object (e.g. a file) is to be shared among several concurrent processes.
  • A writer process must have exclusive access to the data object.
  • Multiple reader processes may access the shared data simultaneously without a problem.
  • There are several variations on this general problem.

  19. Dining-Philosophers Problem
  Five philosophers share a common circular table. There are five chopsticks and a bowl of rice (in the middle). When a philosopher gets hungry, he tries to pick up the two chopsticks closest to him. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in use. When done, he puts down both of his chopsticks, one after the other.

  20. Synchronization
  • Most synchronization can be regarded as either:
  • Mutual exclusion (competition): making sure that only one process is executing a CRITICAL SECTION (touching a variable or data structure, for example) at a time, or
  • Condition synchronization (cooperation): making sure that a given process does not proceed until some condition holds (e.g. that a variable contains a given value).
  • The sample problems will illustrate this.

  21. Semaphores
  • A language-level synchronization construct introduced by E.W. Dijkstra (1965).
  • Motivation: avoid busy waiting by blocking a process's execution until some condition is satisfied.
  • Each semaphore has an integer value and a queue.
  • Two operations are defined on a semaphore variable s:
      wait(s)   (also called P(s) or down(s))
      signal(s) (also called V(s) or up(s))
  • We will assume that these are the only user-visible operations on a semaphore.
  • Semaphores are typically available in thread implementations.

  22. Semaphore Operations
  • Conceptually, a semaphore has an integer value greater than or equal to 0.
  • wait(s): wait/block until s.value > 0; s.value--; /* executed atomically! */
  • A process executing the wait operation on a semaphore with value 0 is blocked (put on a queue) until the semaphore's value becomes greater than 0.
  • No busy waiting.
  • signal(s): s.value++; /* executed atomically! */

  23. Semaphore Operations (cont.)
  • If multiple processes are blocked on the same semaphore s, only one of them will be awakened when another process performs the signal(s) operation. Who will have priority?
  • Binary semaphores only have value 0 or 1.
  • Counting semaphores can have any non-negative value. [We will see some of these later.]

  24. Critical Section Problem with Semaphores
  • Shared data:
      semaphore mutex; /* initially mutex = 1 */
  • Process Pi:
      do {
        wait(mutex);
        critical section
        signal(mutex);
        remainder section
      } while (1);

  25. Re-visiting the "Simultaneous Balance Update" Problem
  • Shared data:
      int Balance;
      semaphore mutex; // initially mutex = 1
  • Process A:
      ...
      wait(mutex);
      Balance = Balance - 100;
      signal(mutex);
      ...
  • Process B:
      ...
      wait(mutex);
      Balance = Balance + 200;
      signal(mutex);
      ...
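The fixed balance update can be run as a sketch with Python's threading.Semaphore standing in for the slide's wait/signal pair (the starting balance of 1000 is an assumption for the demo):

```python
import threading

# Balance update guarded by a binary semaphore: the race from the earlier
# slides can no longer lose an update.
balance = 1000                    # starting value assumed for the demo
mutex = threading.Semaphore(1)

def process_a():
    global balance
    mutex.acquire()               # wait(mutex)
    balance = balance - 100       # critical section
    mutex.release()               # signal(mutex)

def process_b():
    global balance
    mutex.acquire()               # wait(mutex)
    balance = balance + 200       # critical section
    mutex.release()               # signal(mutex)

ta = threading.Thread(target=process_a)
tb = threading.Thread(target=process_b)
ta.start(); tb.start()
ta.join(); tb.join()
print(balance)                    # 1100 regardless of interleaving
```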

  26. Semaphore as a General Synchronization Tool
  • Suppose we need to execute B in Pj only after A has executed in Pi.
  • Use a semaphore flag initialized to 0.
  • Code:
      Pi:              Pj:
        A                wait(flag)
        signal(flag)     B
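A runnable sketch of this ordering pattern, with a semaphore initialized to 0 (the trace list is an illustrative addition to make the ordering observable):

```python
import threading

# Enforcing "B in Pj runs only after A in Pi": Pj blocks in wait(flag)
# until Pi performs signal(flag).
flag = threading.Semaphore(0)
trace = []

def pi():
    trace.append("A")    # statement A
    flag.release()       # signal(flag)

def pj():
    flag.acquire()       # wait(flag): blocks until Pi has executed A
    trace.append("B")    # statement B

tj = threading.Thread(target=pj)
ti = threading.Thread(target=pi)
tj.start()               # start Pj first to show that it really waits
ti.start()
tj.join(); ti.join()
print(trace)             # ['A', 'B']
```

Even though Pj is started first, A always precedes B in the trace.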

  27. Returning to Producer/Consumer
  Producer → bounded-size buffer (N) → Consumer
  • The producer and consumer execute concurrently, but only one should be accessing the buffer at a time.
  • The producer puts items into the buffer area but should not be allowed to put items into a full buffer.
  • The consumer process consumes items from the buffer but cannot remove information from an empty buffer.

  28. Producer-Consumer Problem (Cont.)
  • Make sure that:
  • The producer and the consumer do not access the buffer area and related variables at the same time (competition).
  • No item is made available to the consumer if all the buffer slots are empty (cooperation).
  • No slot in the buffer is made available to the producer if all the buffer slots are full (cooperation).

  29. Producer-Consumer Problem
  • Shared data:
      semaphore full, empty, mutex;
  • Initially:
      full = 0  /* the number of full buffers (counting) */
      empty = n /* the number of empty buffers (counting) */
      mutex = 1 /* semaphore controlling access to the buffer pool (binary) */

  30. Producer Process
      do {
        ... produce an item in p ...
        wait(empty);
        wait(mutex);
        ... add p to buffer ...
        signal(mutex);
        signal(full);
      } while (1);

  31. Consumer Process
      do {
        wait(full);
        wait(mutex);
        ... remove an item from buffer to c ...
        signal(mutex);
        signal(empty);
        ... consume the item in c ...
      } while (1);
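The producer and consumer slides combine into a runnable sketch. The buffer size of 4 and the 20-item stream are assumptions for the demo; threading.Semaphore stands in for the slide's wait/signal:

```python
import threading
from collections import deque

# Producer/consumer with the full/empty/mutex semaphores from the slides.
N = 4                             # bounded buffer size (assumed)
buffer = deque()
mutex = threading.Semaphore(1)    # access to the buffer pool
empty = threading.Semaphore(N)    # number of empty slots (counting)
full  = threading.Semaphore(0)    # number of full slots (counting)
consumed = []

def producer():
    for item in range(20):        # "produce an item in p"
        empty.acquire()           # wait(empty)
        mutex.acquire()           # wait(mutex)
        buffer.append(item)       # add p to buffer
        mutex.release()           # signal(mutex)
        full.release()            # signal(full)

def consumer():
    for _ in range(20):
        full.acquire()            # wait(full)
        mutex.acquire()           # wait(mutex)
        consumed.append(buffer.popleft())  # remove an item to c
        mutex.release()           # signal(mutex)
        empty.release()           # signal(empty)

tp = threading.Thread(target=producer)
tc = threading.Thread(target=consumer)
tp.start(); tc.start()
tp.join(); tc.join()
print(consumed)                   # [0, 1, ..., 19], in FIFO order
```

At most 4 items are ever buffered, the producer blocks on empty when the buffer is full, and the consumer blocks on full when it is empty.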

  32. Readers-Writers Problem
  • A data object (e.g. a file) is to be shared among several concurrent processes.
  • A writer process must have exclusive access to the data object.
  • Multiple reader processes may access the shared data simultaneously without a problem.
  • Shared data:
      semaphore mutex, wrt;
      int readcount;
  • Initially: mutex = 1, readcount = 0, wrt = 1

  33. Readers-Writers Problem: Writer Process
      wait(wrt);
      ... writing is performed ...
      signal(wrt);

  34. Readers-Writers Problem: Reader Process
      wait(mutex);
      readcount++;
      if (readcount == 1)
        wait(wrt);
      signal(mutex);
      ... reading is performed ...
      wait(mutex);
      readcount--;
      if (readcount == 0)
        signal(wrt);
      signal(mutex);
  • Is this solution o.k.?
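The writer and reader processes translate into a runnable sketch (the shared dict, the values 42 and 7, and the reads list are illustrative additions so the behavior can be observed):

```python
import threading

# The slides' readers-writers solution: the first reader locks writers out,
# the last reader lets them back in. Note this variant can starve writers.
mutex = threading.Semaphore(1)    # protects readcount
wrt = threading.Semaphore(1)      # exclusive access for writers
readcount = 0
shared = {"value": 0}             # the shared data object (assumed)
reads = []

def writer(v):
    wrt.acquire()                 # wait(wrt)
    shared["value"] = v           # writing is performed
    wrt.release()                 # signal(wrt)

def reader():
    global readcount
    mutex.acquire()               # wait(mutex)
    readcount += 1
    if readcount == 1:            # first reader locks out writers
        wrt.acquire()
    mutex.release()               # signal(mutex)
    reads.append(shared["value"]) # reading is performed
    mutex.acquire()               # wait(mutex)
    readcount -= 1
    if readcount == 0:            # last reader lets writers back in
        wrt.release()
    mutex.release()               # signal(mutex)

writer(42)
threads = [threading.Thread(target=reader) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
writer(7)                         # only possible once all readers are done
print(reads, readcount)
```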

  35. Dining-Philosophers Problem
  • Shared data:
      semaphore chopstick[5];
      Initially, all semaphore values are 1.
  Five philosophers share a common circular table. There are five chopsticks and a bowl of rice (in the middle). When a philosopher gets hungry, he tries to pick up the two chopsticks closest to him. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in use. When done, he puts down both of his chopsticks, one after the other.

  36. Dining-Philosophers Problem
  • Philosopher i:
      do {
        wait(chopstick[i]);
        wait(chopstick[(i+1) % 5]);
        ... eat ...
        signal(chopstick[i]);
        signal(chopstick[(i+1) % 5]);
        ... think ...
      } while (1);
  • Is this solution o.k.?
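The slide's question hints that this solution is not o.k.: if all five philosophers grab their left chopstick at the same instant, each waits forever for the right one (deadlock). A classic fix, sketched here as a variation not shown on the slides, is to have odd-numbered philosophers pick up their chopsticks in the opposite order, which breaks the circular wait:

```python
import threading

# Dining philosophers with the standard asymmetric fix: odd-numbered
# philosophers reverse their pickup order, so no circular wait can form.
chopstick = [threading.Semaphore(1) for _ in range(5)]
meals = [0] * 5

def philosopher(i):
    first, second = i, (i + 1) % 5
    if i % 2 == 1:                    # odd philosophers reverse the order
        first, second = second, first
    for _ in range(100):
        chopstick[first].acquire()    # wait(chopstick[first])
        chopstick[second].acquire()   # wait(chopstick[second])
        meals[i] += 1                 # eat
        chopstick[second].release()   # signal(chopstick[second])
        chopstick[first].release()    # signal(chopstick[first])
        # think

threads = [threading.Thread(target=philosopher, args=(i,))
           for i in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)                          # everyone ate 100 times, no deadlock
```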

  37. Semaphores
  • It is generally assumed that semaphores are fair, in the sense that processes complete semaphore operations in the same order they start them.
  • Problems with semaphores:
  • They are pretty low-level. When using them for mutual exclusion, for example (the most common usage), it is easy to forget a wait or a signal, especially when they do not occur in strictly matched pairs.
  • Their use is scattered. If you want to change how processes synchronize access to a data structure, you have to find all the places in the code where they touch that structure, which is difficult and error-prone.
  • Order matters: what could happen if we switched the two 'wait' instructions in the consumer (or producer)?

  38. Deadlock and Starvation
  • Deadlock: two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.
  • Let S and Q be two semaphores initialized to 1:
      P0:           P1:
        wait(S);      wait(Q);
        wait(Q);      wait(S);
        ...           ...
        signal(S);    signal(Q);
        signal(Q);    signal(S);
  • Starvation: indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.
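P0 and P1 deadlock because they acquire S and Q in opposite orders. A sketch of the standard remedy, which is not on the slide but follows directly from it: impose a global acquisition order and have both processes take S before Q.

```python
import threading

# Deadlock avoidance by lock ordering: both processes acquire S before Q,
# so the circular-wait condition can never arise.
S = threading.Semaphore(1)
Q = threading.Semaphore(1)
log = []

def process(name):
    for _ in range(1000):
        S.acquire()      # both processes: wait(S) first...
        Q.acquire()      # ...then wait(Q)
        log.append(name) # work needing both resources
        Q.release()      # signal(Q)
        S.release()      # signal(S)

t0 = threading.Thread(target=process, args=("P0",))
t1 = threading.Thread(target=process, args=("P1",))
t0.start(); t1.start()
t0.join(); t1.join()
print(len(log))          # 2000: both processes finished, no deadlock
```

With the original opposite orders, the same program could hang forever; with a consistent order it always completes.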

  39. High-Level Synchronization Mechanisms
  • Several high-level mechanisms that are easier to use have been proposed:
      Monitors
      Critical Regions
      Read/Write locks
  • We will study monitors (Java and Pthreads provide synchronization mechanisms based on monitors).
  • They were suggested by Dijkstra, developed more thoroughly by Brinch Hansen, and formalized nicely by Tony Hoare in the early 1970s.
  • Several parallel programming languages have incorporated some version of monitors as their fundamental synchronization mechanism.

  40. Monitors
  • A monitor is a shared object with operations, internal state, and a number of condition queues.
  • Only one operation of a given monitor may be active at a given point in time.
  • A process that calls a busy monitor is delayed until the monitor is free.
  • On behalf of its calling process, any operation may suspend itself by waiting on a condition.
  • An operation may also signal a condition, in which case one of the waiting processes is resumed, usually the one that waited first.

  41. Monitors
  • Mutual exclusion (competition) synchronization with monitors: access to the shared data in the monitor is limited by the implementation to a single process at a time; therefore, mutually exclusive access is inherent in the semantic definition of the monitor. Multiple calls are queued.
  • Condition (cooperation) synchronization with monitors:
      delay takes a queue-type parameter; it puts the process that calls it in the specified queue and removes its exclusive access rights to the monitor's data structure.
      continue takes a queue-type parameter; it disconnects the caller from the monitor, thus freeing the monitor for use by another process. It also takes a process from the parameter queue (if the queue isn't empty) and starts it.

  42. Shared Data with Monitors
      Monitor SharedData {
        int balance;
        void updateBalance(int amount);
        int getBalance();
        void init(int startValue) {
          balance = startValue;
        }
        void updateBalance(int amount) {
          balance += amount;
        }
        int getBalance() {
          return balance;
        }
      }
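The SharedData monitor can be sketched in Python, where the monitor's implicit mutual exclusion is modeled by one Lock guarding every method (the method names are adapted to Python convention; the 100-thread demo values are assumptions):

```python
import threading

# Sketch of the SharedData monitor: one hidden Lock plays the role of the
# monitor's entry queue, so at most one operation is active at a time.
class SharedData:
    def __init__(self, start_value):        # init(startValue)
        self._lock = threading.Lock()       # the monitor's entry queue
        self._balance = start_value

    def update_balance(self, amount):       # updateBalance(amount)
        with self._lock:                    # only one operation active
            self._balance += amount

    def get_balance(self):                  # getBalance()
        with self._lock:
            return self._balance

data = SharedData(1000)
threads = [threading.Thread(target=data.update_balance, args=(10,))
           for _ in range(100)]
for t in threads: t.start()
for t in threads: t.join()
print(data.get_balance())                   # 2000: no update was lost
```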

  43. Monitors
  • To allow a process to wait within the monitor, a condition variable must be declared, as:
      condition x, y;
  • A condition variable can only be used with the operations wait and signal.
  • The operation x.wait(); means that the process invoking it is suspended until another process invokes x.signal();
  • The x.signal() operation resumes exactly one suspended process on condition variable x. If no process is suspended on condition variable x, then the signal operation has no effect.
  • The wait and signal operations of monitors are not the same as the semaphore wait and signal operations!

  44. Monitor with Condition Variables
  • When a process P "signals" to wake up a process Q that was waiting on a condition, potentially both of them can be active.
  • However, monitor rules require that at most one process be active within the monitor. Who will go first?
  • Signal-and-wait: P waits until Q leaves the monitor (or until Q waits for another condition).
  • Signal-and-continue: Q waits until P leaves the monitor (or until P waits for another condition).
  • Signal-and-leave: P has to leave the monitor after signaling (Concurrent Pascal).
  • The design decision differs between programming languages.

  45. Monitor with Condition Variables

  46. Producer-Consumer Problem with Monitors
      Monitor Producer-consumer {
        condition full, empty;
        int count;
        void insert(int item); // the following slide
        int remove();          // the following slide
        void init() {
          count = 0;
        }
      }

  47. Producer-Consumer Problem with Monitors (Cont.)
      void insert(int item) {
        if (count == N)
          full.wait();
        insert_item(item); // add the new item
        count++;
        if (count == 1)
          empty.signal();
      }
      int remove() {
        int m;
        if (count == 0)
          empty.wait();
        m = remove_item(); // retrieve one item
        count--;
        if (count == N-1)
          full.signal();
        return m;
      }

  48. Producer-Consumer Problem with Monitors (Cont.)
      void producer() { // producer process
        while (1) {
          item = Produce_Item();
          Producer-consumer.insert(item);
        }
      }
      void consumer() { // consumer process
        while (1) {
          item = Producer-consumer.remove();
          consume_item(item);
        }
      }
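The monitor on the previous slides can be sketched with Python's threading.Condition. One caveat worth noting: the slide's "if (count == N)" test assumes signal-and-wait semantics; Python's Condition is signal-and-continue (the awakened process re-enters later), so this sketch re-checks the condition in a while loop after waking up. The buffer size of 4 and the 10-item stream are assumptions for the demo:

```python
import threading

# Producer-consumer monitor sketch. Python's Condition has
# signal-and-continue semantics, hence "while" instead of the slide's "if".
class ProducerConsumer:
    def __init__(self, n):
        self.n = n                      # buffer capacity N
        self.items = []
        self.lock = threading.Lock()    # the monitor's mutual exclusion
        self.full = threading.Condition(self.lock)
        self.empty = threading.Condition(self.lock)

    def insert(self, item):
        with self.lock:
            while len(self.items) == self.n:
                self.full.wait()        # buffer full: wait on "full"
            self.items.append(item)     # insert_item(item)
            self.empty.notify()         # empty.signal()

    def remove(self):
        with self.lock:
            while not self.items:
                self.empty.wait()       # buffer empty: wait on "empty"
            item = self.items.pop(0)    # remove_item()
            self.full.notify()          # full.signal()
            return item

pc = ProducerConsumer(4)
out = []
c = threading.Thread(target=lambda: out.extend(pc.remove() for _ in range(10)))
p = threading.Thread(target=lambda: [pc.insert(i) for i in range(10)])
c.start(); p.start()
c.join(); p.join()
print(out)                              # [0, 1, ..., 9]
```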

  49. Monitors
  • Evaluation of monitors:
  • Strong support for mutual exclusion synchronization.
  • Support for condition synchronization is very similar to that of semaphores, so it has the same problems.
  • Building a correct monitor requires that one think about the "monitor invariant". The monitor invariant is a predicate that captures the notion "the state of the monitor is consistent."
      It needs to be true initially and at monitor exit.
      It also needs to be true at every wait statement.
  • Monitors can be implemented in terms of semaphores ⇒ semaphores can do anything monitors can.
  • The inverse is also true; it is trivial to build a semaphore from monitors.

  50. Dining-Philosophers Problem with Monitors
      monitor dp {
        enum {thinking, hungry, eating} state[5];
        condition self[5];
        void pickup(int i)  // following slides
        void putdown(int i) // following slides
        void test(int i)    // following slides
        void init() {
          for (int i = 0; i < 5; i++)
            state[i] = thinking;
        }
      }
  • Each philosopher will perform:
      dp.pickup(i);
      ... eat ...
      dp.putdown(i);
