
Chapter 6: Process Synchronization



  1. Chapter 6: Process Synchronization

  2. Chapter 6: Process Synchronization • Background → atomic operation • The Critical-Section Problem • Peterson’s Solution • Synchronization Hardware • Semaphores • Classic Problems of Synchronization • Monitors

  3. Background • Concurrent access to shared data may result in data inconsistency • Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes • Suppose that we want to provide a solution to the producer-consumer problem that fills all the buffers. • We can do so by having an integer counter that keeps track of the number of items in the buffer. • Initially, counter is set to 0. • It is incremented by the producer after it produces a new item. • It is decremented by the consumer after it consumes an item.

  4. Shared data among producer and consumer

  #define BUFFER_SIZE 10

  typedef struct {
      int content;
  } item;

  item buffer[BUFFER_SIZE];
  int in = 0;        // initial state
  int out = 0;       // buffer is empty
  int counter = 0;

  5. Producer

  while (TRUE) {
      // produce an item and put it in nextProduced
      while (counter == BUFFER_SIZE)
          ;   // do nothing: buffer is full
      buffer[in] = nextProduced;
      in = (in + 1) % BUFFER_SIZE;
      counter++;
  }

  6. Consumer

  while (TRUE) {
      while (counter == 0)
          ;   // do nothing: buffer is empty
      nextConsumed = buffer[out];
      out = (out + 1) % BUFFER_SIZE;
      counter--;
      // consume the item in nextConsumed
  }

  7. Race Condition • Although the producer and consumer routines are correct separately, they may not function correctly when executed concurrently. • Suppose • that the value of the variable counter is currently 5 and • that the producer and consumer execute the statements “counter++” and “counter--” concurrently. • Following the execution of these two statements, the value of the variable counter may be 4, 5, or 6. • Why?

  8. Race Condition • counter++ could be implemented as
      register1 = counter
      register1 = register1 + 1
      counter = register1
  • counter-- could be implemented as
      register2 = counter
      register2 = register2 - 1
      counter = register2
  • Consider this execution interleaving with counter = 5 initially:
      S0: producer executes register1 = counter        {register1 = 5}
      S1: producer executes register1 = register1 + 1  {register1 = 6}
      S2: consumer executes register2 = counter        {register2 = 5}
      S3: consumer executes register2 = register2 - 1  {register2 = 4}
      S4: producer executes counter = register1        {counter = 6}
      S5: consumer executes counter = register2        {counter = 4}
  • The final result is counter = 4, even though one increment and one decrement should have left it at 5.
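
  The interleaving above is easy to reproduce. The following is a minimal sketch (not from the slides) in C with POSIX threads: two threads update a shared counter without synchronization, so the final value is often not 0. The function names and the iteration count NITER are illustrative.

      #include <pthread.h>
      #include <stdio.h>

      #define NITER 1000000          /* arbitrary iteration count */

      static int counter = 0;        /* shared, deliberately unprotected */

      static void *producer_like(void *arg) {
          for (int i = 0; i < NITER; i++)
              counter++;             /* load, add 1, store: three steps */
          return NULL;
      }

      static void *consumer_like(void *arg) {
          for (int i = 0; i < NITER; i++)
              counter--;             /* load, subtract 1, store */
          return NULL;
      }

      int main(void) {
          pthread_t t1, t2;
          pthread_create(&t1, NULL, producer_like, NULL);
          pthread_create(&t2, NULL, consumer_like, NULL);
          pthread_join(t1, NULL);
          pthread_join(t2, NULL);
          printf("counter = %d (expected 0)\n", counter);
          return 0;
      }

  Compiled with -pthread, repeated runs typically print different nonzero values, which is exactly the race the slide describes.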

  9. “counter++” is not ATOMIC
  [Diagram: an Execution box (E-box) fetches data from a Storage box (S-box), takes the operand, performs the execution, and writes the result back — execution and storage are separate units.]
  Examples of E-box / S-box pairs: CPU / Memory, Computer / Disk.
  Separation of Execution Box & Storage Box.

  10. Race Condition: counter++ and counter-- are not ATOMIC
  [Diagram: the Producer’s E-box executes counter++ while the Consumer’s E-box executes counter--, both against the same counter variable held in a shared S-box.]

  11. Example of a Race Condition
  [Diagram: two processes, P1 on CPU1 and P2 on CPU2, share a memory location X, initially X == 2.]
      One process executes X = X + 1:   Load X, R1;  Inc R1;  Store X, R1
      The other executes X = X - 1:     Load X, R2;  Dec R2;  Store X, R2
  What values can X end up with under interleaved execution?

  12. Race Condition • We would arrive at this incorrect state • because we allowed both processes to manipulate the variable counter concurrently. • Race Condition • Several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the access takes place • To prevent the race condition • We need to ensure that only one process at a time can manipulate the variable counter • We require that the processes be synchronized in some way

  13. Critical-Section (CS) Problem • Consider a system of n processes {P0, P1, …, Pn-1} • Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so forth. • The system requires that no two processes execute in their critical sections at the same time. • Solution to the critical-section problem • Design a protocol that the processes can use to cooperate. • Each process must request permission to enter its CS. • The section of code implementing this request is the entry section. • The CS may be followed by an exit section. • The remaining code is the remainder section.

  14. Critical-Section Problem • The general structure of a typical process Pi:

  do {
      entry section
      critical section
      exit section
      remainder section
  } while (TRUE);

  15. Solution to Critical-Section Problem • A solution to the critical-section problem must satisfy the following three requirements: 1. Mutual Exclusion - If process Pi is executing in its critical section, then no other processes can be executing in their critical sections 2. Progress - If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely – deadlock-free condition 3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted – starvation-free condition • Assume that each process executes at a nonzero speed • No assumption concerning relative speed of the n processes

  16. Peterson’s Solution for the critical-section problem • Two-process {Pi, Pj} solution • Software-based solution • Assume that the LOAD and STORE instructions are atomic; that is, they cannot be interrupted. • The two processes share two variables: • int turn; • boolean flag[2]; • The variable turn indicates whose turn it is to enter the critical section. • The flag array indicates whether a process is ready to enter the critical section: flag[i] = true implies that process Pi is ready.

  17. Algorithm for Process Pi

  do {
      flag[i] = TRUE;                    // entry section
      turn = j;
      while (flag[j] && turn == j)
          ;
      CRITICAL SECTION
      flag[i] = FALSE;                   // exit section
      REMAINDER SECTION
  } while (TRUE);
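
  The entry/exit protocol above can be written directly in C. The following is a minimal sketch (not from the slides): it uses C11 atomics so that the loads and stores behave like the atomic LOAD/STORE instructions the slides assume (plain variables could be reordered by the compiler or CPU on modern hardware). The wrapper names enter_region()/leave_region() are illustrative.

      #include <stdatomic.h>
      #include <stdbool.h>

      static atomic_bool flag[2] = {false, false};  /* flag[i]: Pi wants to enter */
      static atomic_int  turn    = 0;               /* whose turn to defer to     */

      void enter_region(int i) {                    /* i is 0 or 1 */
          int j = 1 - i;
          atomic_store(&flag[i], true);             /* I am ready                 */
          atomic_store(&turn, j);                   /* give the other the turn    */
          while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
              ;                                     /* busy wait (entry section)  */
      }

      void leave_region(int i) {
          atomic_store(&flag[i], false);            /* exit section               */
      }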

  18. Peterson’s Solution satisfies the 3 conditions • Mutual Exclusion • Pi enters its critical section only if either flag[j]==false or turn==i. • Pi enters its critical section with flag[i]==true. • Hence Pi and Pj cannot both be in their critical sections at the same time. • Progress • Pi can be stuck only if both flag[j]==true and turn==j, and since turn is either i or j, the two processes cannot both be stuck. • Bounded Waiting • Pi will enter the critical section after at most one entry by Pj.

  19. What is the problem in each Pi below?

  Algorithm 1 (turn only):
  do {
      while (turn != i)
          ;
      CRITICAL SECTION
      turn = j;
      REMAINDER SECTION
  } while (TRUE);
  Satisfies: Mutual Exclusion. Does not satisfy: Progress (if it is Pj’s turn but Pj does not want to enter, Pi cannot proceed), Bounded Waiting.

  Algorithm 2 (flag only):
  do {
      flag[i] = true;
      while (flag[j])
          ;
      CRITICAL SECTION
      flag[i] = false;
      REMAINDER SECTION
  } while (TRUE);
  Satisfies: Mutual Exclusion. Does not satisfy: Progress (both processes can set their flags and then wait on each other forever), Bounded Waiting.

  20. Dekker’s Algorithm – two-process solution

  do {
      flag[i] = TRUE;
      while (flag[j]) {
          if (turn == j) {
              flag[i] = FALSE;
              while (turn == j)
                  ;
              flag[i] = TRUE;
          }
      }
      CRITICAL SECTION
      turn = j;
      flag[i] = FALSE;
      REMAINDER SECTION
  } while (TRUE);

  21. Synchronization Hardware • Many systems provide hardware support for critical-section code • Uniprocessors could simply disable interrupts • The currently running code would then execute without preemption • Generally too inefficient on multiprocessor systems • Operating systems that rely on this are not broadly scalable • Modern machines provide special atomic hardware instructions • Atomic = non-interruptible • Test a memory word and set its value: TestAndSet() • Swap the contents of two memory words: Swap()

  22. TestAndSet() Instruction • Definition:

  boolean TestAndSet(boolean *target) {
      boolean rv = *target;
      *target = TRUE;
      return rv;
  }

  • This instruction is atomic. • It is provided by the hardware.

  23. Solution using TestAndSet() • Shared Boolean variable lock, initialized to FALSE. • Solution for Mutual Exclusion (uses busy waiting):

  do {
      while (TestAndSet(&lock))
          ;   // do nothing (busy waiting)
      CRITICAL SECTION
      lock = FALSE;
      REMAINDER SECTION
  } while (TRUE);
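
  TestAndSet() itself is a hardware instruction, but C11 exposes an equivalent primitive, atomic_flag_test_and_set(). The sketch below expresses the same mutual-exclusion loop as a spinlock; the wrapper names acquire()/release() are illustrative, not from the slides.

      #include <stdatomic.h>

      static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear == FALSE */

      void acquire(void) {
          while (atomic_flag_test_and_set(&lock))
              ;                                     /* busy wait (spinlock) */
      }

      void release(void) {
          atomic_flag_clear(&lock);                 /* lock = FALSE */
      }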

  24. Swap() Instruction • Definition:

  void Swap(boolean *a, boolean *b) {
      boolean temp = *a;
      *a = *b;
      *b = temp;
  }

  • This instruction is atomic. • It is provided by the hardware.

  25. Solution using Swap() • Shared Boolean variable lock, initialized to FALSE. • Each process has a local Boolean variable key. • Solution for Mutual Exclusion (uses busy waiting):

  do {
      key = TRUE;
      while (key == TRUE)
          Swap(&lock, &key);   // busy waiting
      CRITICAL SECTION
      lock = FALSE;
      REMAINDER SECTION
  } while (TRUE);
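
  In C11, atomic_exchange() can play the role of Swap(&lock, &key): it atomically stores a new value into the lock and returns the old one. A hedged sketch with illustrative wrapper names:

      #include <stdatomic.h>
      #include <stdbool.h>

      static atomic_bool lock = false;              /* shared, initially FALSE */

      void acquire(void) {
          bool key = true;                          /* per-process local key   */
          while (key == true)
              key = atomic_exchange(&lock, true);   /* Swap(&lock, &key)       */
      }

      void release(void) {
          atomic_store(&lock, false);               /* lock = FALSE            */
      }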

  26. Solution using TestAndSet() with Bounded Waiting • Shared Boolean array waiting[n] and Boolean variable lock, all initialized to FALSE. • Each process has a local Boolean variable key. • Solution satisfying Bounded Waiting (still uses busy waiting):

  do {
      waiting[i] = TRUE;                    // entry section
      key = TRUE;
      while (waiting[i] && key)
          key = TestAndSet(&lock);          // busy waiting
      waiting[i] = FALSE;
      CRITICAL SECTION
      j = (i + 1) % n;                      // exit section: scan for a waiter
      while ((j != i) && !waiting[j])
          j = (j + 1) % n;
      if (j == i)
          lock = FALSE;                     // no process is waiting
      else
          waiting[j] = FALSE;               // hand the critical section to Pj
      REMAINDER SECTION
  } while (TRUE);

  27. Semaphore • [Image: the flag “semaphore alphabet” used for visual signaling, the origin of the term.]

  28. Semaphore • The hardware-based solutions to the critical-section problem (TestAndSet(), Swap()) are complicated for application programmers to use. • To overcome this difficulty, a semaphore may be used. • A semaphore is a synchronization tool that does not require busy waiting. • Semaphore S is an integer variable. • Two standard operations modify S: wait() and signal() • Originally called P() for wait() and V() for signal() • The modification of the semaphore is atomic. • Less complicated to use.

  29. Semaphore • Definition of wait(S) (uses busy waiting):

  wait (S) {
      while (S <= 0)
          ;   // no-op (busy waiting)
      S--;
  }

  • Definition of signal(S):

  signal (S) {
      S++;
  }

  • All modifications to the semaphore value are atomic: when one process modifies the semaphore value, no other process can simultaneously modify that same semaphore value.
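
  As written, wait(S) tests S and then decrements it in two separate steps, so a faithful busy-waiting implementation must make the check-and-decrement a single atomic action. One way to sketch that in C11 is a compare-and-swap loop; the names bw_sem, bw_init, bw_wait, and bw_signal are illustrative, not from the slides.

      #include <stdatomic.h>

      typedef struct { atomic_int value; } bw_sem;

      void bw_init(bw_sem *s, int initial) {
          atomic_init(&s->value, initial);
      }

      void bw_wait(bw_sem *s) {
          for (;;) {
              int v = atomic_load(&s->value);
              if (v > 0 && atomic_compare_exchange_weak(&s->value, &v, v - 1))
                  return;                    /* decremented atomically */
              /* otherwise spin: busy waiting */
          }
      }

      void bw_signal(bw_sem *s) {
          atomic_fetch_add(&s->value, 1);
      }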

  30. Usage of Semaphore • Counting semaphore • The integer value can range over an unrestricted domain • Ex. 0..10 • Binary semaphore • The integer value can range only between 0 and 1 • Also referred to as a mutex lock • A binary semaphore solves the critical-section problem for multiple processes: • n processes share a semaphore mutex, initialized to 1 • The structure of a process Pi:

  do {
      wait (mutex);
      CRITICAL SECTION
      signal (mutex);
      REMAINDER SECTION
  } while (TRUE);

  31. 1. binary semaphore

  32. 1. binary semaphore sem_wait() corresponds to wait(); sem_post() corresponds to signal().
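
  The code on slides 31–35 appears to have been image-only and did not survive the transcript. As a hedged stand-in, the sketch below shows the mutex pattern from slide 30 written with a POSIX unnamed semaphore, where sem_wait()/sem_post() play wait()/signal(); the worker function and iteration count are illustrative.

      #include <semaphore.h>
      #include <pthread.h>
      #include <stdio.h>

      static sem_t mutex;                 /* binary semaphore, initialized to 1 */
      static int shared = 0;

      static void *worker(void *arg) {
          for (int i = 0; i < 100000; i++) {
              sem_wait(&mutex);           /* entry section */
              shared++;                   /* critical section */
              sem_post(&mutex);           /* exit section */
          }
          return NULL;
      }

      int main(void) {
          pthread_t t1, t2;
          sem_init(&mutex, 0, 1);         /* 0 = shared among threads, value 1 */
          pthread_create(&t1, NULL, worker, NULL);
          pthread_create(&t2, NULL, worker, NULL);
          pthread_join(t1, NULL);
          pthread_join(t2, NULL);
          printf("shared = %d\n", shared);/* 200000 with mutual exclusion */
          sem_destroy(&mutex);
          return 0;
      }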

  33. 1. binary semaphore • Does the previous code satisfy the three requirements of the CS problem? • Mutual exclusion • Progress • Bounded waiting • The third requirement is not guaranteed by default. • It usually depends on the implementation of the wait() function. • E.g., Linux’s sem_wait() does not guarantee the bounded-waiting requirement.

  34. 1. binary semaphore

  35. 1. binary semaphore

  36. Usage of Semaphore • A counting semaphore can be used to control access to a resource consisting of a finite number of instances • The semaphore is initialized to the number of instances available • To use an instance, a process performs wait() • To release an instance, a process performs signal() • When the semaphore is 0, all instances are in use • Counting semaphores can also solve various other synchronization problems • Ex. Two concurrently running processes P1 and P2 • P1 contains a statement S1, P2 contains a statement S2 • S2 must be executed only after S1 has completed • How can this be solved with a semaphore?

  37. Usage of Semaphore • Solution to the ordering problem on the previous slide • Initialization: Semaphore synch = 0; • P1 structure: S1; signal (synch); • P2 structure: wait (synch); S2;
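
  A runnable version of this ordering pattern, sketched with POSIX semaphores and threads (the thread functions and the printed statements standing in for S1 and S2 are illustrative):

      #include <semaphore.h>
      #include <pthread.h>
      #include <stdio.h>

      static sem_t synch;                      /* initialized to 0 below */

      static void *p1(void *arg) {
          printf("S1\n");                      /* statement S1 */
          sem_post(&synch);                    /* signal(synch) */
          return NULL;
      }

      static void *p2(void *arg) {
          sem_wait(&synch);                    /* wait(synch): blocks until S1 is done */
          printf("S2\n");                      /* statement S2 */
          return NULL;
      }

      int main(void) {
          pthread_t t1, t2;
          sem_init(&synch, 0, 0);              /* Semaphore synch = 0 */
          pthread_create(&t2, NULL, p2, NULL); /* start order does not matter */
          pthread_create(&t1, NULL, p1, NULL);
          pthread_join(t1, NULL);
          pthread_join(t2, NULL);
          sem_destroy(&synch);
          return 0;
      }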

  38. 3. counting semaphore

  39. 3. counting semaphore
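
  Slides 38–39 also appear to have been image-only. As a hedged illustration of the counting-semaphore usage described on slide 36, the sketch below guards a pool of N identical resource instances with a POSIX semaphore initialized to N; all names are illustrative.

      #include <semaphore.h>
      #include <stdio.h>

      #define NUM_RESOURCES 3

      static sem_t pool;                        /* initialized to NUM_RESOURCES */

      void use_resource(int id) {
          sem_wait(&pool);                      /* acquire one instance (blocks if none left) */
          printf("task %d is using a resource\n", id);
          sem_post(&pool);                      /* release the instance */
      }

      int main(void) {
          sem_init(&pool, 0, NUM_RESOURCES);    /* value = number of available instances */
          for (int i = 0; i < 5; i++)
              use_resource(i);
          sem_destroy(&pool);
          return 0;
      }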

  40. Semaphore Implementation with Busy Waiting • We must guarantee that no two processes can execute wait() and signal() on the same semaphore at the same time • Thus, the implementation itself becomes a critical-section problem, where the wait and signal code are placed in the critical section • The previous implementations use busy waiting in the critical-section code • While a process is in its critical section, any other process trying to enter must loop continuously in the entry code • This is called a spinlock • Disadvantage: it wastes CPU cycles during wait() • Sometimes it is useful: • No context switch is involved in wait() and signal() • The implementation code is short • There is little busy waiting if the critical section is rarely occupied • However, applications may spend lots of time in critical sections, in which case this is not a good solution.

  41. Semaphore Implementation with no Busy Waiting • To overcome the busy-waiting problem, two operations are used: • block() – place the process invoking the operation on the appropriate waiting queue • wakeup() – remove one of the processes from the waiting queue and place it in the ready queue • When the semaphore value is not positive on executing wait(), the process blocks itself instead of busy waiting • A signal() operation wakes up a waiting process • To implement semaphores under this definition, we define a semaphore as a record; each semaphore has an associated waiting queue:

  typedef struct {
      int value;              // semaphore value
      struct process *list;   // pointer to the PCB list of waiting processes
  } semaphore;

  42. Semaphore Implementation with no Busy Waiting • How do we implement the waiting queue of a semaphore? • [Diagram: a semaphore with value -3 whose list points to a chain of three PCBs of waiting processes.] • A linked list in the semaphore contains the PCBs of the waiting processes • A negative value means the number of waiting processes • A positive value means the number of available resources • The list can use any queuing strategy • To satisfy the bounded-waiting requirement, the queue can be implemented as a FIFO queue • Two pointer variables indicate the head and tail of the PCB list

  43. Semaphore Implementation with no Busy Waiting • Implementation of wait():

  wait (semaphore *S) {
      S->value--;
      if (S->value < 0) {
          add this process to S->list;   // put the PCB into the waiting queue
          block();                       // go from running state to waiting state
      }
  }

  • Implementation of signal():

  signal (semaphore *S) {
      S->value++;
      if (S->value <= 0) {
          remove a process P from S->list;   // select a process from the waiting queue
          wakeup(P);                         // put process P into the ready queue
      }
  }
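
  The slide’s wait()/signal() run inside the kernel and manipulate a PCB list directly. As a user-level approximation (not the slide’s implementation), the same blocking behavior can be sketched with a pthread mutex and condition variable, where the condition variable’s wait queue plays the role of S->list; note that this variant keeps value non-negative instead of counting waiters with negative values. The type and function names (ksem, ksem_wait, ...) are illustrative.

      #include <pthread.h>

      typedef struct {
          int value;
          pthread_mutex_t lock;
          pthread_cond_t  cond;
      } ksem;

      void ksem_init(ksem *s, int value) {
          s->value = value;
          pthread_mutex_init(&s->lock, NULL);
          pthread_cond_init(&s->cond, NULL);
      }

      void ksem_wait(ksem *s) {
          pthread_mutex_lock(&s->lock);
          while (s->value <= 0)                    /* block instead of busy waiting */
              pthread_cond_wait(&s->cond, &s->lock);
          s->value--;
          pthread_mutex_unlock(&s->lock);
      }

      void ksem_signal(ksem *s) {
          pthread_mutex_lock(&s->lock);
          s->value++;
          pthread_cond_signal(&s->cond);           /* wake up one waiting thread */
          pthread_mutex_unlock(&s->lock);
      }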

  44. Deadlock and Starvation • Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes • The use of semaphores with waiting queues may result in deadlock • Let S and Q be two semaphores initialized to 1:

  P0:                 P1:
      wait (S);           wait (Q);
      wait (Q);           wait (S);
      ...                 ...
      signal (S);         signal (Q);
      signal (Q);         signal (S);

  • Starvation – indefinite blocking: a process may never be removed from the semaphore queue in which it is suspended • The implementation of a semaphore with a waiting queue may result in starvation, e.g., when the queue is handled in LIFO (last-in, first-out) order

  45. Classical Problems of Synchronization • Bounded-Buffer Problem • Readers and Writers Problem • Dining-Philosophers Problem

  46. Bounded-Buffer Problem • Multiple producers • each produces an item, stores it in the buffer, and continues • Multiple consumers • each consumes an item from the buffer and continues • The buffer can contain at most N items • Solution with semaphores: • Semaphore mutex initialized to the value 1 • Semaphore full initialized to the value 0 • Semaphore empty initialized to the value N

  47. Bounded-Buffer Problem • The structure of the producer process:

  do {
      // produce an item
      wait (empty);     // wait for a free buffer slot
      wait (mutex);     // enter critical section
      // add the item to the buffer (critical section)
      signal (mutex);   // exit critical section
      signal (full);    // one more item is available
  } while (TRUE);

  48. Bounded-Buffer Problem • The structure of the consumer process:

  do {
      wait (full);      // wait for an available item
      wait (mutex);     // enter critical section
      // remove an item from the buffer (critical section)
      signal (mutex);   // exit critical section
      signal (empty);   // one more buffer slot is free
      // consume the removed item
  } while (TRUE);
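
  Putting the two structures together, here is a minimal runnable sketch with POSIX semaphores and threads. The item count, the printing, and the use of plain integers instead of the item struct from slide 4 are illustrative choices, not part of the slides.

      #include <semaphore.h>
      #include <pthread.h>
      #include <stdio.h>

      #define BUFFER_SIZE 10
      #define ITEMS       20

      static int buffer[BUFFER_SIZE];
      static int in = 0, out = 0;
      static sem_t empty, full, mutex;      /* empty = N, full = 0, mutex = 1 */

      static void *producer(void *arg) {
          for (int i = 0; i < ITEMS; i++) {
              sem_wait(&empty);             /* wait for a free slot */
              sem_wait(&mutex);             /* enter critical section */
              buffer[in] = i;               /* add the item to the buffer */
              in = (in + 1) % BUFFER_SIZE;
              sem_post(&mutex);             /* exit critical section */
              sem_post(&full);              /* one more item available */
          }
          return NULL;
      }

      static void *consumer(void *arg) {
          for (int i = 0; i < ITEMS; i++) {
              sem_wait(&full);              /* wait for an available item */
              sem_wait(&mutex);
              int item = buffer[out];       /* remove the item from the buffer */
              out = (out + 1) % BUFFER_SIZE;
              sem_post(&mutex);
              sem_post(&empty);             /* one more free slot */
              printf("consumed %d\n", item);
          }
          return NULL;
      }

      int main(void) {
          pthread_t p, c;
          sem_init(&empty, 0, BUFFER_SIZE);
          sem_init(&full, 0, 0);
          sem_init(&mutex, 0, 1);
          pthread_create(&p, NULL, producer, NULL);
          pthread_create(&c, NULL, consumer, NULL);
          pthread_join(p, NULL);
          pthread_join(c, NULL);
          return 0;
      }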

  49. Readers-Writers Problem • A data set is shared among a number of concurrent processes • Readers – only read the data set; they do not perform any updates • Writers – can both read and write • Problem – allow multiple readers to read at the same time, but only a single writer may access the shared data at any one time • Shared data: • The data set • Semaphore mutex initialized to 1 • Semaphore wrt initialized to 1 • Integer readcount initialized to 0

  50. Readers-Writers Problem • The structure of a writer process:

  do {
      wait (wrt);       // enter the critical section
      // writing is performed (critical section)
      signal (wrt);     // exit the critical section
  } while (TRUE);
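
  The transcript ends before the slide that shows the reader process. For completeness, the following is a sketch of the standard reader structure for the first readers-writers problem, written in the same pseudocode style and using the shared variables listed on slide 49 (mutex, wrt, readcount); it is reconstructed from the textbook formulation rather than taken from this transcript.

  do {
      wait (mutex);       // protect readcount
      readcount++;
      if (readcount == 1)
          wait (wrt);     // first reader locks out writers
      signal (mutex);

      // reading is performed

      wait (mutex);
      readcount--;
      if (readcount == 0)
          signal (wrt);   // last reader lets writers in
      signal (mutex);
  } while (TRUE);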
