
Concurrency: Mutual Exclusion and Synchronization


Presentation Transcript


1. Concurrency: Mutual Exclusion and Synchronization (Chapter 5)

2. Problems with concurrent execution
• Concurrent processes (or threads) often need to share data (maintained either in shared memory or files) and resources
• If there is no controlled access to shared data, the execution of the processes on these data can interleave
• The results then depend on the order in which the data were modified (nondeterminism)
• A program may give different, and sometimes undesirable, results each time it is executed

3. An example
• Processes P1 and P2 both run this procedure and have access to the same shared variable "a"
• Processes can be interrupted anywhere
• If P1 is interrupted just after the user input and P2 then executes entirely, the character echoed by P1 will be the one read by P2!

  static char a;

  void echo()
  {
    cin >> a;
    cout << a;
  }

4. Global view: possible interleaved execution
• Processes P1 and P2 both execute:

  static char a;

  void echo()
  {
    cin >> a;   // CS
    cout << a;  // CS
  }

• CS (Critical Section): a part of a program whose execution cannot interleave with the execution of other CSs

5. Race conditions
• Situations such as the preceding one, where processes race against each other for access to resources (variables, etc.) and the result depends on the order of access, are called race conditions
• In this example, there is a race on the variable a
• Non-determinism: results don't depend exclusively on the input data; they also depend on timing conditions

6. Other examples
• A counter updated by several processes, if the update requires several instructions (e.g. in machine language)
• Threads working simultaneously on an array, one updating it while the other extracts statistics
• Processes working simultaneously on a database, for example to reserve airplane seats
  • Two travellers could get the same seat...
• In this chapter we normally speak of concurrent processes; the same considerations apply to concurrent threads
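The counter example above can be made concrete. The sketch below (a deterministic simulation, not real concurrency: the load/store helpers and the interleaving order are illustrative choices) replays one bad interleaving of an increment that takes several instructions:

```python
# The shared counter of the first bullet: an increment compiled to a
# load / add / store sequence can lose an update when two processes
# interleave. This replays one bad interleaving deterministically.

counter = 0

def load():
    return counter

def store(value):
    global counter
    counter = value

# Interleaving: P1 loads, P2 loads, P1 stores, P2 stores.
p1_local = load()      # P1 reads 0
p2_local = load()      # P2 also reads 0
store(p1_local + 1)    # P1 writes 1
store(p2_local + 1)    # P2 also writes 1: P1's increment is lost
print(counter)         # 1, even though two increments were executed
```

With a different interleaving (P1's store before P2's load) the result would be 2: the outcome depends on timing, which is exactly the nondeterminism described above.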

7. The critical section problem
• When a process executes code that manipulates shared data (or resources), we say that the process is in a critical section (CS) for that shared data or resource
• CSs can be thought of as sequences of instructions that are "tightly bound", so no other CS on the same data or resource can interleave
• The execution of CSs must be mutually exclusive: at any time, only one process should be allowed to execute in a CS for a given shared data or resource (even with multiple CPUs)
  • The results of manipulating the shared data or resource then no longer depend on unpredictable interleaving
• The CS problem is the problem of providing special sequences of instructions to obtain such mutual exclusion
  • These are sometimes called synchronization mechanisms

8. CS: entry and exit sections
• A process wanting to enter a critical section must ask for permission
• The section of code implementing this request is called the entry section
• The critical section (CS) is followed by an exit section, which opens the possibility of other processes entering their CS
• The remaining code is the remainder section

  repeat
    entry section
    critical section
    exit section
    remainder section
  forever

9. Framework for analysis of solutions
• Each process executes at nonzero speed, but there are no assumptions on the relative speed of processes, nor on their interleaving
• Memory hardware prevents simultaneous access to the same memory location, even with several CPUs
  • This establishes a sort of elementary critical section, on which all other synchronization mechanisms are based

10. Requirements for a valid solution to the critical section problem
• Mutual exclusion
  • At any time, at most one process can be in its critical section (CS)
• Progress
  • Only processes that are not executing in their remainder section are taken into consideration in the decision of who will enter the CS next
  • This decision cannot be postponed indefinitely

11. Requirements for a valid solution to the critical section problem (cont.)
• Bounded waiting
  • After a process P has made a request to enter its CS, there is a limit on the number of times the other processes are allowed to enter their CS before P gets in: no starvation
  • Of course, also no deadlock
• It is difficult to find solutions that satisfy all criteria, so most solutions we will see are defective in some aspects

12. Three types of solutions
• No special instructions (the book calls these software approaches)
  • Algorithms that don't use instructions designed to solve this particular problem
• Hardware solutions
  • Rely on special machine instructions (e.g. a lock bit)
• Operating system and programming language solutions (e.g. Java)
  • Provide specific system calls to the programmer

13. Software solutions
• We consider first the case of 2 processes
  • Algorithms 1 and 2 have problems
  • Algorithm 3 is correct (Peterson's algorithm)
• Notation
  • We start with 2 processes: P0 and P1
  • When presenting process Pi, Pj always denotes the other process (i != j)

14. Algorithm 1, or excessive courtesy: global view

  Process P0:                Process P1:
  repeat                     repeat
    flag[0]:=true;             flag[1]:=true;
    while(flag[1])do{};        while(flag[0])do{};
    CS                         CS
    flag[0]:=false;            flag[1]:=false;
    RS                         RS
  forever                    forever

• If P0 executes flag[0]:=true and then P1 executes flag[1]:=true, both wait forever: deadlock!

15. Algorithm 1, or excessive courtesy
• Keep one boolean variable for each process: flag[0] and flag[1]
• Pi signals that it is ready to enter its CS by flag[i]:=true, but it first gives a chance to the other
• Mutual exclusion and progress OK, but not the deadlock requirement
• If we have the sequence
    P0: flag[0]:=true
    P1: flag[1]:=true
  both processes will wait forever to enter their CS: deadlock
• It could work in other cases!

  Process Pi:
  repeat
    flag[i]:=true;
    while(flag[j])do{};
    CS
    flag[i]:=false;
    RS
  forever

16. Algorithm 2, or strict order: global view

  Process P0:                Process P1:
  repeat                     repeat
    while(turn!=0)do{};        while(turn!=1)do{};
    CS                         CS
    turn:=1;                   turn:=0;
    RS                         RS
  forever                    forever

• Note: turn is a shared variable between the two processes

17. Algorithm 2, or strict order
• The shared variable turn is initialized (to 0 or 1) before executing any Pi
• Pi's critical section is executed iff turn = i
• Pi busy-waits if Pj is in its CS: mutual exclusion is satisfied
• The progress requirement is not satisfied, since the algorithm requires strict alternation of CSs: if a process requires its CS more often than the other, it can't get it

  Process Pi:  // i, j = 0 or 1
  repeat
    while(turn!=i)do{};  // do nothing
    CS
    turn:=j;
    RS
  forever

18. Algorithm 3 (Peterson's algorithm; forget about Dekker): global view

  Process P0:
  repeat
    flag[0]:=true;                  // 0 wants in
    turn:=1;                        // 0 gives a chance to 1
    while(flag[1] and turn=1)do{};
    CS
    flag[0]:=false;                 // 0 no longer wants in
    RS
  forever

  Process P1:
  repeat
    flag[1]:=true;                  // 1 wants in
    turn:=0;                        // 1 gives a chance to 0
    while(flag[0] and turn=0)do{};
    CS
    flag[1]:=false;                 // 1 no longer wants in
    RS
  forever

19. Wait or enter?
• A process i waits if the other process wants in and its turn has come: flag[j] and turn=j
• A process i enters if it wants to enter or its turn has come: flag[i] or turn=i
• In practice, if process i gets to the test, flag[i] is necessarily true, so it's turn that counts

20. Algorithm 3 (Peterson's algorithm)
• Initialization: flag[0]:=flag[1]:=false; turn:= 0 or 1
• Interest in entering the CS is specified by flag[i]:=true, and flag[i]:=false upon exit
• If both processes attempt to enter their CS simultaneously, only one turn value will last

  Process Pi:
  repeat
    flag[i]:=true;                  // want in
    turn:=j;                        // but give a chance...
    while(flag[j] and turn=j)do{};
    CS
    flag[i]:=false;                 // no longer want in
    RS
  forever
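The pseudocode above can be sketched with two Python threads. This is only a demonstration of the logic: CPython's GIL effectively gives the sequentially consistent memory the algorithm assumes, whereas in C on modern hardware these plain loads and stores could be reordered and memory barriers would be required. The shortened switch interval is just to make the busy waits finish quickly.

```python
import sys
import threading

sys.setswitchinterval(0.0005)  # switch threads often so the spin loops make progress

flag = [False, False]  # flag[i]: process i wants to enter its CS
turn = 0               # whose turn it is to yield
counter = 0            # shared data protected by the CS
N = 2000               # CS entries per thread

def worker(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True                 # i wants in
        turn = j                       # ...but gives the other a chance
        while flag[j] and turn == j:   # entry section: busy wait
            pass
        counter += 1                   # critical section
        flag[i] = False                # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)
```

If mutual exclusion holds, no increment is lost and the final count is exactly 2 * N.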

21. Algorithm 3: proof of correctness
• Mutual exclusion holds, since P0 and P1 could both be in their CS only if flag[0] = flag[1] = true and turn = i for each Pi (impossible: turn has a single value)
• We now prove that the progress and bounded waiting requirements are satisfied:
  • Pi can fail to enter its CS only if it is stuck in the while loop with condition flag[j] = true and turn = j
  • If Pj is not ready to enter its CS, then flag[j] = false and Pi can then enter its CS

22. Algorithm 3: proof of correctness (cont.)
• If Pj has set flag[j]=true and is in its while loop, then either turn=i or turn=j
• If turn=i, then Pi enters its CS; if turn=j, then Pj enters its CS, but will reset flag[j]=false on exit, allowing Pi to enter
• If Pj then has time to set flag[j]=true again, it must also set turn=i
• Since Pi does not change the value of turn while stuck in its while loop, Pi will enter its CS after at most one CS entry by Pj (bounded waiting)

23. What about process failures?
• If all 3 criteria (ME, progress, bounded waiting) are satisfied, then a valid solution provides robustness against the failure of a process in its remainder section (RS)
  • Failure in the RS is just like having an infinitely long RS
• However, the solution given does not provide robustness against a process failing in its critical section (CS)
  • A process Pi that fails in its CS without releasing it causes a system failure
  • A failing process would need to inform the others, which is perhaps difficult!

24. Extensions to more than 2 processes
• Peterson's algorithm can be generalized to more than 2 processes
• But in this case there is a more elegant solution...

25. n-process solution: the bakery algorithm (not in book)
• Before entering its CS, each Pi receives a number; the holder of the smallest number enters the CS (as in bakeries, ice-cream stores...)
• When Pi and Pj receive the same number: if i<j then Pi is served first, else Pj is served first
• Pi resets its number to 0 in the exit section
• Notation:
  • (a,b) < (c,d) if a < c, or if a = c and b < d
  • max(a0,...,ak) is a number b such that b >= ai for i = 0,...,k

26. The bakery algorithm (cont.)
• Shared data:
  • choosing: array[0..n-1] of boolean; initialized to false
  • number: array[0..n-1] of integer; initialized to 0
• Correctness relies on the following fact: if Pi is in its CS and Pk has already chosen its number[k] != 0, then (number[i],i) < (number[k],k)
  • But the proof is somewhat tricky...

27. The bakery algorithm (cont.)

  Process Pi:
  repeat
    choosing[i]:=true;
    number[i]:=max(number[0]..number[n-1])+1;
    choosing[i]:=false;
    for j:=0 to n-1 do {
      while (choosing[j]) {};
      while (number[j]!=0 and (number[j],j)<(number[i],i)) {};
    }
    CS
    number[i]:=0;
    RS
  forever
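A thread-based sketch of the algorithm above, with the same caveat as for Peterson: CPython's GIL stands in for the sequentially consistent memory the pseudocode assumes, and the iteration counts and switch interval are choices made here to keep the busy-waiting demo short.

```python
import sys
import threading

sys.setswitchinterval(0.0005)  # switch threads often so busy waits make progress

n = 3
choosing = [False] * n  # choosing[i]: Pi is currently picking its number
number = [0] * n        # number[i]: Pi's ticket (0 = not interested)
counter = 0             # shared data protected by the CS
ITER = 300              # CS entries per thread

def worker(i):
    global counter
    for _ in range(ITER):
        choosing[i] = True
        number[i] = max(number) + 1        # take a ticket (not atomic: ties possible)
        choosing[i] = False
        for j in range(n):
            while choosing[j]:             # wait while Pj is picking its number
                pass
            while number[j] != 0 and (number[j], j) < (number[i], i):
                pass                       # wait while Pj has priority
        counter += 1                       # critical section
        number[i] = 0                      # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in range(n)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)
```

Note that Python's tuple comparison (number[j], j) < (number[i], i) directly implements the (a,b) < (c,d) ordering defined on slide 25.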

28. Drawbacks of software solutions
• Complicated logic!
• Processes requesting to enter their critical section are busy waiting (consuming processor time needlessly)
• If CSs are long, it would be more efficient to block the waiting processes (just as if they had requested I/O)
• We will now look at solutions that require hardware instructions
  • The first ones will not satisfy the criteria above

29. A simple hardware solution: interrupt disabling
• Observe that in a uniprocessor system, the only way a program can interleave with another is if it gets interrupted; so simply disable interrupts
• Mutual exclusion is preserved, but efficiency of execution is degraded: while in the CS, we cannot interleave execution with other processes that are in their RS
• On a multiprocessor: mutual exclusion is not achieved
• Generally not an acceptable solution

  Process Pi:
  repeat
    disable interrupts
    critical section
    enable interrupts
    remainder section
  forever

30. Hardware solutions: special machine instructions
• Normally, access to a memory location excludes other accesses to that same location
• Extension: machine instructions that perform several actions atomically (indivisibly) on the same memory location (e.g. reading and testing)
• The execution of such instruction sequences is mutually exclusive (even with multiple CPUs)
• They can be used to provide mutual exclusion, but more complex algorithms are needed to satisfy the other requirements of the CS problem

31. The test-and-set instruction
• A C description of test-and-set (the whole instruction executes atomically: it is not interruptible):

  bool testset(int& i)
  {
    if (i==0) {
      i=1;
      return true;
    } else {
      return false;
    }
  }

• An algorithm that uses testset for mutual exclusion: the shared variable b is initialized to 0, and only the first Pi that sets b enters its CS

  Process Pi:
  repeat
    repeat{} until testset(b);
    CS
    b=0;
    RS
  forever
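Python has no atomic test-and-set instruction, so the sketch below models the hardware's indivisibility with a small internal lock; everything around it follows the slide's algorithm. The thread and iteration counts are arbitrary demo choices.

```python
import sys
import threading

sys.setswitchinterval(0.0005)  # switch threads often so the spin loop makes progress

_atomic = threading.Lock()  # models the indivisibility of the hardware instruction
b = 0                       # shared lock word, 0 = free
counter = 0                 # shared data protected by the CS

def testset():
    """Atomically: if b == 0, set b = 1 and return True, else return False."""
    global b
    with _atomic:
        if b == 0:
            b = 1
            return True
        return False

def worker():
    global counter, b
    for _ in range(1000):
        while not testset():   # entry section: spin until the lock is grabbed
            pass
        counter += 1           # critical section
        b = 0                  # exit section: release the lock

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)
```

Mutual exclusion holds (the count is exactly 3 * 1000), but which spinning thread wins next is arbitrary, illustrating the bounded-waiting problem discussed on the next slide.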

32. The test-and-set instruction (cont.)
• Mutual exclusion is assured: if Pi enters its CS, the other Pj are busy waiting
  • But busy waiting is not a good approach!
• Other algorithms are needed to satisfy the additional criteria
• When Pi exits its CS, the selection of the Pj that will enter next is arbitrary: no bounded waiting, hence starvation is possible
• Still a bit too complicated to use in everyday code

33. The exchange instruction: a similar idea
• Processors (e.g. the Pentium) often provide an atomic xchg(a,b) instruction that swaps the contents of a and b
• But xchg(a,b) suffers from the same drawbacks as test-and-set

34. Using xchg for mutual exclusion
• The shared variable lock is initialized to 0
• Each Pi has a local variable key
• The only Pi that can enter its CS is the one that finds lock=0
• This Pi excludes all the other Pj by setting lock to 1

  Process Pi:
  repeat
    key:=1;
    repeat exchg(key,lock) until key=0;
    CS
    lock:=0;
    RS
  forever
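As with test-and-set, the atomic exchange can only be modelled in Python: the internal lock below stands in for the instruction's indivisibility, and the surrounding loop follows the slide's algorithm. Thread and iteration counts are demo choices.

```python
import sys
import threading

sys.setswitchinterval(0.0005)  # switch threads often so the spin loop makes progress

_atomic = threading.Lock()  # models the indivisibility of the hardware xchg
lock_word = 0               # the shared lock variable; 0 = free
counter = 0                 # shared data protected by the CS

def xchg(key):
    """Atomically swap key with the shared lock word; return key's new value."""
    global lock_word
    with _atomic:
        lock_word, key = key, lock_word
    return key

def worker():
    global counter, lock_word
    for _ in range(1000):
        key = 1
        while key != 0:
            key = xchg(key)  # spin until we swap a 0 out of lock_word
        counter += 1         # critical section: we hold the 0, lock_word holds 1
        lock_word = 0        # exit section: release

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)
```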

35. Solutions based on system calls
• Appropriate machine language instructions are the basis of all practical solutions, but they are too elementary
• We need instructions that allow us to structure code better
• We also need better facilities for preventing common errors, such as deadlock, starvation, etc.
• So there is a need for instructions at a higher level; such instructions are implemented as system calls

36. Semaphores
• A semaphore S is an integer variable that, apart from initialization, can only be accessed through 2 atomic and mutually exclusive operations:
  • wait(S)
  • signal(S)
• It is shared by all processes interested in the same resource, shared variable, etc.
• Semaphores will be presented in two steps:
  • busy-wait semaphores
  • semaphores using waiting queues
• A distinction is made between counting semaphores and binary semaphores; we'll see the applications

37. Busy-wait semaphores (spinlocks)
• The simplest semaphores
• Useful when critical sections last for a short time, or when we have lots of CPUs
• S is initialized to a positive value (to allow someone in at the beginning)

  wait(S):
    while S<=0 do{};  // waits if the # of processes that can enter is 0 or negative
    S--;

  signal(S):
    S++;              // increases by 1 the number of processes that can enter
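A sketch of this busy-wait semaphore as a Python class. The internal lock models the atomicity of the test-and-decrement (and of signal), while the loop around it stays interruptible, matching the atomicity discussion on the next slides; the usage demo and its counts are illustrative choices.

```python
import sys
import threading

sys.setswitchinterval(0.0005)  # switch threads often so the busy wait makes progress

class SpinSemaphore:
    """Busy-wait semaphore: test-and-decrement is atomic, the loop is not."""

    def __init__(self, value):
        self._atomic = threading.Lock()  # models the atomic section of wait/signal
        self.S = value

    def wait(self):
        while True:
            with self._atomic:   # atomic: test S and decrement together
                if self.S > 0:
                    self.S -= 1
                    return
            # the loop itself is interruptible: other threads may run here

    def signal(self):
        with self._atomic:       # atomic
            self.S += 1

# Usage: S initialized to 1 gives mutual exclusion.
S = SpinSemaphore(1)
counter = 0

def worker():
    global counter
    for _ in range(1000):
        S.wait()
        counter += 1   # critical section
        S.signal()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)
```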

38. Atomicity aspects
• (Figure: flowchart of wait(S): test S<=0, then S--, with the test and decrement marked as atomic)
• The testing-and-decrementing sequence in wait is atomic, but not the loop
• signal is atomic
• No two processes can be allowed to execute these atomic sections simultaneously
• This can be implemented by some of the mechanisms discussed earlier

39. Atomicity and interruptibility
• (Figure: the same flowchart, showing that S++ and the test-and-decrement are atomic, while the loop back after a failed test is interruptible by another process)
• The loop is not atomic, to allow another process to interrupt the wait

40. Using semaphores to solve critical section problems
• For n processes: initialize S to 1; then only 1 process is allowed into the CS (mutual exclusion)
• To allow k processes into the CS, initialize S to k
• So semaphores can allow several processes into the CS!

  Process Pi:
  repeat
    wait(S);
    CS
    signal(S);
    RS
  forever

41. Semaphores: the global view
• Initialize S to 1

  Process P0:        Process P1:
  repeat             repeat
    wait(S);           wait(S);
    CS                 CS
    signal(S);         signal(S);
    RS                 RS
  forever            forever
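The same pattern can be run directly with Python's built-in counting semaphore, where acquire plays the role of wait and release the role of signal (the thread and iteration counts are demo choices):

```python
import threading

S = threading.Semaphore(1)  # initialized to 1: mutual exclusion
counter = 0                 # shared data protected by the CS

def worker():
    global counter
    for _ in range(10000):
        S.acquire()     # wait(S)
        counter += 1    # critical section
        S.release()     # signal(S)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: no lost updates
```

Initializing the semaphore to k instead of 1 would let up to k threads into the critical section at once, as noted on the previous slide.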

42. Using semaphores to synchronize processes
• We have 2 processes, P1 and P2; statement S1 in P1 needs to be performed before statement S2 in P2
• Define a semaphore "synch", initialized to 0
• Proper synchronization is achieved by having in P1:
    S1;
    signal(synch);
  and in P2:
    wait(synch);
    S2;
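This ordering pattern can be demonstrated with a semaphore initialized to 0. The sleep in P1 and the event list are demo devices (not part of the slide's scheme) used to show that P2 really waits even when it starts first:

```python
import threading
import time

synch = threading.Semaphore(0)  # 0 permits: wait(synch) blocks until signal(synch)
events = []

def p1():
    time.sleep(0.05)       # even if P1 is slow...
    events.append("S1")    # statement S1
    synch.release()        # signal(synch)

def p2():
    synch.acquire()        # wait(synch)
    events.append("S2")    # statement S2 runs only after S1

t2 = threading.Thread(target=p2)
t1 = threading.Thread(target=p1)
t2.start()                 # start P2 first to show that it really waits
t1.start()
t1.join()
t2.join()
print(events)  # ['S1', 'S2']
```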

43. Semaphores: observations

  wait(S):
    while S<=0 do{};
    S--;

• When S>=0:
  • the number of processes that can execute wait(S) without being blocked is S
  • S processes can enter the CS
  • if S>1 is possible, then a second semaphore is necessary to implement mutual exclusion (see the producer/consumer example)
• When S>1, the process that enters the CS is the one that tests S first (random selection)
  • this won't be true in the following solution
• When S<0: the number of processes waiting on S is |S|

44. Avoiding busy waiting in semaphores
• To avoid busy waiting: when a process has to wait for a semaphore to become greater than 0, it is put in a queue of processes blocked waiting for this event
• Queues can be FIFO, priority, etc.: the OS controls the order in which processes enter the CS
• wait and signal become system calls to the OS (just like I/O calls)
• There is a queue for every semaphore, just as there is a queue for each I/O unit
• A process waiting on a semaphore is in the waiting state

45. Semaphores without busy waiting
• A semaphore can be seen as a record (structure):

  type semaphore = record
    count: integer;
    queue: list of process
  end;
  var S: semaphore;

• When a process must wait for a semaphore S, it is blocked and put on the semaphore's queue
• The signal operation removes (by a fair scheduling policy such as FIFO) one process from the queue and puts it in the list of ready processes

46. Semaphore operations (atomic)

  wait(S):
    S.count--;
    if (S.count<0) {
      block this process
      place this process in S.queue
    }

  signal(S):
    S.count++;
    if (S.count<=0) {   // S was negative: queue nonempty
      remove a process P from S.queue
      place process P on the ready list
    }

• Each operation executes atomically
• The value to which S.count is initialized depends on the application
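A sketch of this record in Python, where a per-waiter Event stands in for the OS's "block this process / put it on the ready list" and a lock models the atomicity of the two operations. This is an illustrative user-level model, not how an OS actually implements it; the usage demo and its counts are arbitrary.

```python
import threading
from collections import deque

class QueueSemaphore:
    """count may go negative (|count| = number of waiters); FIFO wakeups."""

    def __init__(self, count):
        self.count = count
        self.queue = deque()              # FIFO queue of blocked waiters
        self._atomic = threading.Lock()   # models the operations' atomicity

    def wait(self):
        self._atomic.acquire()
        self.count -= 1
        if self.count < 0:
            ev = threading.Event()        # stands in for "block this process"
            self.queue.append(ev)         # place this process in S.queue
            self._atomic.release()
            ev.wait()                     # sleep (no busy waiting) until signalled
        else:
            self._atomic.release()

    def signal(self):
        with self._atomic:
            self.count += 1
            if self.count <= 0:           # count was negative: queue nonempty
                self.queue.popleft().set()  # move the first waiter to "ready"

# Usage: count initialized to 1 gives a blocking mutex.
S = QueueSemaphore(1)
counter = 0

def worker():
    global counter
    for _ in range(5000):
        S.wait()
        counter += 1   # critical section
        S.signal()

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)
```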

47. (Figure showing the relationship between queue content and the value of S)

48. Semaphores: implementation
• wait and signal themselves contain critical sections! How to implement them?
• Note that they are very short critical sections
• Solutions:
  • uniprocessor: disable interrupts during these operations (i.e. for a very short period); this does not work on a multiprocessor machine
  • multiprocessor: use some busy-waiting scheme, hardware or software; the busy waiting shouldn't last long

49. The producer/consumer problem
• A producer process produces information that is consumed by a consumer process
  • Ex. 1: a print program produces characters that are consumed by a printer
  • Ex. 2: an assembler produces object modules that are consumed by a loader
• We need a buffer to hold items that are produced and eventually consumed
• A common paradigm for cooperating processes (see Unix pipes)

50. P/C: unbounded buffer
• We assume first an unbounded buffer consisting of a linear array of elements
• in points to the next item to be produced
• out points to the next item to be consumed
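A sketch of the unbounded-buffer case with Python semaphores (the transcript ends before giving the slides' own code, so the two-semaphore structure here anticipates the standard pattern: a counting semaphore for items produced but not yet consumed, and a binary semaphore protecting the buffer; a Python list plays the role of the array with its in/out pointers):

```python
import threading

buffer = []                     # unbounded buffer; append = "in", pop(0) = "out"
mutex = threading.Semaphore(1)  # binary semaphore: mutual exclusion on the buffer
items = threading.Semaphore(0)  # counts items produced but not yet consumed
consumed = []
N = 1000

def producer():
    for i in range(N):
        mutex.acquire()
        buffer.append(i)        # produce: write at position "in"
        mutex.release()
        items.release()         # signal: one more item available

def consumer():
    for _ in range(N):
        items.acquire()         # wait until at least one item exists
        mutex.acquire()
        consumed.append(buffer.pop(0))  # consume: read at position "out"
        mutex.release()

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
c.start()                       # consumer may start first: it simply waits
p.start()
p.join()
c.join()
print(consumed == list(range(N)))  # True: items consumed in production order
```

Because the buffer is unbounded, only the consumer ever waits; a bounded buffer would add a second counting semaphore for free slots.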
