
Synchronization


Presentation Transcript


  1. Synchronization ..or: the trickiest bit of this course.

  2. Announcements • Homework 1 grades & solutions are now in CMS • Hardcopies can be had in the Homework Handback Room (Upson 360, 10-12am, 2-4pm) • Regrading requests, policies, etc.

  3. Oh dear lord… (out of 75!!!)

  4. Threads share global memory • When a process contains multiple threads, they have • Private registers and stack memory (the context switching mechanism needs to save and restore registers when switching from thread to thread) • Shared access to the remainder of the process “state”

  5. Two threads, one counter • A popular web server uses multiple threads to speed things up • Simple shared-state error: each thread increments a shared counter to track the number of hits • What happens when two threads execute concurrently? … hits = hits + 1; … (some slides taken from Mendel Rosenblum's lecture at Stanford)
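The lost-update bug above can be demonstrated directly. This is a minimal sketch using POSIX threads (not part of the slides): two threads each perform the unprotected `hits = hits + 1` many times. Because the read, add, and write can interleave, the final count is often less than the total number of increments. (Strictly speaking this is a data race, i.e. undefined behavior in C; it is written this way deliberately to exhibit the bug.)

```c
#include <pthread.h>

#define ITERS 100000

static volatile long hits;           /* shared counter, deliberately unprotected */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERS; i++)
        hits = hits + 1;             /* read-modify-write: NOT atomic */
    return NULL;
}

/* Run two incrementing threads and return the final count.
   With no synchronization, updates can be lost, so the result
   is often below 2*ITERS. */
long run_racy_counter(void) {
    pthread_t t1, t2;
    hits = 0;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return hits;
}
```

Whether any updates are actually lost on a given run depends on the schedule, which is exactly the point of the next slides.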

  6. Shared counters • Possible result: lost update! • One other possible result: everything works. Difficult to debug • Called a “race condition” • Timeline (hits = 0 initially):
     T1: read hits (0)
     T2: read hits (0)
     T1: hits = 0 + 1
     T2: hits = 0 + 1
     Final: hits = 1 (one update lost)

  7. Race conditions • Def: a timing-dependent error involving shared state • Whether it happens depends on how the threads are scheduled • In effect, once thread A starts doing something, it needs to “race” to finish it, because if thread B looks at the shared memory region before A is done, A’s change will be lost. • Hard to detect: all possible schedules have to be safe, and the number of possible schedule permutations is huge • Some schedules are bad? Some work only sometimes? Failures are intermittent • Timing-dependent = small changes can hide the bug

  8. Scheduler assumptions
     Process a: while(i < 10) i = i + 1; print “A won!”;
     Process b: while(i > -10) i = i - 1; print “B won!”;
     If i is shared, and initialized to 0 • Who wins? • Is it guaranteed that someone wins? • What if both threads run on identical-speed CPUs, executing in parallel?

  9. Scheduler Assumptions • Normally we assume that • A scheduler always gives every executable thread opportunities to run • In effect, each thread makes finite progress • But schedulers aren’t always fair • Some threads may get more chances than others • To reason about worst case behavior we sometimes think of the scheduler as an adversary trying to “mess up” the algorithm

  10. Critical Section Goals • Threads do some stuff but eventually might try to access shared data • Each thread executes: CSEnter(); critical section; CSExit();

  11. Critical Section Goals • Perhaps they loop (perhaps not!) • Each thread repeatedly executes: CSEnter(); critical section; CSExit();

  12. Critical Section Goals • We would like • Safety: No more than one thread can be in a critical section at any time. • Liveness: A thread that is seeking to enter the critical section will eventually succeed • Fairness: If two threads are both trying to enter a critical section, they have equal chances of success • … in practice, fairness is rarely guaranteed

  13. Solving the problem • A first idea: have a boolean flag, inside. Initially false. CSEnter() { while(inside) continue; inside = true; } CSExit() { inside = false; } • Now ask: is this safe? Live? Fair? • Code is unsafe: thread 0 could finish the while test when inside is false, but thread 1 might call CSEnter() before 0 can set inside to true!

  14. Solving the problem: Take 2 • A different idea (assumes just two threads): have a boolean flag per thread, inside[i]. Initially false. CSEnter(int i) { inside[i] = true; while(inside[J]) continue; } CSExit(int i) { inside[i] = false; } • Now ask: is this safe? Live? Fair? • Code isn’t live: with bad luck, both threads could loop forever, with 0 looking at 1, and 1 looking at 0

  15. Solving the problem: Take 3 • How about introducing a turn variable? CSEnter(int i) { while(turn != i) continue; } CSExit(int i) { turn = J; } • Now ask: is this safe? Live? Fair? • Code isn’t live: thread 1 can’t enter unless thread 0 did first, and vice versa. But perhaps one thread needs to enter many times and the other fewer times, or not at all

  16. Dekker’s Algorithm (1965)
     CSEnter(int i) {
       inside[i] = true;
       while(inside[J]) {
         if (turn == J) {
           inside[i] = false;
           while(turn == J) continue;
           inside[i] = true;
         }
       }
     }
     CSExit(int i) {
       turn = J;
       inside[i] = false;
     }
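A runnable sketch of Dekker's algorithm (not from the slides): the flags and turn are declared as C11 seq_cst atomics, because on modern compilers and CPUs the plain variables shown on the slide can be reordered and break mutual exclusion. Two threads hammer a plain counter inside the critical section; the final count equals the total iterations only if mutual exclusion held.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>

static atomic_bool inside[2];
static atomic_int  turn;
static long        count;            /* protected by the critical section */

static void cs_enter(int i) {
    int j = 1 - i;
    atomic_store(&inside[i], true);
    while (atomic_load(&inside[j])) {
        if (atomic_load(&turn) == j) {
            atomic_store(&inside[i], false);
            while (atomic_load(&turn) == j)
                ;                    /* busy-wait until it is our turn */
            atomic_store(&inside[i], true);
        }
    }
}

static void cs_exit(int i) {
    atomic_store(&turn, 1 - i);
    atomic_store(&inside[i], false);
}

static void *worker(void *arg) {
    int id = *(int *)arg;
    for (int n = 0; n < 50000; n++) {
        cs_enter(id);
        count = count + 1;           /* would lose updates without mutual exclusion */
        cs_exit(id);
    }
    return NULL;
}

/* Returns 100000 iff the critical section really was exclusive. */
long run_dekker(void) {
    pthread_t t[2];
    int ids[2] = {0, 1};
    count = 0;
    for (int i = 0; i < 2; i++) pthread_create(&t[i], NULL, worker, &ids[i]);
    for (int i = 0; i < 2; i++) pthread_join(t[i], NULL);
    return count;
}
```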

  17. Napkin analysis of Dekker’s algorithm: • Safety: no process will enter its CS without setting its inside flag. Every process checks the other process’s inside flag after setting its own. If both are set, the turn variable is used to allow only one process to proceed. • Liveness: the turn variable is only consulted when both processes are using, or trying to use, the resource • Fairness: the turn variable ensures alternating access to the resource when both are competing for it

  18. Peterson’s Algorithm (1981) • Simple is good!!
     CSEnter(int i) {
       inside[i] = true;
       turn = J;
       while(inside[J] && turn == J) continue;
     }
     CSExit(int i) {
       inside[i] = false;
     }
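The same executable treatment works for Peterson's algorithm. Again this is a sketch, not slide code: the slide's plain variables are replaced with C11 seq_cst atomics, since without those fences the store to inside[i] and the load of inside[J] can be reordered and the algorithm fails on real hardware.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>

static atomic_bool inside[2];
static atomic_int  turn;
static long        count;

static void cs_enter(int i) {
    int j = 1 - i;
    atomic_store(&inside[i], true);
    atomic_store(&turn, j);                 /* politely give away the turn */
    while (atomic_load(&inside[j]) && atomic_load(&turn) == j)
        ;                                   /* busy-wait */
}

static void cs_exit(int i) {
    atomic_store(&inside[i], false);
}

static void *worker(void *arg) {
    int id = *(int *)arg;
    for (int n = 0; n < 50000; n++) {
        cs_enter(id);
        count++;                            /* the protected shared update */
        cs_exit(id);
    }
    return NULL;
}

/* Returns 100000 iff mutual exclusion held throughout. */
long run_peterson(void) {
    pthread_t t[2];
    int ids[2] = {0, 1};
    count = 0;
    for (int i = 0; i < 2; i++) pthread_create(&t[i], NULL, worker, &ids[i]);
    for (int i = 0; i < 2; i++) pthread_join(t[i], NULL);
    return count;
}
```

Note how much shorter the entry protocol is than Dekker's: setting turn to the other thread before waiting collapses the retry loop into a single condition.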

  19. Napkin analysis of Peterson’s algorithm: • Safety (by contradiction): • Assume that both processes (Alan and Shay) are in their critical section (and thus have their inside flags set). Since only one, say Alan, can have the turn, the other (Shay) must have reached the while() test before Alan set his inside flag. • However, after setting his inside flag, Alan gave away the turn to Shay. Shay has already changed the turn and cannot change it again, contradicting our assumption. Liveness & Fairness => the turn variable.

  20. Can we generalize to many threads? • The obvious approach won’t work: CSEnter(int i) { inside[i] = true; for(J = 0; J < N; J++) while(inside[J] && turn == J) continue; } CSExit(int i) { inside[i] = false; } • Issue: whose turn is next?

  21. Bakery “concept” • Think of a popular store with a crowded counter, perhaps the pastry shop in Montreal’s fancy market • People take a ticket from a machine • If nobody is waiting, tickets don’t matter • When several people are waiting, ticket order determines order in which they can make purchases

  22. Bakery Algorithm: “Take 1” • int ticket[N]; int next_ticket; CSEnter(int i) { ticket[i] = ++next_ticket; for(J = 0; J < N; J++) while(ticket[J] && ticket[J] < ticket[i]) continue; } CSExit(int i) { ticket[i] = 0; } • Oops… access to next_ticket is itself a race on shared state!

  23. Bakery Algorithm: “Take 2” • int ticket[N]; CSEnter(int i) { ticket[i] = max(ticket[0], … ticket[N-1])+1; for(J = 0; J < N; J++) while(ticket[J] && ticket[J] < ticket[i]) continue; } CSExit(int i) { ticket[i] = 0; } • Just add 1 to the max! • Oops… two threads could pick the same value!

  24. Bakery Algorithm: “Take 3” • If i and J pick the same ticket value, ids break the tie: (ticket[J] < ticket[i]) || (ticket[J] == ticket[i] && J < i) • Notation: (B,J) < (A,i) abbreviates (B < A || (B == A && J < i)) to simplify the code, e.g.: (ticket[J],J) < (ticket[i],i)

  25. Bakery Algorithm: “Take 4” • int ticket[N]; CSEnter(int i) { ticket[i] = max(ticket[0], … ticket[N-1])+1; for(J = 0; J < N; J++) while(ticket[J] && (ticket[J],J) < (ticket[i],i)) continue; } CSExit(int i) { ticket[i] = 0; } • Oops… i could look at J while J is still storing its ticket, and J could have a lower id than i.

  26. Bakery Algorithm: Almost final • int ticket[N]; boolean choosing[N] = false;
     CSEnter(int i) {
       choosing[i] = true;
       ticket[i] = max(ticket[0], … ticket[N-1])+1;
       choosing[i] = false;
       for(J = 0; J < N; J++) {
         while(choosing[J]) continue;
         while(ticket[J] && (ticket[J],J) < (ticket[i],i)) continue;
       }
     }
     CSExit(int i) {
       ticket[i] = 0;
     }

  27. Bakery Algorithm: Issues? • What if we don’t know how many threads might be running? • The algorithm depends on having an agreed upon value for N • Somehow would need a way to adjust N when a thread is created or one goes away • Also, technically speaking, ticket can overflow! • Solution: Change code so that if ticket is “too big”, set it back to zero and try again.

  28. Bakery Algorithm: Final • int ticket[N]; boolean choosing[N] = false; /* Important: disable thread scheduling when changing N */
     CSEnter(int i) {
       do {
         ticket[i] = 0;
         choosing[i] = true;
         ticket[i] = max(ticket[0], … ticket[N-1])+1;
         choosing[i] = false;
       } while(ticket[i] >= MAXIMUM);
       for(J = 0; J < N; J++) {
         while(choosing[J]) continue;
         while(ticket[J] && (ticket[J],J) < (ticket[i],i)) continue;
       }
     }
     CSExit(int i) {
       ticket[i] = 0;
     }
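A runnable sketch of the bakery algorithm for a fixed N (not from the slides; the MAXIMUM overflow retry is omitted because ticket values stay tiny here). As with Dekker and Peterson, the shared ticket and choosing arrays are C11 atomics so the stores and loads are not reordered.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>

#define NTHREADS 4
#define ROUNDS   5000

static atomic_int  ticket[NTHREADS];
static atomic_bool choosing[NTHREADS];
static long        count;

static void cs_enter(int i) {
    atomic_store(&choosing[i], true);
    int max = 0;                                 /* ticket[i] = max(...) + 1 */
    for (int j = 0; j < NTHREADS; j++) {
        int t = atomic_load(&ticket[j]);
        if (t > max) max = t;
    }
    atomic_store(&ticket[i], max + 1);
    atomic_store(&choosing[i], false);
    for (int j = 0; j < NTHREADS; j++) {
        while (atomic_load(&choosing[j]))
            ;                                    /* wait while j picks a ticket */
        int tj;
        while ((tj = atomic_load(&ticket[j])) != 0 &&
               (tj < atomic_load(&ticket[i]) ||
                (tj == atomic_load(&ticket[i]) && j < i)))
            ;                                    /* (ticket[J],J) < (ticket[i],i) */
    }
}

static void cs_exit(int i) { atomic_store(&ticket[i], 0); }

static void *worker(void *arg) {
    int id = *(int *)arg;
    for (int n = 0; n < ROUNDS; n++) {
        cs_enter(id);
        count++;                                 /* protected shared update */
        cs_exit(id);
    }
    return NULL;
}

/* Returns NTHREADS*ROUNDS iff mutual exclusion held. */
long run_bakery(void) {
    pthread_t t[NTHREADS];
    int ids[NTHREADS];
    count = 0;
    for (int i = 0; i < NTHREADS; i++) {
        ids[i] = i;
        pthread_create(&t[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < NTHREADS; i++) pthread_join(t[i], NULL);
    return count;
}
```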

  29. How do real systems do it? • Some real systems actually use algorithms such as the bakery algorithm • A good choice where busy-waiting isn’t going to be super-inefficient • For example, if you have enough CPUs so each thread has a CPU of its own • Some systems disable interrupts briefly when calling CSEnter and CSExit • Some use hardware “help”: atomic instructions

  30. Critical Sections with Atomic Hardware Primitives • Share: int lock; Initialize: lock = false; • Process i: while(test_and_set(&lock)); Critical Section; lock = false; • Assumes that test_and_set is compiled to a special hardware instruction that sets the lock and returns the OLD value (true: locked; false: unlocked) • Problem: does not satisfy liveness (bounded waiting) (see book for correct solution)
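C11 exposes exactly this primitive as atomic_flag. A sketch of the spinlock on this slide using it (my helper names acquire/release are illustrative, not from the slides):

```c
#include <stdatomic.h>
#include <pthread.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;  /* starts unlocked (false) */
static long count;

static void acquire(void) {
    /* test_and_set returns the OLD value: spin while it was already true */
    while (atomic_flag_test_and_set(&lock))
        ;
}

static void release(void) {
    atomic_flag_clear(&lock);                /* lock = false */
}

static void *worker(void *arg) {
    (void)arg;
    for (int n = 0; n < 50000; n++) {
        acquire();
        count++;                             /* critical section */
        release();
    }
    return NULL;
}

/* Returns 100000 iff the spinlock provided mutual exclusion. */
long run_spinlock(void) {
    pthread_t t[2];
    count = 0;
    for (int i = 0; i < 2; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 2; i++) pthread_join(t[i], NULL);
    return count;
}
```

As the slide says, this is safe but not fair: an unlucky thread can lose the test_and_set race indefinitely.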

  31. Presenting critical sections to users • CSEnter and CSExit are possibilities • But more commonly, operating systems have offered a kind of locking primitive • We call these semaphores

  32. Semaphores • Non-negative integer with atomic increment and decrement • Integer ‘S’ that (besides init) can only be modified by: • P(S) or S.wait(): decrement, or block if already 0 • V(S) or S.signal(): increment and wake up a process, if any • These operations are atomic • Some systems use the operation wait() instead of P(), and signal() instead of V() • Busy-waiting version: semaphore S; P(S) { while(S ≤ 0) ; S--; } V(S) { S++; }

  33. Semaphores • Non-negative integer with atomic increment and decrement • Integer ‘S’ that (besides init) can only be modified by: • P(S) or S.wait(): decrement or block if already 0 • V(S) or S.signal(): increment and wake up process if any • Can also be expressed in terms of queues:
     semaphore S;
     P(S) {
       if (S ≤ 0) { stop thread, enqueue on wait list, run something else; }
       S--;
     }
     V(S) {
       S++;
       if (wait list isn’t empty) { dequeue and start one process }
     }
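The queue-based pseudocode above maps naturally onto a mutex plus condition variable, which is how a user-level library might build a semaphore. A sketch (assuming POSIX threads; the type name sema and the demo function are mine):

```c
#include <pthread.h>

typedef struct {
    int             S;           /* the semaphore's non-negative value */
    pthread_mutex_t m;
    pthread_cond_t  c;
} sema;

static void sema_init(sema *s, int v) {
    s->S = v;
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->c, NULL);
}

static void P(sema *s) {
    pthread_mutex_lock(&s->m);
    while (s->S <= 0)
        pthread_cond_wait(&s->c, &s->m);   /* block instead of spinning */
    s->S--;
    pthread_mutex_unlock(&s->m);
}

static void V(sema *s) {
    pthread_mutex_lock(&s->m);
    s->S++;
    pthread_cond_signal(&s->c);            /* wake one waiting thread */
    pthread_mutex_unlock(&s->m);
}

/* Demo: use the semaphore (init 1) as a mutex around a counter. */
static sema lock_s;
static long count;

static void *worker(void *arg) {
    (void)arg;
    for (int n = 0; n < 50000; n++) { P(&lock_s); count++; V(&lock_s); }
    return NULL;
}

long run_sema_demo(void) {
    pthread_t t[2];
    sema_init(&lock_s, 1);
    count = 0;
    for (int i = 0; i < 2; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 2; i++) pthread_join(t[i], NULL);
    return count;
}
```

pthread_cond_wait atomically releases the mutex and enqueues the caller, which is exactly the "stop thread, enqueue on wait list, run something else" step in the pseudocode.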

  34. Summary: Implementing Semaphores • Can use • Multithread synchronization algorithms shown earlier • Could have a thread disable interrupts, put itself on a “wait queue”, then context switch to some other thread (an “idle thread” if needed) • The O/S designer makes these decisions and the end user shouldn’t need to know

  35. Semaphore Types • Counting semaphores: any non-negative integer value; used for synchronization • Binary semaphores: value limited to 0 or 1; used for mutual exclusion (mutex) • Shared: semaphore S; Init: S = 1; Process i: P(S); Critical Section; V(S);

  36. Classical Synchronization Problems

  37. Paradigms for Threads to Share Data • We’ve looked at critical sections • Really, a form of locking • When one thread will access shared data, first it gets a kind of lock • This prevents other threads from accessing that data until the first one has finished • We saw that semaphores make it easy to implement critical sections

  38. Reminder: Critical Section • Classic notation due to Dijkstra: Semaphore mutex = 1; CSEnter() { P(mutex); } CSExit() { V(mutex); } • Other notation (more familiar in Java): CSEnter() { mutex.wait(); } CSExit() { mutex.signal(); }
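Dijkstra's notation maps directly onto POSIX semaphores: P is sem_wait, V is sem_post. A sketch (assuming a platform with unnamed POSIX semaphores, e.g. Linux):

```c
#include <semaphore.h>
#include <pthread.h>

static sem_t mutex;
static long  count;

static void CSEnter(void) { sem_wait(&mutex); }   /* P(mutex) */
static void CSExit(void)  { sem_post(&mutex); }   /* V(mutex) */

static void *worker(void *arg) {
    (void)arg;
    for (int n = 0; n < 50000; n++) {
        CSEnter();
        count++;                 /* critical section */
        CSExit();
    }
    return NULL;
}

/* Returns 100000 iff the semaphore enforced mutual exclusion. */
long run_cs_demo(void) {
    pthread_t t[2];
    sem_init(&mutex, 0, 1);      /* 0 = shared between threads; value 1 */
    count = 0;
    for (int i = 0; i < 2; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 2; i++) pthread_join(t[i], NULL);
    return count;
}
```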

  39. Bounded Buffer • This style of shared access doesn’t capture two very common models of sharing that we would also like to support • Bounded buffer: • Arises when two or more threads communicate with some threads “producing” data that others “consume”. • Example: preprocessor for a compiler “produces” a preprocessed source file that the parser of the compiler “consumes”

  40. Readers and Writers • In this model, threads share data that some threads “read” and other threads “write”. • Instead of CSEnter and CSExit we want • StartRead…EndRead; StartWrite…EndWrite • Goal: allow multiple concurrent readers but only a single writer at a time, and if a writer is active, readers wait for it to finish

  41. Producer-Consumer Problem • Start by imagining an unbounded (infinite) buffer • Producer process writes data to buffer: writes at In and moves rightwards • Consumer process reads data from buffer: reads at Out and moves rightwards • Should not try to consume if there is no data • Problem: this needs an infinite buffer

  42. Producer-Consumer Problem • Bounded buffer: size ‘N’ • Access entries 0 … N-1, then “wrap around” to 0 again • Producer process writes data to buffer: must not write more than ‘N’ items more than the consumer has “eaten” • Consumer process reads data from buffer: should not try to consume if there is no data • (Diagram: slots 0 … N-1 with In and Out indices)

  43. Producer-Consumer Problem • A number of applications: • Data from bar-code reader consumed by device driver • Data in a file you want to print consumed by printer spooler, which produces data consumed by line printer device driver • Web server produces data consumed by client’s web browser • Example: so-called “pipe” ( | ) in Unix > cat file | sort | uniq | more > prog | sort • Thought questions: where’s the bounded buffer? • How “big” should the buffer be, in an ideal world?

  44. Producer-Consumer Problem • Solving with semaphores • We’ll use two kinds of semaphores • We’ll use counters to track how much data is in the buffer • One counter counts as we add data and stops the producer if there are N objects in the buffer • A second counter counts as we remove data and stops a consumer if there are 0 in the buffer • Idea: since general semaphores can count for us, we don’t need a separate counter variable • Why do we need a second kind of semaphore? • We’ll also need a mutex semaphore

  45. Producer-Consumer Problem • Shared: semaphores mutex, empty, full; • Init: mutex = 1; /* for mutual exclusion */ empty = N; /* number of empty buffer entries */ full = 0; /* number of full buffer entries */
     Producer:
     do {
       … // produce an item in nextp
       P(empty); P(mutex);
       … // add nextp to buffer
       V(mutex); V(full);
     } while (true);
     Consumer:
     do {
       P(full); P(mutex);
       … // remove item to nextc
       V(mutex); V(empty);
       … // consume item in nextc
     } while (true);
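The slide's solution, fleshed out as a runnable sketch with POSIX semaphores (assuming Linux-style unnamed semaphores; buffer size and item count are arbitrary choices of mine). The producer sends the integers 0..ITEMS-1 and the consumer sums them, so a lost or duplicated item would change the total.

```c
#include <semaphore.h>
#include <pthread.h>

#define N     8         /* bounded buffer size */
#define ITEMS 10000

static int   buf[N];
static int   in, out;
static sem_t empty, full, mutex;

static void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&empty);                 /* P(empty): wait for a free slot */
        sem_wait(&mutex);                 /* P(mutex) */
        buf[in] = i;
        in = (in + 1) % N;                /* wrap around */
        sem_post(&mutex);                 /* V(mutex) */
        sem_post(&full);                  /* V(full): one more full slot */
    }
    return NULL;
}

static void *consumer(void *arg) {
    long *sum = arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full);                  /* P(full): wait for data */
        sem_wait(&mutex);
        *sum += buf[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);                 /* V(empty): one more empty slot */
    }
    return NULL;
}

/* Returns the consumer's sum: ITEMS*(ITEMS-1)/2 iff nothing was lost. */
long run_bounded_buffer(void) {
    pthread_t p, c;
    long sum = 0;
    in = out = 0;
    sem_init(&mutex, 0, 1);
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, &sum);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return sum;
}
```

Note the ordering: the producer must do P(empty) before P(mutex). Doing them the other way round can deadlock, with the producer holding mutex while waiting for an empty slot that the consumer can never create.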

  46. Readers-Writers Problem • Courtois et al 1971 • Models access to a database • A reader is a thread that needs to look at the database but won’t change it. • A writer is a thread that modifies the database • Example: making an airline reservation • When you browse to look at flight schedules the web site is acting as a reader on your behalf • When you reserve a seat, the web site has to write into the database to make the reservation

  47. Readers-Writers Problem • Many threads share an object in memory • Some write to it, some only read it • Only one writer can be active at a time • Any number of readers can be active simultaneously • Key insight: generalizes the critical section concept • One issue we need to settle to clarify the problem statement: • Suppose that a writer is active and a mixture of readers and writers now shows up. Who should get in next? • Or suppose that a writer is waiting and an endless stream of readers keeps showing up. Is it fair for them to become active? • We’ll favor a kind of back-and-forth form of fairness: • Once a reader is waiting, readers will get in next. • If a writer is waiting, one writer will get in next.

  48. Readers-Writers (Take 1) • Shared: semaphore mutex, wrl; integer rcount; • Init: mutex = 1, wrl = 1, rcount = 0;
     Writer:
     do {
       P(wrl);
       … /* writing is performed */
       V(wrl);
     } while(TRUE);
     Reader:
     do {
       P(mutex);
       rcount++;
       if (rcount == 1) P(wrl);
       V(mutex);
       … /* reading is performed */
       P(mutex);
       rcount--;
       if (rcount == 0) V(wrl);
       V(mutex);
     } while(TRUE);
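"Take 1" as a runnable sketch with POSIX semaphores (names start_read/end_read and the shared data_val are mine). One writer increments the shared value 10000 times while two readers repeatedly sample it; the writer's increments are exclusive only if wrl really keeps readers out.

```c
#include <semaphore.h>
#include <pthread.h>

static sem_t mutex, wrl;
static int   rcount;
static long  data_val;                 /* the shared "database" */

static void start_read(void) {
    sem_wait(&mutex);
    if (++rcount == 1) sem_wait(&wrl); /* first reader locks out writers */
    sem_post(&mutex);
}

static void end_read(void) {
    sem_wait(&mutex);
    if (--rcount == 0) sem_post(&wrl); /* last reader lets writers back in */
    sem_post(&mutex);
}

static void *writer(void *arg) {
    (void)arg;
    for (int i = 0; i < 10000; i++) {
        sem_wait(&wrl);
        data_val++;                    /* writing is performed */
        sem_post(&wrl);
    }
    return NULL;
}

static void *reader(void *arg) {
    long *last = arg;
    for (int i = 0; i < 10000; i++) {
        start_read();
        *last = data_val;              /* reading is performed */
        end_read();
    }
    return NULL;
}

/* Returns 10000 iff the writer's updates were exclusive. */
long run_readers_writers(void) {
    pthread_t w, r1, r2;
    long l1 = 0, l2 = 0;
    rcount = 0;
    data_val = 0;
    sem_init(&mutex, 0, 1);
    sem_init(&wrl, 0, 1);
    pthread_create(&w, NULL, writer, NULL);
    pthread_create(&r1, NULL, reader, &l1);
    pthread_create(&r2, NULL, reader, &l2);
    pthread_join(w, NULL);
    pthread_join(r1, NULL);
    pthread_join(r2, NULL);
    return data_val;
}
```

Note that only readers touch mutex: it protects rcount, not the data; the data itself is guarded by wrl. This foreshadows the starvation question on the next slides.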

  49. Readers-Writers Notes • If there is a writer • First reader blocks on wrl • Other readers block on mutex • Once a reader is active, all readers get to go through • Which reader gets in first? • The last reader to exit signals a writer • If no writer, then readers can continue • If readers and writers waiting on wrl, and writer exits • Who gets to go in first? • Why doesn’t a writer need to use mutex?

  50. Does this work as we hoped? • If readers are active, no writer can enter • The writers wait doing a P(wrl) • While writer is active, nobody can enter • Any other reader or writer will wait • But back-and-forth switching is buggy: • Any number of readers can enter in a row • Readers can “starve” writers • With semaphores, building a solution that has the desired back-and-forth behavior is really, really tricky! • We recommend that you try, but not too hard…
