
Deadlocks: Part I Prevention and Avoidance


Presentation Transcript


  1. Deadlocks: Part I Prevention and Avoidance

  2. Review: Motivation for Monitors and Condition Variables
  • Semaphores are a huge step up, but:
    • They are confusing because they are dual purpose: both mutual exclusion and scheduling constraints
    • Example: the fact that flipping the order of the P's in the bounded buffer gives deadlock is not immediately obvious
  • Cleaner idea: use locks for mutual exclusion and condition variables for scheduling constraints
  • Definition: a Monitor is a lock and zero or more condition variables for managing concurrent access to shared data
    • Use of monitors is a programming paradigm
    • Some languages, like Java, provide monitors in the language
  • The lock provides mutual exclusion to shared data:
    • Always acquire before accessing the shared data structure
    • Always release after finishing with the shared data
    • Lock initially free

  3. Review: Condition Variables
  • How do we change the get() routine to wait until something is in the buffer?
  • Could do this by keeping a count of the number of things on the queue (with semaphores), but that is error prone
  • Condition Variable: a queue of threads waiting for something inside a critical section
    • Key idea: allow sleeping inside the critical section by atomically releasing the lock at the time we go to sleep
    • Contrast to semaphores: can't wait inside a critical section
  • Operations:
    • Wait(): atomically release the lock and go to sleep; re-acquire the lock later, before returning
    • Signal(): wake up one waiter, if any
    • Broadcast(): wake up all waiters
  • Rule: must hold the lock when doing condition variable operations!

  4. Review: Producer Consumer using Monitors

      Monitor Producer_Consumer {
        char buf[N];
        int n = 0, tail = 0, head = 0;
        condition not_empty, not_full;

        void put(char ch) {
          while (n == N)
            wait(not_full);
          buf[head % N] = ch;
          head++;
          n++;
          signal(not_empty);
        }

        char get() {
          char ch;
          while (n == 0)
            wait(not_empty);
          ch = buf[tail % N];
          tail++;
          n--;
          signal(not_full);
          return ch;
        }
      }
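  For concreteness, here is a minimal sketch of the same bounded buffer written against POSIX threads, where the "monitor" is simply a pthread_mutex_t plus two pthread_cond_t condition variables. The buffer size of 16 and the char payload are arbitrary choices for illustration; pthread condition variables are Mesa-style, which is why the waits sit inside while loops (see the Mesa vs. Hoare slide below).

      /* Bounded buffer as a hand-rolled monitor: one mutex + two condition
       * variables.  N = 16 and the char payload are illustrative choices. */
      #include <pthread.h>

      #define N 16

      static char buf[N];
      static int n = 0, head = 0, tail = 0;
      static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
      static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
      static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

      void put(char ch) {
          pthread_mutex_lock(&lock);            /* enter the monitor */
          while (n == N)                        /* re-check after every wakeup (Mesa) */
              pthread_cond_wait(&not_full, &lock);
          buf[head % N] = ch;
          head++;
          n++;
          pthread_cond_signal(&not_empty);      /* wake one waiting consumer */
          pthread_mutex_unlock(&lock);          /* leave the monitor */
      }

      char get(void) {
          pthread_mutex_lock(&lock);
          while (n == 0)
              pthread_cond_wait(&not_empty, &lock);
          char ch = buf[tail % N];
          tail++;
          n--;
          pthread_cond_signal(&not_full);       /* wake one waiting producer */
          pthread_mutex_unlock(&lock);
          return ch;
      }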

  5. Reminders: Subtle Aspects
  • Notice that when a thread calls wait(), if it blocks it also automatically releases the monitor's mutual exclusion lock
  • This is an elegant solution to an issue seen with semaphores:
    • The caller holds the mutual exclusion lock and wants to call P(not_empty)... but this call might block
    • If we just do the call, the solution deadlocks...
    • But if we first call V(mutex), we get a race condition!

  6. Review: Mesa vs. Hoare Monitors
  • Need to be careful about the precise definition of signal and wait. Consider a piece of our dequeue code:

      while (n == 0) {
        wait(not_empty);    // If nothing, sleep
      }
      ch = buf[tail % N];   // Get next item

  • Why didn't we do this instead?

      if (n == 0) {
        wait(not_empty);    // If nothing, sleep
      }
      ch = buf[tail % N];   // Get next item

  • Answer: it depends on the type of scheduling
    • Hoare-style (most textbooks):
      • Signaler gives the lock and CPU to the waiter; the waiter runs immediately
      • Waiter gives the lock and processor back to the signaler when it exits the critical section or if it waits again
    • Mesa-style (Java, most real operating systems):
      • Signaler keeps the lock and processor
      • Waiter is placed on the ready queue with no special priority
      • Practically, need to check the condition again after wait

  7. Review: Can we construct Monitors from Semaphores?
  • The locking aspect is easy: just use a mutex
  • Can we implement condition variables this way?

      Wait()   { P(x_sem); }
      Signal() { V(x_sem); }

    • Doesn't work: Wait() may sleep with the lock held
  • Does this work better?

      Wait() {
        V(mutex);   // Release mutex lock
        P(x_sem);
        P(mutex);   // Acquire mutex lock
      }
      Signal() { V(x_sem); }

    • No: condition variables have no history, semaphores have history:
      • What if a thread signals and no one is waiting? NO-OP
      • What if a thread later waits? The thread waits
      • What if a thread V's and no one is waiting? Increment
      • What if a thread later does P? Decrement and continue

  8. Review: Construction of Monitors from Semaphores (cont.)
  • Problem with the previous try:
    • P and V are commutative: the result is the same no matter what order they occur in
    • Condition variables are NOT commutative
  • Does this fix the problem?

      Wait(Lock lock) {
        V(mutex);   // Release mutex lock
        P(x_sem);
        P(mutex);   // Acquire mutex lock
      }
      Signal() {
        if semaphore queue is not empty
          V(x_sem);
      }

    • Not legal to look at the contents of the semaphore queue
    • There is a race condition: the signaler can slip in after the lock release and before the waiter executes semaphore.P()
  • It is actually possible to do this correctly
    • Complex solution for Hoare scheduling in the book (and on the next slide)
    • Can you come up with a simpler Mesa-scheduled solution?

  9. Review: Construction of Mesa Monitors using Semaphores

      Wait() {
        x_count++;
        V(mutex);
        P(x_sem);
        P(mutex);
        x_count--;
      }

      Signal() {
        if (x_count > 0) {
          V(x_sem);
        }
      }

      For each procedure F:
        P(mutex);
        /* body of F */
        V(mutex);

  10. Review: Construction of Hoare Monitors using Semaphores

      Wait() {
        x_count++;
        if (next_count > 0)
          V(next);
        else
          V(mutex);
        P(x_sem);
        x_count--;
      }

      Signal() {
        if (x_count > 0) {
          next_count++;
          V(x_sem);
          P(next);
          next_count--;
        }
      }

      For each procedure F:
        P(mutex);
        /* body of F */
        if (next_count > 0)
          V(next);
        else
          V(mutex);

  11. Dining Philosophers and the Deadlock Concept

  12. Dining Philosophers
  • Dijkstra
  • A problem that was invented to illustrate a different aspect of communication
  • Our focus here is on the notion of sharing resources that only one user at a time can own
  • Philosophers eat/think
    • Eating needs two forks
    • Pick up one fork at a time
  • The idea is to capture the concept of multiple processes competing for limited resources

  13. Rules of the Game
  • The philosophers are very logical
  • They want to settle on a shared policy that all can apply concurrently
  • They are hungry: the policy should let everyone eat (eventually)
  • They are utterly dedicated to the proposition of equality: the policy should be totally fair

  14. What can go wrong?
  • Starvation
    • A policy that can leave some philosopher hungry in some situation (even one where the others collaborate)
  • Deadlock
    • A policy that leaves all the philosophers "stuck", so that nobody can do anything at all
  • Livelock
    • A policy that makes them all do something endlessly without ever eating!

  15. Starvation vs. Deadlock
  [Figure: Thread A owns Res 1 and waits for Res 2; Thread B owns Res 2 and waits for Res 1]
  • Starvation: a thread waits indefinitely
    • Example: a low-priority thread waiting for resources constantly in use by high-priority threads
  • Deadlock: circular waiting for resources
    • Thread A owns Res 1 and is waiting for Res 2; Thread B owns Res 2 and is waiting for Res 1
  • Deadlock => Starvation, but not vice versa
    • Starvation can end (but doesn't have to)
    • Deadlock can't end without external intervention

  16. A flawed conceptual solution

      #define N 5

      /* Philosopher i (i = 0, 1, ..., 4) */
      do {
        think();
        take_fork(i);
        take_fork((i+1) % N);
        eat();              /* yummy */
        put_fork(i);
        put_fork((i+1) % N);
      } while (true);

  17. Coding our flawed solution?

      Shared: semaphore fork[5];
      Init:   fork[i] = 1 for all i = 0..4

      /* Philosopher i */
      do {
        P(fork[i]);
        P(fork[(i+1) % 5]);
        /* eat */
        V(fork[i]);
        V(fork[(i+1) % 5]);
        /* think */
      } while (true);

  Oops! Subject to deadlock if they all pick up their "right" (first) fork simultaneously!

  18. Dining Philosophers Solutions
  • Allow only 4 philosophers to sit at the table simultaneously
  • Asymmetric solution (see the sketch below):
    • Odd philosopher picks up the left fork followed by the right
    • Even philosopher does it the other way around
  • Pass a token
  • Allow a philosopher to pick up forks only if both are available
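  As a concrete illustration of the asymmetric idea, here is a minimal, self-contained C sketch using POSIX semaphores. The fork_sem array, the three-round loop, and the bodies of think() and eat() are invented for this example, and fork i is taken to be philosopher i's "left" fork; the point is only that neighbors disagree about which fork to grab first, so the circular wait can never close.

      #include <pthread.h>
      #include <semaphore.h>
      #include <stdio.h>
      #include <unistd.h>

      #define N 5

      static sem_t fork_sem[N];                /* one binary semaphore per fork */

      static void think(int i) { (void)i; usleep(1000); }      /* stand-in work */
      static void eat(int i)   { printf("philosopher %d eating\n", i); }

      static void *philosopher(void *arg) {
          int i = (int)(long)arg;
          /* Odd philosophers grab fork i first; even philosophers grab
           * fork (i+1)%N first, so no circular wait can form. */
          int first  = (i % 2) ? i           : (i + 1) % N;
          int second = (i % 2) ? (i + 1) % N : i;
          for (int round = 0; round < 3; round++) {
              think(i);
              sem_wait(&fork_sem[first]);
              sem_wait(&fork_sem[second]);
              eat(i);
              sem_post(&fork_sem[second]);
              sem_post(&fork_sem[first]);
          }
          return NULL;
      }

      int main(void) {
          pthread_t t[N];
          for (int i = 0; i < N; i++) sem_init(&fork_sem[i], 0, 1);
          for (int i = 0; i < N; i++) pthread_create(&t[i], NULL, philosopher, (void *)(long)i);
          for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
          return 0;
      }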

  19. One Possible Solution
  • Introduce a state variable per philosopher:
    • enum {thinking, hungry, eating}
  • Philosopher i can set state[i] = eating only if its neighbors are not eating:
    • (state[(i+4)%5] != eating) and (state[(i+1)%5] != eating)
  • Also need a per-philosopher semaphore s[i] on which philosopher i can delay itself

  20. One possible solution

      Shared: int state[5]; semaphore s[5], mutex;
      Init:   mutex = 1; s[i] = 0 for all i = 0..4

      take_fork(i) {
        P(mutex);
        state[i] = hungry;
        test(i);
        V(mutex);
        P(s[i]);
      }

      put_fork(i) {
        P(mutex);
        state[i] = thinking;
        test((i+1) % N);
        test((i-1+N) % N);
        V(mutex);
      }

      test(i) {
        if (state[i] == hungry &&
            state[(i+1) % N] != eating &&
            state[(i-1+N) % N] != eating) {
          state[i] = eating;
          V(s[i]);
        }
      }

      /* Philosopher i */
      do {
        take_fork(i);
        /* eat */
        put_fork(i);
        /* think */
      } while (true);

  21. Goals for Today
  • Discussion of deadlocks
  • Conditions for their occurrence
  • Solutions for preventing and avoiding deadlock

  22. System Model
  • There are non-shared computer resources
    • Maybe more than one instance of each
    • Printers, semaphores, tape drives, CPU
  • Processes need access to these resources
    • Acquire resource
      • If the resource is available, access is granted
      • If not available, the process is blocked
    • Use resource
    • Release resource
  • Undesirable scenario:
    • Process A acquires resource 1, and is waiting for resource 2
    • Process B acquires resource 2, and is waiting for resource 1
    => Deadlock!

  23. For example: Semaphores

      semaphore: mutex1 = 1   /* protects resource 1 */
                 mutex2 = 1   /* protects resource 2 */

      Process A code:
      {
        /* initial compute */
        P(mutex1)
        P(mutex2)
        /* use both resources */
        V(mutex2)
        V(mutex1)
      }

      Process B code:
      {
        /* initial compute */
        P(mutex2)
        P(mutex1)
        /* use both resources */
        V(mutex2)
        V(mutex1)
      }

  24. Deadlocks
  • Definition: deadlock exists among a set of processes if
    • Every process is waiting for an event
    • This event can be caused only by another process in the set
    • The event is the acquisition or release of a resource
  [Figure: one-lane bridge]

  25. Four Conditions for Deadlock
  • Coffman et al., 1971
  • Necessary conditions for deadlock to exist:
    • Mutual exclusion
      • At least one resource must be held in a non-sharable mode
    • Hold and wait
      • There exists a process holding a resource and waiting for another
    • No preemption
      • Resources cannot be preempted
    • Circular wait
      • There exists a set of processes {P1, P2, ..., PN} such that P1 is waiting for P2, P2 for P3, ..., and PN for P1
  • All four conditions must hold for deadlock to occur

  26. Real World Deadlocks?
  • Truck A has to wait for truck B to move
  • Not deadlocked

  27. Real World Deadlocks? • Gridlock

  28. Real World Deadlocks? • Gridlock

  29. Testing for deadlock
  • Steps:
    • Collect "process state" and use it to build a graph
      • Ask each process "are you waiting for anything?"
      • Put an edge in the graph if so
    • We need to do this at a single instant of time, not while things might be changing
  • Now we need a way to test for cycles in our graph

  30. Testing for deadlock
  • How do cars do it?
    • Never block an intersection
    • Must back up if you find yourself doing so
  • Why does this work?
    • It "breaks" a wait-for relationship
    • Illustrates a sense in which intransigent waiting (refusing to release a resource) is one key element of true deadlock!

  31. Testing for deadlock
  • One way to find cycles (see the sketch below):
    • Look for a node with no outgoing edges
    • Erase this node, and also erase any edges coming into it
    • Idea: this was a process others might have been waiting for, but it wasn't waiting for anything else
  • If (and only if) the graph has no cycles, we'll eventually be able to erase the whole graph!
  • This is called a graph reduction algorithm
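  Below is a minimal sketch of that reduction on a plain wait-for graph. The waits_for matrix and the five-process snapshot are made-up example data: repeatedly erase any node with no outgoing edge, and whatever survives is either in a cycle or stuck waiting behind one.

      #include <stdbool.h>
      #include <stdio.h>

      #define NPROC 5

      /* waits_for[i][j] == true means process i is waiting for process j.
       * Example snapshot: 0->1->2->0 form a cycle, 3 waits on 0, 4 is free. */
      static bool waits_for[NPROC][NPROC] = {
          [0][1] = true, [1][2] = true, [2][0] = true, [3][0] = true,
      };

      int main(void) {
          bool erased[NPROC] = { false };
          bool progress = true;

          while (progress) {
              progress = false;
              for (int i = 0; i < NPROC; i++) {
                  if (erased[i]) continue;
                  bool waiting = false;              /* does i still have an outgoing edge? */
                  for (int j = 0; j < NPROC; j++)
                      if (!erased[j] && waits_for[i][j]) waiting = true;
                  if (!waiting) {                    /* i isn't waiting: erase it, which also */
                      erased[i] = true;              /* removes the edges coming into it      */
                      progress = true;
                  }
              }
          }

          for (int i = 0; i < NPROC; i++)
              if (!erased[i])
                  printf("process %d cannot be erased: deadlocked or stuck behind the deadlock\n", i);
          return 0;
      }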

  32. Graph reduction example
  [Figure: a wait-for graph over processes 0 through 12]
  This graph can be "fully reduced", hence there was no deadlock at the time the graph was drawn. Obviously, things could change later!

  33. Graph reduction example
  [Figure: an irreducible wait-for graph]
  This is an example of an "irreducible" graph. It contains a cycle and represents a deadlock, although only some of the processes are in the cycle.

  34. What about "resource" waits?
  • When dining philosophers wait for one another, they don't do so directly
    • Erasmus doesn't "wait" for Ptolemy
  • Instead, they wait for resources
    • Erasmus waits for a fork... which Ptolemy exclusively holds
  • Can we extend our graphs to represent resource waits?

  35. Resource-wait graphs
  • We'll use two kinds of nodes:
    • A process: P3 will be represented as a circle
    • A resource: R7 will be represented as a rectangle
  • A resource often has multiple identical units, such as "blocks of memory"
    • Represent these as circles inside the box
  • An arrow from a process to a resource means "I want k units of this resource"; an arrow to a process means that process holds k units of the resource
  • Example: P3 wants 2 units of R7

  36. A tricky choice...
  • When should resources be treated as "different classes"?
    • To be in the same class, resources do need to be equivalent
    • "Memory pages" are different from "forks"
  • But for some purposes, we might want to split a single class into two groups
    • For example, the main group of forks versus the extra forks
  • Keep this in mind when we talk about avoiding deadlock
    • It proves useful in doing "ordered resource allocation"

  37. Resource-wait graphs
  [Figure: an example resource-wait graph with processes and multi-unit resources]

  38. Reduction rules?
  • Find a process that can have all its current requests satisfied (i.e., the "available amount" of every resource it wants is at least enough to satisfy the request)
  • Erase that process (in effect: grant the requests, let it run, and eventually it will release its resources)
  • Continue until we either erase the whole graph or are left with an irreducible component; in the latter case we've identified a deadlock
  (A sketch of this multi-unit reduction follows below.)
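  Here is a sketch of the same reduction when resources have multiple units; the available/alloc/request matrices are invented example data rather than anything a real system exposes. A process can be erased when every one of its outstanding requests fits within what is currently available, and erasing it returns its allocation to the pool.

      #include <stdbool.h>
      #include <stdio.h>

      #define NPROC 3
      #define NRES  2

      static int available[NRES]      = { 0, 1 };                  /* free units per resource   */
      static int alloc[NPROC][NRES]   = { {1,0}, {1,1}, {0,1} };   /* units each process holds  */
      static int request[NPROC][NRES] = { {0,1}, {1,0}, {0,0} };   /* units each still wants    */

      int main(void) {
          bool erased[NPROC] = { false };
          bool progress = true;

          while (progress) {
              progress = false;
              for (int p = 0; p < NPROC; p++) {
                  if (erased[p]) continue;
                  bool can_run = true;                 /* all of p's requests fit in available? */
                  for (int r = 0; r < NRES; r++)
                      if (request[p][r] > available[r]) can_run = false;
                  if (can_run) {                       /* grant the request, let p finish, and  */
                      for (int r = 0; r < NRES; r++)   /* reclaim everything it was holding     */
                          available[r] += alloc[p][r];
                      erased[p] = true;
                      progress = true;
                  }
              }
          }

          for (int p = 0; p < NPROC; p++)
              if (!erased[p]) printf("process %d is deadlocked\n", p);
          return 0;
      }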

  39. This graph is reducible: the system is not deadlocked
  [Figure: a reducible resource-wait graph]

  40. This graph is not reducible: the system is deadlocked
  [Figure: an irreducible resource-wait graph]

  41. Comments
  • It isn't common for systems to actually implement this kind of test
  • However, we'll later use a version of the resource-wait graph reduction as part of an algorithm called the "Banker's Algorithm"
    • The idea is to schedule the granting of resources so as to avoid potentially deadlocked states

  42. Some questions you might ask
  • Does the order in which we do the reduction matter?
  • Answer: No. The reason is that if a node is a candidate for reduction at step i and we don't pick it, it remains a candidate for reduction at step i+1
  • Thus eventually, no matter what order we do it in, we'll reduce by every node where reduction is feasible

  43. Some questions you might ask
  • If a system is deadlocked, could this go away?
  • No, unless someone kills one of the threads or something causes a process to release a resource
  • Many real systems put time limits on "waiting" precisely for this reason. When a process gets a timeout exception, it gives up waiting, and this too can eliminate the deadlock (see the sketch below)
  • But that process may be forced to terminate itself because often, if a process can't get what it needs, there are no other options available!
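  For example, POSIX exposes pthread_mutex_timedlock for exactly this kind of bounded waiting. The sketch below uses an arbitrary 2-second limit and a give-up-and-retry policy chosen purely for illustration; resource_lock and acquire_with_timeout are made-up names.

      #include <errno.h>
      #include <pthread.h>
      #include <stdbool.h>
      #include <stdio.h>
      #include <time.h>

      static pthread_mutex_t resource_lock = PTHREAD_MUTEX_INITIALIZER;

      /* Try to take the lock, but give up after ~2 seconds instead of waiting
       * forever; the caller can then release what it holds and retry from
       * scratch, which is what breaks the potential deadlock. */
      static bool acquire_with_timeout(void) {
          struct timespec deadline;
          clock_gettime(CLOCK_REALTIME, &deadline);   /* timedlock takes an absolute time */
          deadline.tv_sec += 2;

          int rc = pthread_mutex_timedlock(&resource_lock, &deadline);
          if (rc == ETIMEDOUT) {
              fprintf(stderr, "gave up waiting; backing off and retrying\n");
              return false;
          }
          return rc == 0;
      }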

  44. Some questions you might ask
  • Suppose a system isn't deadlocked at time T. Can we assume it will still be free of deadlock at time T+1?
  • No, because the very next thing it might do is run some process that will request a resource...
    ... establishing a cyclic wait
    ... and causing deadlock

  45. Dealing with Deadlocks
  • Reactive approaches:
    • Periodically check for evidence of deadlock
      • For example, using a graph reduction algorithm
    • Then need a way to recover
      • Could blue screen and reboot the computer
      • Could pick a "victim" and terminate that thread
        • But this is only possible in certain kinds of applications
        • Basically, the thread needs a way to clean up if it gets terminated and has to exit in a hurry!
        • Often the thread would then "retry" from scratch
      • Despite the drawbacks, database systems do this

  46. Dealing with Deadlocks
  • Proactive approaches:
    • Deadlock Prevention
      • Prevent one of the 4 necessary conditions from arising
      • ... this will prevent deadlock from occurring
    • Deadlock Avoidance
      • Carefully allocate resources based on future knowledge
      • Deadlocks are thereby avoided
  • Ignore the problem
    • Pretend deadlocks will never occur
    • Ostrich approach... but surprisingly common!

  47. Deadlock Prevention

  48. Deadlock Prevention
  • Can the OS prevent deadlocks?
  • Prevention: negate one of the necessary conditions
  • Mutual exclusion:
    • Make resources sharable
    • Not always possible (spooling?)
  • Hold and wait:
    • Do not hold resources when waiting for another => request all resources before beginning execution (see the sketch below)
      • Processes do not know in advance everything they will need
      • Starvation (if waiting on many popular resources)
      • Low utilization (may need a resource only for a short time)
    • Alternative: release all resources before requesting anything new
      • Still has the last two problems
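  One way to realize "do not hold resources when waiting for another" is an all-or-nothing grab: block only while holding nothing, and use a non-blocking trylock for everything after the first resource. The two-resource setup and the helper names below are hypothetical, and the retry loop also illustrates the low-utilization and starvation costs listed above.

      #include <pthread.h>
      #include <sched.h>

      static pthread_mutex_t res1 = PTHREAD_MUTEX_INITIALIZER;
      static pthread_mutex_t res2 = PTHREAD_MUTEX_INITIALIZER;

      /* Acquire both resources or none: we never wait while holding one. */
      static void acquire_both(void) {
          for (;;) {
              pthread_mutex_lock(&res1);            /* safe to block: we hold nothing yet */
              if (pthread_mutex_trylock(&res2) == 0)
                  return;                           /* got both; go do the work           */
              pthread_mutex_unlock(&res1);          /* res2 busy: give res1 back so we    */
              sched_yield();                        /* never hold-and-wait, then retry    */
          }
      }

      static void release_both(void) {
          pthread_mutex_unlock(&res2);
          pthread_mutex_unlock(&res1);
      }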

  49. Deadlock Prevention
  • Prevention: negate one of the necessary conditions
  • No preemption:
    • Make resources preemptable (2 approaches)
      • Preempt the requesting process's resources if they are not all available
      • Preempt resources of waiting processes to satisfy the request
    • Good when it is easy to save and restore the state of the resource
      • CPU registers, memory virtualization
    • Bad if in the middle of a critical section and the resource is a lock
  • Circular wait: (2 approaches)
    • Single lock for the entire system? (Problems)
    • Impose a partial ordering on resources, and request them in that order

  50. Breaking Circular Wait
  [Figure: ordered lock acquisition prevents a low-to-high-to-low cycle]
  • Order resources (lock1, lock2, ...)
  • Acquire resources in strictly increasing (or strictly decreasing) order
  • When a request involves multiple resources of the same order, make the request a single operation
  • Intuition: a cycle requires an edge from a low-numbered node to a high-numbered node and an edge from a high-numbered node back to a lower (or the same) node
  • Ordering is not always possible, and can give low resource utilization
  • A lock-ordering sketch follows below.
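  A common way to implement ordered resource allocation in practice is to rank locks and always acquire in increasing rank. In the sketch below the rank is simply the lock's address (a frequent convention; any fixed numbering works), and lock_pair()/unlock_pair() are hypothetical helpers: both Process A and Process B from the earlier semaphore example could route their acquisitions through lock_pair(), and the circular wait disappears.

      #include <pthread.h>
      #include <stdint.h>

      static pthread_mutex_t mutex1 = PTHREAD_MUTEX_INITIALIZER;
      static pthread_mutex_t mutex2 = PTHREAD_MUTEX_INITIALIZER;

      /* Acquire two locks in a globally consistent order, no matter which
       * one the caller happens to name first. */
      static void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
          if ((uintptr_t)a > (uintptr_t)b) {        /* sort by address = fixed rank */
              pthread_mutex_t *t = a; a = b; b = t;
          }
          pthread_mutex_lock(a);
          pthread_mutex_lock(b);
      }

      static void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
          pthread_mutex_unlock(a);
          pthread_mutex_unlock(b);
      }

      /* Usage: lock_pair(&mutex1, &mutex2) and lock_pair(&mutex2, &mutex1)
       * take the locks in the same underlying order, so no cycle can form. */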
