Learn about deadlock prevention, lock hierarchies, monitors in synchronization, and strategies for thread scheduling in operating systems.
Deadlock
• Four necessary conditions:
• Mutual exclusion
• Hold and wait
• No preemption
• Circular wait
Figure 6.7 Deadlock example. Threads T1 and T2 hold locks L1 and L2, respectively, and each thread attempts to acquire the other lock, which cannot be granted.
Preventing Deadlock
• Two major categories for dealing with deadlock:
• Prevent deadlock from happening
• Detect deadlocks and then break the deadlock
Lock Hierarchies – preventing deadlock
• Each lock is assigned a position in a global ordering.
• Threads must acquire locks in that order.
• Assumes a priori knowledge of all the locks a thread will need.
• If a lower-ranked lock is needed after a higher-ranked lock is already held, the thread can release the higher locks, acquire the lower lock, and then reacquire the higher locks in order.
Monitors
• An abstract concept that can be implemented in a language.
• Allows only one thread at a time to access the protected data.
• Provides a single entry point shared by several methods.
Figure 6.8 Monitors provide an abstraction of synchronization in which only one thread can access the monitor’s data at any time. Other threads are blocked either waiting to enter the monitor or waiting on events inside the monitor.
Figure 6.12 Multiple readers, single writer support routines.
Figure 6.13 Multiple readers, single-writer support routines based on a single condition variable, but subject to spurious wake-ups.
Thread Scheduling
• Two ways to implement threads:
• Map multiple user threads to a single OS-aware (kernel) thread
• Unbound threads
• Low create/destroy overhead
• If the OS blocks that kernel thread, all the mapped user threads block too
• Map each user thread to its own OS-aware (kernel) thread
• Bound threads
• Better performance expected
Code Spec 6.18 POSIX Thread routine for setting thread scheduling attributes.
Figure 6.14 A 2D relaxation replaces—on each iteration—all interior values by the average of their four nearest neighbors.
Figure 6.15 False sharing – when data used by different processors happens to reside on the same cache line, so a write by one processor invalidates the line in the other processors' caches even though they access disjoint variables.
Figure 6.16 2D Successive over-relaxation program written using POSIX Threads.
Figure 6.16 2D Successive over-relaxation program written using POSIX Threads. (cont.)
Figure 6.16 2D Successive over-relaxation program written using POSIX Threads. (cont.)
Figure 6.16 2D Successive over-relaxation program written using POSIX Threads. (cont.)
Figure 6.17 It’s often profitable to do useful work while waiting for some long-latency operation to complete.
Figure 6.18 A split-phase barrier allows a thread to do useful work while waiting for the other threads to arrive at the barrier.
Figure 6.19 A 1D over-relaxation replaces—on each iteration—all interior values by the average of their two nearest neighbors.
Figure 6.20 Program for 1D successive over-relaxation using a single-phase barrier.
Figure 6.21 Program for 1D successive over-relaxation using a split-phase barrier.
Figure 6.21 Program for 1D successive over-relaxation using a split-phase barrier. (cont.)
Figure 6.28 OpenMP schedule (static)
Code Spec 6.22 OpenMP reduce operation. The <op> choice comes from the accompanying table.