Avoiding liveness hazards • Indiscriminate use of locking • Lock-ordering deadlocks • Incorrect use of thread pools • Resource deadlocks • Tradeoff between safety and liveness • Example: the dining philosophers problem • Database systems recover from deadlock by aborting one of the deadlocked transactions • Deadlocks are non-deterministic
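As an illustration of a lock-ordering deadlock, here is a minimal sketch (class and method names are made up, not from the slides): if one thread calls leftRight() while another calls rightLeft(), each can end up holding one lock while waiting forever for the other.

```java
// Two threads acquiring the same pair of intrinsic locks in opposite order can deadlock.
public class LeftRightDeadlock {
    private final Object left = new Object();
    private final Object right = new Object();

    public void leftRight() {
        synchronized (left) {
            synchronized (right) {
                doSomething();          // holds left, then right
            }
        }
    }

    public void rightLeft() {
        synchronized (right) {
            synchronized (left) {
                doSomethingElse();      // holds right, then left
            }
        }
    }

    private void doSomething() { }
    private void doSomethingElse() { }
}
```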
Dynamic lock-order deadlocks • The lock order depends on the order of the method's arguments, so two calls with the arguments swapped can deadlock even though each call site looks consistent • How to fix such deadlocks? Induce a global ordering on the locks, as in the sketch below
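One common fix is to induce an ordering on the two locks, for example by comparing System.identityHashCode values and using a tie-breaking lock for the rare collision. A minimal sketch, with a hypothetical Account class standing in for the objects being locked:

```java
// Induce a consistent lock order on dynamically chosen locks.
public class InducedOrderTransfer {
    private static final Object tieLock = new Object();

    public static void transfer(Account from, Account to, long amount) {
        int fromHash = System.identityHashCode(from);
        int toHash = System.identityHashCode(to);

        if (fromHash < toHash) {
            synchronized (from) { synchronized (to) { move(from, to, amount); } }
        } else if (fromHash > toHash) {
            synchronized (to) { synchronized (from) { move(from, to, amount); } }
        } else {
            // Rare hash collision: fall back to a global tie-breaking lock.
            synchronized (tieLock) {
                synchronized (from) { synchronized (to) { move(from, to, amount); } }
            }
        }
    }

    private static void move(Account from, Account to, long amount) {
        from.debit(amount);
        to.credit(amount);
    }

    static class Account {
        private long balance;
        void debit(long amount)  { balance -= amount; }
        void credit(long amount) { balance += amount; }
    }
}
```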
Resource deadlocks • A task requires two resources • Tasks in two threads acquire the two resources in opposite order • The result is a resource deadlock, as in the sketch below
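A minimal sketch of that situation, modeling two bounded resource pools as semaphores (the pool names are illustrative): with pool sizes of one, taskA and taskB can each hold one resource while waiting forever for the other.

```java
import java.util.concurrent.Semaphore;

// Two resource pools acquired in opposite order by two tasks.
public class ResourceDeadlockSketch {
    private final Semaphore connectionPool = new Semaphore(1);
    private final Semaphore filePool = new Semaphore(1);

    public void taskA() throws InterruptedException {
        connectionPool.acquire();
        try {
            filePool.acquire();          // may block forever if taskB holds it
            try {
                // ... use both resources ...
            } finally {
                filePool.release();
            }
        } finally {
            connectionPool.release();
        }
    }

    public void taskB() throws InterruptedException {
        filePool.acquire();
        try {
            connectionPool.acquire();    // may block forever if taskA holds it
            try {
                // ... use both resources ...
            } finally {
                connectionPool.release();
            }
        } finally {
            filePool.release();
        }
    }
}
```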
Avoiding/diagnosing deadlocks • Never acquire more than one lock at a time • Often not practical • Define a lock-ordering protocol for locks that must be acquired together • Use open calls (call alien methods with no locks held) • synchronized (this) { o.foo(); } is not an open call: it invokes o.foo() while holding a lock • Timed lock attempts • Back off and try again, with a random backoff, as in the sketch below
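A sketch of timed lock attempts with random backoff using ReentrantLock.tryLock (names such as doWithBothLocks are hypothetical): if both locks cannot be acquired, release whatever was taken, sleep for a random interval, and retry until a deadline passes.

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Timed, polled lock acquisition with random backoff: the threads cannot
// deadlock permanently because neither waits forever while holding a lock.
public class TimedLockTransfer {
    private final Lock lockA = new ReentrantLock();
    private final Lock lockB = new ReentrantLock();

    public boolean doWithBothLocks(long timeout, TimeUnit unit)
            throws InterruptedException {
        long stopTime = System.nanoTime() + unit.toNanos(timeout);
        while (System.nanoTime() - stopTime < 0) {
            if (lockA.tryLock()) {
                try {
                    if (lockB.tryLock()) {
                        try {
                            // ... work requiring both locks ...
                            return true;
                        } finally {
                            lockB.unlock();
                        }
                    }
                } finally {
                    lockA.unlock();
                }
            }
            // Back off for a random interval before retrying.
            TimeUnit.MILLISECONDS.sleep(ThreadLocalRandom.current().nextLong(1, 10));
        }
        return false;   // give up; the caller decides how to recover
    }
}
```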
Starvation • A thread is perpetually denied access to resources it needs • E.g., CPU cycles • Caused by inappropriate use of thread priorities • Or by non-terminating constructs • Infinite loops • Unbounded resource waits • Poor responsiveness • Common in GUI applications • Use background threads for long-running processing, as in the sketch below
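A minimal sketch of moving work off a responsive (e.g., event-dispatch) thread; handleClick, expensiveComputation, and renderResult are hypothetical names:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// The event handler only submits the long-running work and returns at once,
// so the thread that handles user events stays responsive.
public class ResponsiveHandler {
    private final ExecutorService background = Executors.newSingleThreadExecutor();

    public void handleClick() {
        background.submit(() -> {
            String result = expensiveComputation();   // runs off the event thread
            renderResult(result);                     // in a real GUI, post back to the UI thread
        });
    }

    private String expensiveComputation() { return "done"; }
    private void renderResult(String result) { System.out.println(result); }
}
```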
Livelock • The thread is not blocked • Yet it makes no progress • An operation fails and is retried, and the retry fails again, forever • E.g., repeatedly re-processing a message whose handling always fails • Introduce randomness into the retry to avoid livelock, as in the sketch below
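A minimal sketch of randomized retry (tryProcess is a hypothetical operation that reports whether it conflicted with another thread): waiting a random interval before retrying keeps colliding threads from retrying in lockstep.

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

// Retry with a random backoff instead of retrying immediately.
public class RandomizedRetry {
    public void process() throws InterruptedException {
        while (!tryProcess()) {
            long backoffMillis = ThreadLocalRandom.current().nextLong(1, 50);
            TimeUnit.MILLISECONDS.sleep(backoffMillis);
        }
    }

    private boolean tryProcess() {
        // ... attempt the operation; return false if it conflicted ...
        return true;
    }
}
```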
Performance and Scalability • Threads are used to improve performance • But they add overheads • Coordinating between threads • Context switching • Thread creation and teardown • Scheduling overhead • Beneficial only when the added throughput outweighs these overheads • To achieve better performance • Utilize existing processing resources effectively • Enable the program to exploit additional processing resources as they become available
Scalability versus Performance • Performance: “how fast” • Do the same work with less effort • E.g., reuse cached results • Scalability: “how much” • Do more work with more resources • Often increases the total amount of work to be done • Divide into tasks • Consolidate tasks
Amdahl’s law • Speedup <= 1/(F + (1 – F)/N) • F: fraction of the work that must be done serially • N: number of processors
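For example, with F = 0.5 (half the work is serial) and N = 10 processors, speedup <= 1/(0.5 + 0.5/10) ≈ 1.82; even with unlimited processors the speedup can never exceed 1/F = 2.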
Context switching • The OS must manipulate shared data structures such as scheduler queues • Leaves less CPU available for the program itself • The incoming thread’s data is not yet in the local processor cache • Initial cache misses make it run slowly at first • Schedulers therefore give each runnable thread a minimum time quantum • Amortizes the cost of context switching • At the cost of reduced responsiveness • More blocking means more context switches • vmstat on Unix reports the number of context switches
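To observe this, running vmstat 1 on Linux prints one line per second; the cs column (under “system”) shows context switches per interval, though the exact output format varies by platform.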
Memory synchronization • The visibility guarantees of synchronization require memory barriers • Flush/invalidate caches • Flush hardware write buffers • Inhibit certain compiler optimizations • Uncontended vs. contended synchronization • Uncontended synchronization is cheap, and the JVM can optimize it further • E.g., by eliding the lock on an object accessible to only one thread
Example • Lock coarsening
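A hedged sketch of the kind of code the JIT can optimize when synchronization is uncontended (the method is illustrative): the Vector is confined to one thread, so its lock can be elided, and the adjacent synchronized add() calls can be coarsened into a single acquire/release pair.

```java
import java.util.Vector;

public class LockCoarseningExample {
    public String getStoogeNames() {
        Vector<String> v = new Vector<>();   // Vector methods are synchronized
        v.add("Moe");                        // lock acquired and released per call...
        v.add("Larry");                      // ...unless the JIT coarsens the calls
        v.add("Curly");                      // ...or elides the lock entirely,
        return v.toString();                 //    since v never escapes this thread
    }
}
```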
Blocking • Spin waiting burns CPU while waiting • Suspending the blocked thread instead costs context switches • Either way, contended locking causes unnecessary context switches or wasted cycles
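A sketch of the tradeoff (the class name, flag, and spin threshold are illustrative): spin briefly in case the wait is short, then fall back to parking the thread so it stops burning CPU.

```java
import java.util.concurrent.locks.LockSupport;

public class SpinThenPark {
    private volatile boolean ready = false;

    public void awaitReady() {
        int spins = 0;
        while (!ready) {
            if (spins++ < 1000) {
                Thread.onSpinWait();              // cheap if the wait is short (Java 9+)
            } else {
                LockSupport.parkNanos(1_000_000); // suspend ~1 ms, then re-check
            }
        }
    }

    public void signalReady() {
        ready = true;
    }
}
```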
Reducing lock contention • Reduce the duration for which locks are held • Reduce the frequency with which locks are requested • Replace exclusive locks with other coordination mechanisms
Reducing lock contention • Narrow the lock scope • Reduce lock granularity • Lock splitting • Guard independent variables with different locks • Lock striping • Extends lock splitting to a variable-sized set of independent objects (see the sketch below) • Avoid hot fields • E.g., addQuery, addUser • Alternatives to exclusive locks • ReadWriteLocks • Avoid object pooling • Access to the pool itself requires coordination and becomes a point of contention
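A sketch of lock striping, loosely modeled on the usual striped hash-map example (the class name and constants are illustrative): the buckets are guarded by a fixed set of locks chosen by hash, so threads touching different stripes do not contend.

```java
// Hash map whose buckets are guarded by N_LOCKS locks rather than one.
public class StripedMap {
    private static final int N_LOCKS = 16;
    private final Node[] buckets;
    private final Object[] locks;

    private static class Node {
        final Object key;
        Object value;
        Node next;
        Node(Object key, Object value, Node next) {
            this.key = key; this.value = value; this.next = next;
        }
    }

    public StripedMap(int numBuckets) {
        buckets = new Node[numBuckets];
        locks = new Object[N_LOCKS];
        for (int i = 0; i < N_LOCKS; i++) {
            locks[i] = new Object();
        }
    }

    private int hash(Object key) {
        return Math.abs(key.hashCode() % buckets.length);
    }

    public Object get(Object key) {
        int h = hash(key);
        synchronized (locks[h % N_LOCKS]) {       // lock only this bucket's stripe
            for (Node m = buckets[h]; m != null; m = m.next) {
                if (m.key.equals(key)) {
                    return m.value;
                }
            }
        }
        return null;
    }

    public void put(Object key, Object value) {
        int h = hash(key);
        synchronized (locks[h % N_LOCKS]) {
            for (Node m = buckets[h]; m != null; m = m.next) {
                if (m.key.equals(key)) {
                    m.value = value;
                    return;
                }
            }
            buckets[h] = new Node(key, value, buckets[h]);
        }
    }
}
```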