
ECE8833 Polymorphous and Many-Core Computer Architecture




Presentation Transcript


  1. ECE8833 Polymorphous and Many-Core Computer Architecture Lecture 7 Lock Elision and Transactional Memory Prof. Hsien-Hsin S. Lee School of Electrical and Computer Engineering

  2. Speculative Lock Elision (SLE) & Speculative Synchronization

  3. Lock May Not Be Needed • An OoO core won't speculate beyond a lock acquisition, so critical section (CS) executions are serialized • In the example below, if the condition fails, no shared data is updated • Potential thread-level parallelism is lost

  Thread 1 and Thread 2 each execute:

      LOCK(queue);
      if (!search_queue(input))
          enqueue(input);
      UNLOCK(queue);

  How to detect such hidden parallelism? (A concrete rendering follows.)
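
  A minimal C/pthreads rendering of the slide's example; the queue implementation and names are illustrative, not from the SLE paper. In the common case where the search hits, the critical section writes no shared data, so two threads could have executed it in parallel:

      #include <pthread.h>
      #include <stdbool.h>

      static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
      static int queue[128];
      static int count = 0;

      /* Linear scan; must be called with queue_lock held */
      static bool search_queue(int input) {
          for (int i = 0; i < count; i++)
              if (queue[i] == input)
                  return true;
          return false;
      }

      void insert_if_absent(int input) {
          pthread_mutex_lock(&queue_lock);
          if (!search_queue(input))     /* if found, the CS performs no shared write */
              queue[count++] = input;   /* enqueue */
          pthread_mutex_unlock(&queue_lock);
      }

  SLE would elide queue_lock entirely and only fall back to really acquiring it when two threads' accesses actually conflict.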

  4. Bottom Line • Appearance of instantaneous change (i.e., atomicity) • The lock can be elided if • Data read in the CS is not modified by other threads • Data written in the CS is not read by other threads • On any violation of the above, the instructions in the CS are not committed

  5. Speculative Lock Elision (SLE) [Rajwar & Goodman, MICRO-34] • Hardware-based scheme (no ISA extension) • Dynamically identifies synchronization operations • Predicts them to be unnecessary • Elides them • When speculation is wrong, recovers using the existing cache coherence mechanism

  6. SLE Scenario • Silent store pair • stl_c (store on lock flag): performs the "lock acquire" (lock = 1) • stl (regular store): performs the "lock release" (lock = 0) • Why silent? The "release" undoes the write performed by the "acquire" • Goal • Elide this silent store pair • Speculate that all memory operations inside the critical section occur atomically • Buffer store results during execution within the critical section (see the sketch below)
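
  A C11-atomics sketch of the same idea (the Alpha ldl_l/stl_c load-locked/store-conditional details are abstracted into atomic_exchange; this illustrates the silent pair, not the paper's hardware):

      #include <stdatomic.h>

      atomic_int lock = 0;

      void acquire(void) {
          while (atomic_exchange(&lock, 1) != 0)  /* "acquire": store lock = 1 */
              ;                                   /* spin while held */
      }

      void release(void) {
          atomic_store(&lock, 0);                 /* "release": store lock = 0 */
      }

  The release's store of 0 exactly undoes the acquire's store of 1; if no other thread observes the lock variable in between, both stores can be elided.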

  7. Predict a Lock Acquire • A lock-predictor detects ldl_l/stl_c pairs • View “elided lock acquire” as making a “branch prediction” • Buffer register and memory state until SLE is validated • View “elided lock release” as a “branch outcome resolution”

  8. Speculation During Critical Section • Speculative register state, using either of the following • ROB • Critical section needs to be smaller than the ROB • Instructions cannot speculatively retire • Register checkpoint • Done once after the elided lock acquire • Allows speculative retirement of registers • Speculative memory state • Use the write-buffer • Multiple writes can be collapsed inside the write-buffer • The write-buffer cannot be flushed prior to the elided lock release • Roll back the state on mis-speculation

  9. Mis-speculation Triggers • Atomicity violation • Use the existing coherence protocol, with the following two basic triggers • Any external invalidation to an accessed line • Any external request to access an "exclusive" line • Use the LSQ if the ROB approach is used (in-flight CS instructions cannot retire and will be checked via snooping) • Add an access bit to the cache if the checkpoint approach is used • Violation due to limited resources • The write-buffer fills before the elided lock release • The ROB fills before the elided lock release • Uncached access events

  10. Microbenchmark Result [Rajwar & Goodman, MICRO-34]

  11. Percentage of Dynamic Locks Elided [Rajwar & Goodman, MICRO-34]

  12. Speculative Synchronization [Martinez et al. ASPLOS-02] • Similar rationale • Synchronization may be too conservative • Bypass synchronization • Off-load synchronization operations from the processor to a Speculative Synchronization Unit (SSU) • Use a "speculative thread" to pass • Active barriers • Busy locks • Unset flags • TLS (thread-level speculation) hardware • Disambiguates data violations • Rolls back • Always keep at least one "safe" thread, to guarantee forward progress in case • The speculative buffer overflows • An actual conflict occurs

  13.–17. Safe Speculative Lock Example (Slide Source: Jose Martinez) • [Figure sequence: threads A–E contend for the same lock. A acquires it and runs the critical section as the non-speculative "safe" thread; B, C, D, and E speculate past the ACQUIRE and execute the critical section concurrently. E even completes the critical section and continues past the RELEASE while still speculative. When the lock is handed over, C becomes the new "safe" thread and "lock owner".]

  18. Hardware Support for Speculative Synchronization • [Figure: processor with its L1/L2 caches and an SSU beside the L1.] • The SSU keeps the synchronization variable under speculation, with Acquire (A) and Release (R) bits plus its own logic; it sets the Acquire bit and takes over the job of "acquiring the lock" • Each cache line has a Speculative bit indicating speculative memory operations • Upon hitting a "lock acquire" instruction, a library call is invoked to issue a request to the SSU, and the processor moves on past the lock into speculative execution

  19. Speculative Lock Request • Processor Side • Program SSU for speculative lock • Checkpoint register file • Speculative Synchronization Unit (SSU) Side • Initiate Test&Test&Set loop on lock variable • Use caches as speculative buffer (like TLS) • Set “Speculative bit” in lines accessed speculatively Slide Source: Jose Martinez

  20. Lock Acquire • The SSU acquires the lock (i.e., its T&S succeeds) • Clears all speculative bits • Becomes idle • The lock is released (by a regular store) later by the processor

  21. Release While Speculative • The processor issues the release while the SSU is still trying to acquire the lock • The SSU intercepts the release (store) by the processor • The SSU toggles the Release bit — the thread is "already done" • The SSU can pretend that ownership has been acquired and released (although it never happened) • The Acquire and Release bits are cleared • All speculative bits in the caches are cleared

  22. Violation Detection • Rely on the underlying cache coherence protocol • A thread receiving an external invalidation • An external read for a local dirty cache line • If the accessed line is *not* marked speculative → the normal coherence protocol applies • If a *speculative thread* receives an external message for a line marked speculative • The SSU squashes the local thread • All dirty lines w/ speculative bits are gang-invalidated • All speculative bits are cleared • The processor restores the check-pointed state • The lock owner is never squashed (since none of its cache lines are marked speculative)

  23. Speculative Synchronization Result • Average sync time reduction: 40% • Execution time reduction up to 15%, average 7.5%

  24. Transactional Memory

  25. Current Parallel Programming Model • Shared data consistency via "locks" • Fine-grained locks • Error prone • Deadlock prone • Overhead • Coarse-grained locks • Serialize threads • Prevent parallelism

  // WITH LOCKS
  void move(T s, T d, Obj key) {
      LOCK(s);
      LOCK(d);
      tmp = s.remove(key);
      d.insert(key, tmp);
      UNLOCK(d);
      UNLOCK(s);
  }

  Thread 0 calls move(a, b, key1) while Thread 1 calls move(b, a, key2): DEADLOCK! (and the threads can't abort). Code example source: Mark Hill @ Wisconsin. (The conventional fix is sketched below.)
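
  The conventional fix is to impose a global acquisition order on the locks; a sketch in the slide's own pseudocode, where lock_id is a hypothetical helper returning a unique rank per lock:

      // WITH ORDERED LOCKS
      void move(T s, T d, Obj key) {
          if (lock_id(s) < lock_id(d)) { LOCK(s); LOCK(d); }
          else                         { LOCK(d); LOCK(s); }
          tmp = s.remove(key);
          d.insert(key, tmp);
          UNLOCK(d);
          UNLOCK(s);
      }

  Now both threads acquire the two locks in the same order, so neither can hold one lock while waiting forever for the other — but the programmer still had to spot the hazard and restructure the code.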

  26. Parallel Software Problems • Parallel systems are often programmed with • Synchronization through barriers • Shared objects access control through locks • Lock granularity and organization must balance performance and correctness • Coarse-grain locking: Lock contention • Fine-grain locking: Extra overhead • Must be careful to avoid deadlocks or data races • Must be careful not to leave anything unprotected for correctness • Performance tuning is not intuitive • Performance bottlenecks are related to low level events • E.g. false sharing, coherence misses • Feedback is often indirect (cache lines, rather than variables)

  27. Parallel Hardware Complexity (TCC’s view) • Cache coherence protocols are complex • Must track ownership of cache lines • Difficult to implement and verify all corner cases • Consistency protocols are complex • Must provide rules to correctly order individual loads/stores • Difficult for both hardware and software • Current protocols rely on low latency, not bandwidth • Critical short control messages on ownership transfers • Latency of short messages unlikely to scale well in the future • Bandwidth is likely to scale much better • High speed interchip connections • Multicore (CMP) = on-chip bandwidth

  28. What do we want? • A shared memory system with • A simple, easy programming model (unlike message passing) • A simple, low-complexity hardware implementation (unlike shared memory) • Good performance

  29. Lock Freedom • Why are locks bad? • Common problems with conventional locking mechanisms in concurrent systems • Priority inversion: a low-priority process is preempted while holding a lock needed by a high-priority process • Convoying: a process holding a lock is de-scheduled (e.g., page fault, quantum expired), and no forward progress is possible for the other processes capable of running • Deadlock (or livelock): processes attempt to lock the same set of objects in different orders (could be programmer bugs) • Error-prone

  30. Using Transactions • What is a transaction? • A sequence of instructions that is guaranteed to execute and complete only as an atomic unit:

      Begin Transaction
        Inst #1
        Inst #2
        Inst #3
        …
      End Transaction

  • A transaction satisfies the following properties • Serializability: transactions appear to execute serially • Atomicity (or failure-atomicity): a transaction either • commits its changes when complete, making them visible to all; or • aborts, discarding its changes (it will retry) • Isolation: concurrently executing threads cannot affect the result of a transaction, so a transaction produces the same result as if no other task were executing (see the sketch below)
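
  For contrast with the locking version on slide 25, here is the same move() written with a transaction; a sketch using GCC's __transaction_atomic construct (compiled with -fgnu-tm), with the slide's pseudocode types carried over:

      // WITH A TRANSACTION
      void move(T s, T d, Obj key) {
          __transaction_atomic {
              tmp = s.remove(key);   /* either both updates commit ... */
              d.insert(key, tmp);    /* ... or neither becomes visible */
          }
      }

  There is no lock ordering to reason about: if two concurrent move() calls conflict, one aborts, discards its changes, and retries.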

  31. TCC (Stanford) [Hammond et al. ISCA 2004] • Transactional Coherence and Consistency • Programmer-defined groups of instructions within a program:

      Begin Transaction    ← start buffering results
        Inst #1
        Inst #2
        Inst #3
        …
      End Transaction      ← commit results now

  • Machine state is committed only at the end of each transaction • Each transaction must update machine state atomically, all at once • To other processors, all instructions within one transaction appear to execute only when the transaction commits • These commits impose an order on how processors may modify machine state

  32. Transaction Code Example • MIT LTM instruction set (a modern analogue follows):

      xstart: XBEGIN on_abort   // begin; on abort, control transfers to on_abort
              lw    r1, 0(r2)
              addi  r1, r1, 1
              . . .
              XEND              // commit
              . . .
      on_abort:
              …                 // back off
              j     xstart      // retry
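
  The same retry pattern can be sketched with Intel's later RTM intrinsics from immintrin.h — shown only as an analogue of LTM's XBEGIN/XEND, not as LTM itself; the function is illustrative:

      #include <immintrin.h>

      void atomic_increment(int *p) {
          for (;;) {
              unsigned status = _xbegin();    /* like XBEGIN on_abort */
              if (status == _XBEGIN_STARTED) {
                  (*p)++;                     /* transactional body */
                  _xend();                    /* like XEND: commit */
                  return;
              }
              /* falling through here is the "on_abort" path;
                 looping back is the "j xstart" retry */
          }
      }

  A production version would bound the retries and fall back to a lock, since a transaction that keeps aborting makes no forward progress on its own.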

  33. Transactional Memory • Transactions appear to execute in commit order • Flow (RAW) dependences cause transaction violation and restart • [Figure: Transaction A loads 0xdddd, …, stores 0xbeef, arbitrates, and commits. Transaction B loads 0xdddd, …, loads 0xbbbb, and is unaffected. Transaction C loads 0xbeef, which A's committed store overwrites → violation; C re-executes with the new data, loads 0xbeef again, arbitrates, and commits.]

  34. Transaction Atomicity • Initially A = 0 in memory. What are the values when T0 and T1 execute atomically? • T0: Load r = A; Add r = r + 5; Store A = r • T1: Load r = A; Add r = r + 5; Store A = r • T0 → T1: A = 10 • T1 → T0: A = 10

  35. Transaction Atomicity • [Figure: the two transactions of slide 34 on a time axis, each running its Load r = A; Add r = r + 5; Store A = r sequence as one atomic unit.]

  36. Transaction Atomicity • Initially A = 0. What are the values when T0 and T1 execute atomically? • T0: Store A = 2 • T1: Load r = A; Add r = r + 5; Store A = r • T0 → T1: A = 7 • T1 → T0: A = 2

  37. Transaction Atomicity • T1 tries to be atomic; unfortunately, some other operation modifies the shared variable A in the middle • Timeline: T1: Load r = A (r = 0) → T0: Store A = 2 (A = 2) → T1: Add r = r + 5 (r = 0 + 5 = 5) → T1: Store A = r (A = 5) • T0's update is lost, and neither serial outcome of slide 36 (A = 7 or A = 2) is produced

  38. Transaction Atomicity • Initially A = 0. What are the values when T0 and T1 execute atomically? • T0: Store A = 2 • T1: Store A = 9; Load r = A; Add r = r + 2; Store A = r • T0 → T1: A = 11 • T1 → T0: A = 2

  39. Transaction Atomicity • [Figure: three concurrent transactions on a time axis with their read/write sets — one performs Store A = Y (ReadSet = {X, Y}, WriteSet = {A}), one performs Load r = A (ReadSet = {A}, WriteSet = {}), and one has ReadSet = {B, C}, WriteSet = {A}. The writer arbitrates and commits; any transaction whose read set intersects the committed write set must restart.]

  40. Hardware Transactional Memory Taxonomy • Conflict detection • Check a write set against other threads' read sets and write sets • Lazy • Wait until the last minute (commit time) • Eager • Check on each write • Squash during the transaction • Version management • Where to put speculative data • Lazy • Into a speculative buffer (assuming the transaction will abort) • No rollback needed on abort • Eager • Into the cache hierarchy in place (assuming the transaction will commit) • No data copying needed on commit

  41. HTM Taxonomy [LogTM 2006]

  42. TCC System • Similar to prior thread-level speculation (TLS) techniques • CMU Stampede • Stanford Hydra • Wisconsin Multiscalar • UIUC speculative multithreading CMP • Loosely coupled TLS system • Completely eliminates conventional cache coherence and consistency models • No MESI-style cache coherence protocol • But requires new hardware support

  43. The TCC Cycle • Transactions run in a cycle (sketched below) • Speculatively execute code and buffer the results • Wait for commit permission • A phase provides synchronization, if necessary (transactions are assigned phase numbers; the oldest phase commits first) • Arbitrate with other processors • Commit stores together (as a packet) • Provides a well-defined write ordering • Can invalidate or update other caches • A large packet utilizes bandwidth effectively • And repeat
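
  A pseudo-C rendering of that cycle; every function below is a hypothetical stand-in for a hardware action, stubbed only so the sketch is self-contained:

      /* Hedged sketch of the per-processor TCC cycle */
      static void checkpoint_registers(void)    { /* snapshot register state for re-execution */ }
      static void execute_speculatively(void)   { /* run the transaction; stores go to a local buffer */ }
      static void wait_for_phase_turn(void)     { /* optional ordering: oldest phase commits first */ }
      static void arbitrate_for_commit(void)    { /* obtain system-wide commit permission */ }
      static void broadcast_commit_packet(void) { /* send all buffered stores as one packet */ }

      void tcc_cycle(void) {
          for (;;) {                      /* ... and repeat */
              checkpoint_registers();
              execute_speculatively();
              wait_for_phase_turn();
              arbitrate_for_commit();
              broadcast_commit_packet();
          }
      }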

  44. Advantages of TCC • Trades bandwidth for simplicity and latency tolerance • Easier to build • Not dependent on the timing/latency of loads and stores • Transactions eliminate locks • Transactions are inherently atomic • Catches most common parallel programming errors • Shared memory consistency is simplified • The conventional model sequences individual loads and stores • Now the hardware need only sequence transaction commits • Shared memory coherence is simplified • Processors may have copies of cache lines in any state (no MESI!) • Commit order implies an ownership sequence

  45. How to Use TCC • Divide code into potentially parallel tasks • Usually loop iterations • For the initial division, tasks = transactions • But transactions can be subdivided or grouped to match HW limits (buffering) • Similar to threading in conventional parallel programming, but: • We do not have to verify parallelism in advance • Locking is handled automatically • Easier to get parallel programs running correctly • The programmer then orders transactions as necessary • Ordering is implemented using phase numbers • Deadlock-free (at least one transaction is the oldest one) • Livelock-free (watchdog HW can easily insert barriers anywhere)

  46. How to Use TCC • Three common ordering scenarios • Unordered, for purely parallel tasks • Fully ordered, to specify sequential tasks (algorithm level) • Partially ordered, to insert synchronization like barriers (see the loop sketch below)
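
  A sketch of the "loop iterations become transactions" idea; t_begin/t_end and the phase argument are a hypothetical API modeled on the description above, not TCC's actual interface:

      void t_begin(int phase);   /* hypothetical: start a transaction in the given phase */
      void t_end(void);          /* hypothetical: arbitrate, then commit buffered writes */
      void process(int item);    /* application work for one iteration */

      void process_all(int item[], int n) {
          for (int i = 0; i < n; i++) {
              t_begin(0);        /* same phase for every iteration = unordered */
              process(item[i]);  /* one loop iteration = one transaction */
              t_end();
          }
      }

  Passing i as the phase would instead fully order the iterations; giving groups of iterations the same phase yields barrier-like partial ordering.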

  47. Basic TCC Transaction Control Bits (sketched as a struct below) • In each local cache • Read bits (per cache line, or per word to eliminate false sharing) • Set on speculative loads • Snooped by a committing transaction (writes by another CPU) • Modified bits (per cache line) • Set on speculative stores • Indicate what to roll back if a violation is detected • Different from the dirty bit • Renamed bits (optional) • At word or byte granularity • Indicate local updates (RAW) that do not cause a violation • Subsequent reads of lines with these bits set do NOT set read bits, because a local RAW is not considered a violation
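
  These bits can be pictured as per-line metadata; a hedged C struct sketch (field names are illustrative, not from the TCC paper):

      /* Per-cache-line TCC control state */
      struct tcc_line_state {
          unsigned read     : 1;  /* set on speculative load; a committing writer
                                     that hits this line triggers a violation */
          unsigned modified : 1;  /* set on speculative store; marks data to roll
                                     back if a violation is detected */
          unsigned renamed  : 1;  /* optional: line was locally written first, so a
                                     later local read need not set the read bit */
      };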

  48. During A Transaction Commit • Need to collect all of the modified cache lines together into a commit packet • Potential solutions • A separate write buffer, or • An address buffer maintaining a list of the line tags to be committed • Size? • Broadcast all writes out as one single (large) packet to the rest of the system

  49. Re-execute A Transaction • Rollback is needed when a transaction cannot commit • Checkpoints are needed prior to a transaction • Checkpoint memory • Use the local cache • Overflow issue: conflict or capacity misses require all the victim lines to be kept somewhere (e.g., a victim cache) • Checkpoint register state • Hardware approach: flash-copying the rename table / architectural register file • Software approach: extra instruction overhead

  50. Sample TCC Hardware • Write buffers and L1 Transaction Control Bits • Write buffer in processor, before broadcast • A broadcast bus or network to distribute commit packets • All processors see the commits in a single order • Snooping on broadcasts triggers violations, if necessary • Commit arbitration/sequence logic
