
Speculative Data-Driven Multithreading (an implementation of pre-execution)



Presentation Transcript


1. Speculative Data-Driven Multithreading (an implementation of pre-execution)

Amir Roth and Gurindar S. Sohi, HPCA-7, Jan. 22, 2001

2. Pre-Execution

• Goal: high single-thread performance
• Problem: μarchitectural latencies of “problem” instructions (PIs)
  • Memory: cache misses; pipeline: mispredicted branches
• Solution: decouple μarchitectural latencies from the main thread
  • Execute copies of PI computations in parallel with the whole program (see the C sketch after this slide)
  • Copies execute PIs faster than the main thread ⇒ “pre-execute”
    • Why? Fewer instructions
  • Initiate cache misses earlier
  • Pre-compute branch outcomes and relay them to the main thread
• DDMT: an implementation of pre-execution
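To make the two-thread picture concrete, here is a minimal software sketch using the deck's EM3D running example (slide 6). The struct layout is inferred from that slide's assembly offsets, and expressing the pre-execution copy as a C function is purely illustrative: in DDMT the copy is a microarchitectural thread, not source code.

    #include <stddef.h>

    struct node {
        struct node *next;       /* offset 0:  the ldq r1, 0(r1) pointer chase */
        struct node *neighbor;   /* offset 8:  ldq r2, 8(r1)                   */
        double       val;        /* offset 16: ldt f0, 16(r1)                  */
        double       coeff;      /* offset 24: ldt f2, 24(r1)                  */
    };

    /* Main thread: the whole loop body executes for every node. */
    void main_thread(struct node *list) {
        for (struct node *n = list; n != NULL; n = n->next)
            if (n->neighbor != NULL)
                n->val -= n->neighbor->val * n->coeff;
    }

    /* Pre-execution copy: only the problem computation (the pointer
     * chase and the neighbor load).  With fewer instructions per
     * iteration it runs ahead of the main thread, starting the cache
     * misses early. */
    void pre_execute(struct node *list) {
        for (struct node *n = list; n != NULL; n = n->next)
            if (n->neighbor != NULL)
                (void)*(volatile double *)&n->neighbor->val;  /* touch early */
    }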

3. Pre-Execution is a Supplement

• Fundamentally: tolerating non-execution latencies (pipeline, memory) requires values faster than execution can provide them
• Ways of providing values faster than execution:
  • Old: behavioral prediction, i.e., table lookup (a minimal example follows this slide)
    • + small effort per value, − less than perfect (“problems”)
  • New: pre-execution, which executes fewer instructions
    • + perfect accuracy, − more effort per value
• Solution: supplement behavioral prediction with pre-execution
• Key: behavioral prediction must handle the majority of cases
  • Good news: it already does
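As a reminder of what “table lookup” means here, the sketch below shows one standard instance of behavioral prediction: a bimodal branch predictor with 2-bit saturating counters. The table size and names are assumptions for illustration, not the paper's configuration.

    #include <stdint.h>

    #define ENTRIES 4096              /* table size is an assumption */
    static uint8_t counters[ENTRIES]; /* 2-bit counters, start at 0 (strong not-taken) */

    /* Predict: one table lookup, indexed by branch PC. */
    int predict(uint64_t pc) {
        return counters[(pc >> 2) % ENTRIES] >= 2;   /* taken if 2 or 3 */
    }

    /* Train: saturate the counter toward the actual outcome. */
    void update(uint64_t pc, int taken) {
        uint8_t *c = &counters[(pc >> 2) % ENTRIES];
        if (taken && *c < 3)
            (*c)++;
        else if (!taken && *c > 0)
            (*c)--;
    }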

4. Data-Driven Multithreading (DDMT)

• DDMT: an implementation of pre-execution
  • Data-Driven Thread (DDT): the pre-executed computation of a PI
• Implementation: an extension to simultaneous multithreading (SMT)
  • SMT is a reality (Alpha 21464)
• Low static cost: minimal additional hardware
  • Pre-execution siphons execution resources
• Low dynamic cost: fine-grain, flexible bandwidth partitioning
  • Take only as much as you need
  • Minimize contention and overhead
• Paper: metrics, algorithms, and mechanics; talk: mostly mechanics

5. Talk Outline

• Working example in 3 parts
• Some details
• Numbers, numbers, numbers

6. Example.1: Identify PIs

• Use profiling to find PIs (a tally sketch follows this slide)
• Running example: same as the paper, a simplified loop from EM3D
• Few static PIs cause most dynamic “problems”
  • Good coverage with few static DDTs

Source:

    for (node = list; node; node = node->next)
        if (node->neighbor != NULL)
            node->val -= node->neighbor->val * node->coeff;

Static code:

    I1   beq  r1, I12
    I2   ldq  r2, 8(r1)
    I3   beq  r2, I10
    I4   ldt  f0, 16(r1)
    I5   ldt  f1, 16(r2)
    I6   ldt  f2, 24(r1)
    I7   mult f1, f2, f3
    I8   subt f0, f3, f0
    I9   stt  f0, 16(r1)
    I10  ldq  r1, 0(r1)
    I11  br   I1

The dynamic instruction stream repeats this body: I1, I2, I3, I4, I5, ..., I10, I11, I1, I2, I3, I4, I5, ...
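A hypothetical sketch of the profiling step, assuming a cache simulator that reports each miss with its static PC; the direct-mapped tally, the 1% threshold, and all names here are illustrative assumptions, not the paper's tool. The point is only that a handful of static PCs account for most dynamic misses.

    #include <stdint.h>
    #include <stdio.h>

    #define SLOTS 65536

    static uint64_t miss_pc[SLOTS];     /* static PC seen at this slot */
    static uint64_t miss_count[SLOTS];  /* misses charged to that PC   */
    static uint64_t total_misses;

    /* Called by the (assumed) cache simulator on every miss; a
     * direct-mapped tally, so aliasing PCs can share a slot here. */
    void miss_event(uint64_t pc) {
        uint32_t i = (uint32_t)(pc >> 2) % SLOTS;
        miss_pc[i] = pc;
        miss_count[i]++;
        total_misses++;
    }

    /* Report static PCs causing at least 1% of all dynamic misses:
     * the PI candidates. */
    void report_pis(void) {
        for (uint32_t i = 0; i < SLOTS; i++)
            if (miss_count[i] > 0 && miss_count[i] * 100 >= total_misses)
                printf("PI candidate: pc=%#llx, misses=%llu\n",
                       (unsigned long long)miss_pc[i],
                       (unsigned long long)miss_count[i]);
    }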

7. Example.2: Extract DDTs

• Examine program traces
  • Start with PIs
  • Work backwards, gathering backward slices (sketch after this slide)
  • Eventually stop. When? (see paper)
• Use the first instruction as the DDT trigger
  • Dynamic trigger instances signal a DDT fork
• Pack the last N-1 slice instructions into the DDT
• Load the DDT into the DDTC (DDT$)

DDTC (static), one DDT keyed by its trigger I10: ldq r1, 0(r1)

    I2   ldq  r2, 8(r1)
    I3   beq  r2, I10
    I5   ldt  f1, 16(r2)
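Below is an illustrative backward-slice walk over a recorded trace, simplified to register dependences only, with a fixed size cap standing in for the paper's actual stopping criterion. The instruction encoding and all names are assumptions made for the sketch.

    #include <stdint.h>

    enum { NREGS = 64, MAX_DDT = 32 };

    struct insn {
        uint64_t pc;
        int dst;        /* destination register, 0 = none */
        int src[2];     /* source registers, 0 = none     */
    };

    /* Walk backwards from the PI at trace[pi].  Keep any instruction
     * that produces a value the slice still needs; its sources become
     * needed in turn.  Returns the slice newest-first; reversing it
     * yields a DDT whose first instruction is the trigger. */
    int backward_slice(const struct insn *trace, int pi, struct insn *out) {
        int needed[NREGS] = {0};
        int n = 0;

        out[n++] = trace[pi];                    /* seed with the PI */
        needed[trace[pi].src[0]] = needed[trace[pi].src[1]] = 1;
        needed[0] = 0;                           /* reg 0 = "none" sentinel */

        for (int i = pi - 1; i >= 0 && n < MAX_DDT; i--) {
            if (trace[i].dst && needed[trace[i].dst]) {
                needed[trace[i].dst] = 0;        /* producer found */
                needed[trace[i].src[0]] = needed[trace[i].src[1]] = 1;
                needed[0] = 0;
                out[n++] = trace[i];
            }
        }
        return n;
    }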

8. Example.3: Pre-Execute DDTs

• Main thread (MT) executed a trigger instruction? Fork the DDT (μarch; sketch after this slide)
• MT and DDT execute in parallel
  • The DDT initiates the cache miss early and “absorbs” its latency
• MT integrates DDT results
  • Instructions are not re-executed ⇒ reduces contention
  • Shortens the MT critical path
  • A pre-computed branch outcome avoids a misprediction

(Figure: the MT instruction stream running alongside the DDT forked from the DDTC.)
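A sketch of the μarchitectural fork, assuming SMT-style per-thread rename maps; the structures and names are illustrative, not the paper's hardware. The key step, copying the MT rename map at the trigger, is what later lets the DDT locate live MT values.

    #include <stdint.h>
    #include <string.h>

    enum { ARCH_REGS = 64 };

    struct ddt;                          /* a packed DDT trace in the DDTC */

    struct thread_ctx {
        uint16_t map[ARCH_REGS];         /* arch reg -> physical reg        */
        const struct ddt *fetching;      /* non-NULL: sequencing from DDTC  */
    };

    /* Called when the main thread renames an instruction whose PC hits
     * a DDTC trigger: a free context inherits a copy of the MT rename
     * map and starts fetching the DDT. */
    void fork_ddt(struct thread_ctx *mt, struct thread_ctx *spare,
                  const struct ddt *hit) {
        if (spare->fetching == NULL) {   /* only if a context is free */
            memcpy(spare->map, mt->map, sizeof spare->map);
            spare->fetching = hit;
        }
    }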

9. Details.1: More About DDTs

• Composed of instructions from the original program
  • Required by integration
  • Should look like normal instructions to the processor
• Data-driven: instructions are not “sequential”
  • No explicit control-flow. How are they sequenced?
• Pack into “traces” (in the DDTC; see the entry sketch after this slide)
  • Execute all instructions (branches too), saving results for integration
  • No runaway threads, better overhead control
  • “Contain” any control-flow (e.g., unrolled loops)

(Figure: the DDT from slide 7, shown as a packed trace in the DDTC.)
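One way to picture a DDTC entry, following this slide's description: a packed trace of original-program instructions keyed by its trigger PC, with no explicit control-flow. Field names are assumptions; the 32-instruction cap echoes the evaluated configuration on slide 12.

    #include <stdint.h>

    enum { DDT_MAX_INSNS = 32 };

    struct ddtc_entry {
        uint64_t trigger_pc;        /* a dynamic instance forks this DDT      */
        uint32_t count;             /* <= DDT_MAX_INSNS: no runaway threads   */
        struct {
            uint64_t pc;            /* original PC, needed for integration    */
            uint32_t encoding;      /* the original instruction bits          */
        } insns[DDT_MAX_INSNS];     /* executed in order, branches included   */
    };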

10. Details.2: More About Integration

• Centralized physical register file: used for DDT-MT communication
  • Fork: copy the MT rename map to the DDT
  • The DDT locates MT values via map lookups; these “root” integration
• Integration: match PC/physical registers to establish the DDT-MT instruction correspondence (lookup sketch after this slide)
  • The MT “claims” physical registers allocated by the DDT
  • Modify register renaming to do this
• More on integration: the implementation of squash-reuse [MICRO-33]

(Figure: the DDT copies of ldq r1, 0(r1) and ldq r2, 8(r1) matched to their main-thread counterparts.)
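A sketch of the integration check at rename, under the slide's description: an MT instruction claims the physical output register a DDT copy already allocated when PC and physical input registers match. The table layout and names are assumptions, and a linear scan stands in for whatever indexed structure real hardware would use.

    #include <stdint.h>

    struct it_entry {               /* one entry per completed DDT instruction */
        uint64_t pc;
        uint16_t psrc[2];           /* physical regs the DDT copy read  */
        uint16_t pdst;              /* physical reg the DDT copy wrote  */
        int      valid;
    };

    /* If the MT instruction at `pc`, whose renamed sources are ps0/ps1,
     * matches a DDT copy, return that copy's pdst so the MT can map its
     * destination there and skip re-execution.  Returns 0 on no match,
     * in which case the MT renames and executes normally. */
    int integrate(const struct it_entry *it, int n, uint64_t pc,
                  uint16_t ps0, uint16_t ps1, uint16_t *pdst_out) {
        for (int i = 0; i < n; i++)
            if (it[i].valid && it[i].pc == pc &&
                it[i].psrc[0] == ps0 && it[i].psrc[1] == ps1) {
                *pdst_out = it[i].pdst;   /* claim the DDT's result */
                return 1;
            }
        return 0;
    }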

11. Details.3: More About DDT Selection

• A very important problem
• Fundamental aspects: metrics and algorithms
  • A promising start (see paper)
• Practical aspects: who implements the algorithm? How do DDTs get into the DDTC?
  • Paper: profile-driven, offline, executable annotations
  • Open question

12. Performance Evaluation

• SPEC2K, Olden; Alpha EV6, –O3 –fast
  • Chose programs with problems
• SimpleScalar-based simulation environment
  • DDT selection phase: functional simulation on a small input
  • DDT measurement phase: timing simulation on a larger input
• 8-wide, superscalar, out-of-order core
  • 128 ROB, 64 LDQ, 32 STQ, 80 RS (shared)
  • Pipe: 3 fetch, 2 rename/integrate, 2 schedule, 2 register read, 2 load
  • 32KB I$ / 64KB D$ (2-way), 1MB L2$ (4-way), memory bandwidth: 8 bytes/cycle
  • DDTC: 16 DDTs, 32 instructions (max) per DDT

13. Numbers.1: The Bottom Line

• Cache misses
  • Speedups vary, 10-15%
  • DDT “unrolling” increases latency tolerance (paper)
• Branch mispredictions
  • Speedups lower, 5-10%
  • More PIs, lower coverage
  • Branch integration != perfect branch prediction
  • Effects mix

14. Numbers.2: A Closer Look

• DDT utility: integration rates
  • Vary, mostly ~30% (low)
  • Completed DDTs: well done; not completed: a little late
  • Input-set differences; greedy DDTs, no early exits
• DDT overhead: fetch utilization
  • ~5% (reasonable)
  • Fewer MT fetches (always): contention
  • Fewer total fetches: early branch resolution

15. Numbers.3: Is Full DDMT Necessary?

• How important is integration?
  • Important for branch resolution, less so for prefetching
• How important is decoupling? What if we just priority-scheduled?
  • Extremely: no data-driven sequencing ⇒ no speedup

16. Summary

• Pre-execution: supplements behavioral prediction
  • Decouples/absorbs μarchitectural latencies of “problem” instructions
• DDMT: an implementation of pre-execution
  • An extension to SMT, with few hardware changes
  • Bandwidth-allocation flexibility reduces overhead
• Future: DDT selection
  • Fundamental: better metrics/algorithms to increase utility/coverage
  • Practical: an easy implementation
• Coming to a university/research lab near you
