
Chapter 2


Presentation Transcript


  1. Chapter 2 Instruction Level Parallelism

  2. Instruction Level Parallelism (ILP) • ILP: overlap the execution of unrelated instructions • In gcc, about 17% of instructions are control transfers, so a basic block averages roughly 5 instructions plus 1 branch • Must look beyond a single basic block to get more instruction-level parallelism • Loop-level parallelism is one opportunity, exploitable in SW and HW • DLX floating point is used as the running example • Measurements of R4000 FP performance suggest that FP execution has room for improvement

  3. FP Loop: Where are the Hazards?
   Loop: LD   F0,0(R1)   ;F0 = vector element
         ADDD F4,F0,F2   ;add scalar from F2
         SD   0(R1),F4   ;store result
         SUBI R1,R1,8    ;decrement pointer 8B (DW)
         BNEZ R1,Loop    ;branch if R1 != zero
         NOP             ;delayed branch slot
   Latency in clock cycles between the instruction producing a result and the instruction using it:
         FP ALU op   -> another FP ALU op : 3
         FP ALU op   -> store double      : 2
         Load double -> FP ALU op         : 1
         Load double -> store double      : 0
         Integer op  -> integer op        : 0
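
   For reference, a minimal C sketch of what this DLX loop computes, under the usual textbook reading; the array name x, scalar s, and count n are assumptions, not given on the slide.

       /* Hypothetical C source for the FP loop above: add a scalar to
          each element of a double-precision vector, walking downward. */
       void add_scalar(double *x, long n, double s) {
           for (long i = n; i > 0; i--)     /* one LD, ADDD, SD per iteration */
               x[i] = x[i] + s;
       }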

  4. Pipeline timing for one iteration (S = stall, A1-A4 = FP add stages):
   Cycle:            1   2   3   4   5   6   7   8   9   10
   LD   F0,0(R1)     IF  ID  EX  M   W
   ADDD F4,F0,F2         IF  ID  S   A1  A2  A3  A4  M   W
   SD   0(R1),F4             IF  S   ID  EX  S   S   M   W
   SUBI R1,R1,8                      IF  ID  S   S   EX  M
   BNEZ R1,Loop                          IF  S   S   ID  EX
   NOP                                               IF  ID

  5. FP Loop Showing Stalls
   1 Loop: LD   F0,0(R1)   ;F0 = vector element
   2       stall
   3       ADDD F4,F0,F2   ;add scalar in F2
   4       stall
   5       stall
   6       SD   0(R1),F4   ;store result
   7       SUBI R1,R1,8    ;decrement pointer 8B (DW)
   8       BNEZ R1,Loop    ;branch if R1 != zero
   9       stall           ;delayed branch slot
   • 9 clocks per iteration: can the code be rewritten to minimize stalls?
   Latencies: FP ALU op -> another FP ALU op: 3; FP ALU op -> store double: 2; load double -> FP ALU op: 1

  6. Revised FP Loop Minimizing Stalls
   1 Loop: LD   F0,0(R1)
   2       stall
   3       ADDD F4,F0,F2
   4       SUBI R1,R1,8
   5       BNEZ R1,Loop   ;delayed branch
   6       SD   8(R1),F4  ;offset altered from 0 to 8 when moved past SUBI
   • 6 clocks per iteration: BNEZ and SD were swapped by changing the address offset of SD. Can unrolling the loop 4 times make the code faster?
   Latencies: FP ALU op -> another FP ALU op: 3; FP ALU op -> store double: 2; load double -> FP ALU op: 1

  7. Unroll Loop Four Times (straightforward way)
   1 Loop: LD   F0,0(R1)
   2       ADDD F4,F0,F2
   3       SD   0(R1),F4    ;drop SUBI & BNEZ
   4       LD   F6,-8(R1)
   5       ADDD F8,F6,F2
   6       SD   -8(R1),F8   ;drop SUBI & BNEZ
   7       LD   F10,-16(R1)
   8       ADDD F12,F10,F2
   9       SD   -16(R1),F12 ;drop SUBI & BNEZ
   10      LD   F14,-24(R1)
   11      ADDD F16,F14,F2
   12      SD   -24(R1),F16
   13      SUBI R1,R1,#32   ;decrement altered to 4*8
   14      BNEZ R1,LOOP
   15      NOP
   15 instructions + 4 x (1 + 2) stall cycles = 27 clock cycles, or 6.8 per original iteration. Assumes the iteration count is a multiple of 4.
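
   A hedged C sketch of the same 4x unrolling; the function and variable names are made up, and the trip count is assumed to be a multiple of 4, mirroring the slide's assumption.

       /* Hypothetical C version of the 4x-unrolled loop above. */
       void add_scalar_unrolled(double *x, long n, double s) {
           for (long i = n; i > 0; i -= 4) {
               x[i]     = x[i]     + s;
               x[i - 1] = x[i - 1] + s;
               x[i - 2] = x[i - 2] + s;
               x[i - 3] = x[i - 3] + s;  /* one SUBI/BNEZ now covers 4 elements */
           }
       }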

  8. Unrolled Loop That Minimizes Stalls
   1 Loop: LD   F0,0(R1)
   2       LD   F6,-8(R1)
   3       LD   F10,-16(R1)
   4       LD   F14,-24(R1)
   5       ADDD F4,F0,F2
   6       ADDD F8,F6,F2
   7       ADDD F12,F10,F2
   8       ADDD F16,F14,F2
   9       SD   0(R1),F4
   10      SD   -8(R1),F8
   11      SD   -16(R1),F12
   12      SUBI R1,R1,#32
   13      BNEZ R1,LOOP
   14      SD   8(R1),F16  ;8 - 32 = -24
   14 clock cycles, or 3.5 per original iteration. When is it safe to move instructions?
   • What assumptions were made when the code was moved?
   • OK to move the store past SUBI even though SUBI changes the register the store uses
   • OK to move loads before stores, provided each load still gets the right data
   • When is it safe for the compiler to make such changes?

  9. Where are the data dependencies?
   1 Loop: LD   F0,0(R1)
   2       ADDD F4,F0,F2
   3       SUBI R1,R1,8
   4       BNEZ R1,Loop   ;delayed branch
   5       SD   8(R1),F4  ;offset altered when moved past SUBI

  10. Compiler Perspectives on Code Movement
   • Another kind of dependence, called name dependence: two instructions use the same name (register or memory location) but do not exchange data
   • Antidependence (WAR is the corresponding HW hazard): instruction j writes a register or memory location that instruction i reads, and i executes first
         i: ... <- R(x) ...      j: R(x) <- ...
   • Output dependence (WAW is the corresponding HW hazard): instructions i and j write the same register or memory location; their ordering must be preserved
         i: R(x) <- ...          j: R(x) <- ...
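
   A small C-level illustration of the two name dependences; the variable names are made up for this sketch. The two uses of t exchange no data, so the second t could be renamed to a fresh variable.

       /* Hypothetical illustration of name dependences on variable t. */
       int name_dep_example(int a, int b, int c) {
           int t, r1, r2;
           t  = a + b;   /* first write of t                                  */
           r1 = t * 2;   /* read of t: antidependence (WAR) with the write below */
           t  = c - a;   /* second write of t: output dependence (WAW) with the
                            first write                                        */
           r2 = t + 3;
           return r1 + r2;
       }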

  11. Where are the name dependencies?
   1 Loop: LD   F0,0(R1)
   2       ADDD F4,F0,F2
   3       SD   0(R1),F4    ;drop SUBI & BNEZ
   4       LD   F0,-8(R1)
   5       ADDD F4,F0,F2
   6       SD   -8(R1),F4   ;drop SUBI & BNEZ
   7       LD   F0,-16(R1)
   8       ADDD F4,F0,F2
   9       SD   -16(R1),F4  ;drop SUBI & BNEZ
   10      LD   F0,-24(R1)
   11      ADDD F4,F0,F2
   12      SD   -24(R1),F4
   13      SUBI R1,R1,#32   ;alter to 4*8
   14      BNEZ R1,LOOP
   15      NOP
   How can they be removed? Give each load/add pair its own registers:
   1 Loop: LD   F0,0(R1)
   2       ADDD F4,F0,F2
   3       SD   0(R1),F4
   4       LD   F6,-8(R1)
   5       ADDD F8,F6,F2
   6       SD   -8(R1),F8
   7       LD   F10,-16(R1)
   8       ADDD F12,F10,F2
   9       SD   -16(R1),F12
   10      LD   F14,-24(R1)
   11      ADDD F16,F14,F2
   12      SD   -24(R1),F16
   13      SUBI R1,R1,#32
   14      BNEZ R1,LOOP
   15      NOP
   This is called "register renaming".

  12. Compiler Perspectives on Code Movement
   • Again, name dependences are hard to detect for memory accesses
   • Does 100(R4) = 20(R6)?
   • From different loop iterations, does 20(R6) = 20(R6)?
   • Our example required the compiler to know that, if R1 does not change, then 0(R1) ≠ -8(R1) ≠ -16(R1) ≠ -24(R1); there were then no dependences between some loads and stores, so they could be moved past each other (see the C sketch below)
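
   A hedged C analogue of the disambiguation problem; the pointer and function names are assumptions for illustration only.

       /* Hypothetical illustration: may the load of b[j] be hoisted above the
          store to a[i]?  Only if the compiler can prove a+i and b+j never
          refer to the same location (e.g. via 'restrict' or whole-program
          analysis); otherwise the reordering could return stale data. */
       double maybe_alias(double *a, double *b, int i, int j, double v) {
           a[i] = v;        /* store, like SD 0(R1),F4                     */
           return b[j];     /* load, like LD F6,-8(R1): safe to move up only
                               if a[i] and b[j] are provably distinct       */
       }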

  13. Compiler Perspectives on Code Movement
   • A final kind of dependence is called control dependence
   • Example:
         if p1 { S1; };
         if p2 { S2; };
     S1 is control dependent on p1, and S2 is control dependent on p2 but not on p1.

  14. Compiler Perspectives on Code Movement
   • Two (obvious) constraints on control dependences:
   • An instruction that is control dependent on a branch cannot be moved before the branch, so that its execution is no longer controlled by the branch.
   • An instruction that is not control dependent on a branch cannot be moved after the branch, so that its execution becomes controlled by the branch.
   • Control dependences can be relaxed to get parallelism; we get the same effect if we preserve the order of exceptions (e.g. an address in a register is checked by a branch before it is used) and the data flow (the value in a register may depend on which way a branch went)

  15. Where are the control dependencies?
   1 Loop: LD   F0,0(R1)
   2       ADDD F4,F0,F2
   3       SD   0(R1),F4
   4       SUBI R1,R1,8
   5       BEQZ R1,exit
   6       LD   F0,0(R1)
   7       ADDD F4,F0,F2
   8       SD   0(R1),F4
   9       SUBI R1,R1,8
   10      BEQZ R1,exit
   11      LD   F0,0(R1)
   12      ADDD F4,F0,F2
   13      SD   0(R1),F4
   14      SUBI R1,R1,8
   15      BEQZ R1,exit
   ....

  16. When Safe to Unroll Loop?
   • Example: where are the data dependencies? (A, B, C are distinct and non-overlapping)
         for (i = 1; i <= 100; i = i + 1) {
             A[i+1] = A[i] + C[i];    /* S1 */
             B[i+1] = B[i] + A[i+1];  /* S2 */
         }
   1. S2 uses the value A[i+1] computed by S1 in the same iteration.
   2. S1 uses a value computed by S1 in an earlier iteration, since iteration i computes A[i+1], which is read in iteration i+1. The same is true of S2 for B[i] and B[i+1]. This is a "loop-carried dependence": a dependence between iterations.
   • Implies that iterations are dependent and cannot be executed in parallel
   • Not the case for our prior example: each iteration was independent (see the contrast sketch below)
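
   For contrast, a hedged C sketch of a loop with no loop-carried dependence (the add-scalar loop from the earlier slides) next to the recurrence from S1; the array and function names are made up.

       /* Independent iterations: x[i] is touched by iteration i only,
          so the loop can be unrolled or run in parallel. */
       void no_carried_dependence(double *x, double s) {
           for (int i = 1; i <= 100; i++)
               x[i] = x[i] + s;
       }

       /* Loop-carried dependence: iteration i reads A[i], which iteration
          i-1 wrote, so iteration i cannot start before iteration i-1. */
       void carried_dependence(double *A, double *C) {
           for (int i = 1; i <= 100; i++)
               A[i+1] = A[i] + C[i];
       }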

  17. HW Schemes: Instruction Parallelism
   • Why do this in HW at run time?
   • Works when real dependences cannot be known at compile time
   • The compiler can be simpler
   • Code compiled for one machine runs well on another
   • Key idea: allow instructions behind a stall to proceed
         DIVD F0,F2,F4
         ADDD F10,F0,F8
         SUBD F12,F8,F14
     SUBD depends on neither DIVD nor ADDD, so it need not wait for them
   • Enables out-of-order execution => out-of-order completion
   • In the simple pipeline the ID stage checked for both structural and data hazards; scoreboarding separates these checks so independent instructions can proceed. The scoreboard dates to the CDC 6600 in 1963.

  18. Static Branch Prediction
   • To reorder code around branches (e.g. to fill delayed-branch slots), the compiler must predict branches statically at compile time
   • Simplest scheme: predict every branch as taken
   • Average misprediction rate = the untaken-branch frequency, about 34% for SPEC
   • A more accurate scheme predicts each branch using profile information collected from earlier runs and modifies the prediction based on the last run (accuracy differs between integer and floating-point programs)

  19. Dynamic Branch Prediction
   • Why does prediction work?
   • The underlying algorithm has regularities
   • The data being operated on has regularities
   • The instruction sequence has redundancies that are artifacts of the way humans and compilers think about problems
   • Is dynamic branch prediction better than static branch prediction?
   • There are a small number of important branches in programs, and they have dynamic behavior

  20. Dynamic Branch Prediction
   • Performance = f(accuracy, cost of misprediction)
   • The Branch History Table (BHT) is the simplest scheme
   • Lower bits of the branch PC index a table of 1-bit values
   • The bit says whether or not the branch was taken last time
   • No address check, so different branches may share an entry
   • Problem: in a loop, a 1-bit BHT causes two mispredictions per loop execution (the average loop runs about 9 iterations before exit):
   • At the end of the loop, when it exits instead of looping as before
   • On the first iteration the next time through the loop, when it predicts exit instead of looping
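
   A minimal C sketch of a 1-bit BHT; the table size and the way the PC is hashed are assumptions, not taken from the slide.

       /* Hypothetical 1-bit branch history table, indexed by low PC bits. */
       #include <stdint.h>
       #include <stdbool.h>

       #define BHT_ENTRIES 4096
       static uint8_t bht[BHT_ENTRIES];   /* 0 = predict not taken, 1 = taken */

       static bool bht_predict(uint32_t pc) {
           return bht[(pc >> 2) % BHT_ENTRIES] != 0;  /* no tag: aliasing possible */
       }

       static void bht_update(uint32_t pc, bool taken) {
           bht[(pc >> 2) % BHT_ENTRIES] = taken ? 1 : 0;  /* remember last outcome */
       }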

  21. Dynamic Branch Prediction
   • Solution: a 2-bit scheme that changes the prediction only after two mispredictions in a row
   • Red: stop, not taken; green: go, taken
   [State diagram: four states -- predicted taken (strong), predicted taken (weak), predicted not taken (weak), predicted not taken (strong) -- with transitions on taken / not taken outcomes]
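
   A hedged C sketch of the 2-bit saturating counter; the numeric encoding of the four states is an assumption.

       /* Hypothetical 2-bit saturating counter: 0,1 predict not taken; 2,3
          predict taken.  The prediction flips only after two consecutive
          mispredictions in the same direction. */
       #include <stdint.h>
       #include <stdbool.h>

       static bool predict_2bit(uint8_t counter) {
           return counter >= 2;                            /* 2 or 3 => taken */
       }

       static uint8_t update_2bit(uint8_t counter, bool taken) {
           if (taken) return counter < 3 ? counter + 1 : 3;   /* saturate at 3 */
           else       return counter > 0 ? counter - 1 : 0;   /* saturate at 0 */
       }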

  22. Dynamic Branch Prediction
   [Figure: a BHT of 2-bit counters, each following the same taken / not-taken state diagram as on the previous slide]

  23. BHT Accuracy
   • Mispredictions occur for two reasons:
   • Wrong guess for that branch
   • Got the branch history of the wrong branch when indexing the table (aliasing)
   • With a 4096-entry table, SPEC programs vary from 1% misprediction (nasa7, tomcatv) to 18% (eqntott), with spice at 9% and gcc at 12%
   • A 4096-entry table is about as good as an infinite table (measured on the Alpha 21164)

  24. Example
       if (d == 0)      /* branch b1 */
           d = 1;
       if (d == 1)      /* branch b2 */
           ...

  25. Possible sequence

  26. 1-bit predictor
   d   b1 prediction  b1 action  new b1 prediction  b2 prediction  b2 action  new b2 prediction
   2   NT              T          T                  NT              T          T
   0   T               NT         NT                 T               NT         NT
   2   NT              T          T                  NT              T          T
   0   T               NT         NT                 T               NT         NT
   With d alternating between 2 and 0, every prediction for both b1 and b2 is wrong.

  27. Correlating Branches
   • Hypothesis: recent branches are correlated; that is, the behavior of recently executed branches affects the prediction of the current branch
   • Idea: record the m most recently executed branches as taken or not taken, and use that pattern to select the proper branch history table
   • In general, an (m,n) predictor records the last m branches to select between 2^m history tables, each with n-bit counters
   • Our old 2-bit BHT is then a (0,2) predictor (see the indexing sketch below)
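
   A hedged C sketch of a (2,2) predictor's indexing and update; the table size, PC hashing, and counter encoding are assumptions for illustration.

       /* Hypothetical (2,2) correlating predictor: a 2-bit global history
          selects one of 4 tables of 2-bit counters, indexed by low PC bits. */
       #include <stdint.h>
       #include <stdbool.h>

       #define M 2                       /* bits of global history */
       #define TABLE_ENTRIES 1024        /* entries per history table */

       static uint8_t counters[1 << M][TABLE_ENTRIES];  /* 2-bit counters */
       static uint8_t global_history;                   /* last M outcomes */

       static bool corr_predict(uint32_t pc) {
           return counters[global_history][(pc >> 2) % TABLE_ENTRIES] >= 2;
       }

       static void corr_update(uint32_t pc, bool taken) {
           uint8_t *c = &counters[global_history][(pc >> 2) % TABLE_ENTRIES];
           if (taken) { if (*c < 3) (*c)++; }
           else       { if (*c > 0) (*c)--; }
           /* shift the new outcome into the global history register */
           global_history = ((global_history << 1) | (taken ? 1 : 0)) & ((1 << M) - 1);
       }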

  28. (1,1) predictor
   d   b1 prediction  b1 action  new b1 prediction  b2 prediction  b2 action  new b2 prediction
   2   NT/NT           T          T/NT               NT/NT           T          NT/T
   0   T/NT            NT         T/NT               NT/T            NT         NT/T
   2   T/NT            T          T/NT               NT/T            T          NT/T
   0   T/NT            NT         T/NT               NT/T            NT         NT/T
   Each entry is written as (prediction if the last branch was not taken) / (prediction if the last branch was taken); only the first occurrences of b1 and b2 are mispredicted.

  29. Correlating Branches
   • (2,2) predictor: the behavior of the two most recent branches selects between four predictions for the next branch, and only that prediction is updated
   [Figure: the branch address indexes 2-bit per-branch predictors; the outcomes of the last two branches (e.g. i-1 branch not taken, i-2 branch taken) select which of the four predictions is used]

  30. Correlating Branches
   • (2,2) predictor: the behavior of recent branches selects between four predictions for the next branch, updating just that prediction
   [Figure: the branch address selects a row of four 2-bit per-branch predictors; the 2-bit global branch history selects which of the four supplies the prediction]

  31. Accuracy of Different Schemes
   [Chart: frequency of mispredictions for a 4096-entry 2-bit BHT, an unlimited-entry 2-bit BHT, and a 1024-entry (2,2) BHT]

  32. Re-evaluating Correlation
   • Several of the SPEC benchmarks have fewer than a dozen branches responsible for 90% of taken branches:
       program    branch %   static #   # covering 90% of taken branches
       compress   14%        236        13
       eqntott    25%        494        5
       gcc        15%        9531       2020
       mpeg       10%        5598       532
       real gcc   13%        17361      3214
   • Real programs + OS behave more like gcc
   • Are the benefits of correlation small beyond the benchmarks?

  33. Need Address at Same Time as Prediction
   • Branch Target Buffer (BTB): the address of the branch indexes the table to get the prediction AND the branch target address (used if taken)
   • Note: must check that the entry matches this branch, since we cannot use the target address of the wrong branch
   • Return instruction addresses are predicted with a stack

  34. Branch Target Buffer (Section 2.9 of the textbook)
   [Figure: the PC of the instruction to fetch is compared (=) against the branch-address tags of the BTB entries; each of the BTB's entries also holds a T/NT prediction and a predicted PC. No match: the instruction is not predicted to be a branch. Match: the instruction is a branch and is predicted]
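
   A hedged C sketch of the BTB lookup just described; the entry count, direct-mapped organization, and field names are assumptions.

       /* Hypothetical branch target buffer: tagged entries holding a
          prediction and the predicted target PC. */
       #include <stdint.h>
       #include <stdbool.h>

       #define BTB_ENTRIES 512

       struct btb_entry {
           bool     valid;
           uint32_t branch_pc;     /* tag: address of the branch instruction */
           bool     predict_taken; /* T/NT prediction                        */
           uint32_t target_pc;     /* predicted PC if taken                  */
       };

       static struct btb_entry btb[BTB_ENTRIES];

       /* Returns true and sets *next_pc if the fetched PC hits in the BTB
          and the branch is predicted taken; otherwise fetch falls through. */
       static bool btb_lookup(uint32_t fetch_pc, uint32_t *next_pc) {
           struct btb_entry *e = &btb[(fetch_pc >> 2) % BTB_ENTRIES];
           if (e->valid && e->branch_pc == fetch_pc && e->predict_taken) {
               *next_pc = e->target_pc;
               return true;
           }
           return false;
       }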

  35. Instruction Fetch (stage)
   • Send the PC to the Instruction Memory and to the Branch Target Buffer (BTB)
   • Is an entry found in the BTB? (No / Yes: handled on the next two slides)

  36. Address is not in BTB
   • IF: no entry is found in the BTB, so fetch continues sequentially
   • ID: is the instruction a taken branch?
   • EX: if no, normal instruction execution; if yes, enter the branch address and the next (target) PC into the BTB

  37. Address is in BTB
   • IF: an entry is found in the BTB, so send out the predicted PC
   • ID: is the branch actually taken?
   • EX: if yes, the prediction was right: NO STALLS; if no, the branch was mispredicted: kill the fetched instructions, restart fetch, and delete the entry from the BTB

  38. Tournament Predictors
   • Motivation for correlating branch predictors: the 2-bit predictor failed on important branches; by adding global information, performance improved
   • Tournament predictors: use 2 predictors, one based on global information and one based on local information, and combine them with a selector
   • The hope is to select the right predictor for the right branch (or the right context of a branch)

  39. Tournament Predictor in Alpha 21264
   • 4K 2-bit counters choose between a global predictor and a local predictor
   • The global predictor also has 4K entries and is indexed by the history of the last 12 branches; each entry in the global predictor is a standard 2-bit predictor
   • 12-bit pattern: ith bit = 0 means the ith prior branch was not taken; ith bit = 1 means it was taken
   [Figure: a 12-bit global history register (bits 1..12) indexes a 4K x 2-bit table]

  40. Tournament Predictor in Alpha 21264
   • The local predictor is a 2-level predictor:
   • Top level: a local history table of 1024 10-bit entries; each 10-bit entry records the most recent 10 outcomes of the branch mapping to that entry. The 10-bit history allows patterns spanning up to 10 branches to be discovered and predicted.
   • Next level: the selected entry from the local history table indexes a 1K-entry table of 3-bit saturating counters, which provide the local prediction
   • Total size: 4K*2 + 4K*2 + 1K*10 + 1K*3 = 29K bits (~180,000 transistors)
   [Figure: 1K x 10-bit local history table feeding a 1K x 3-bit counter table]
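
   A hedged C sketch of the tournament selection idea; the table size and the exact update rule are assumptions for illustration, not the 21264's actual design.

       /* Hypothetical tournament chooser: a table of 2-bit counters decides,
          per branch, whether to trust the global or the local predictor.
          Counter >= 2 means "prefer global"; it is nudged toward whichever
          predictor was correct when the two disagree. */
       #include <stdint.h>
       #include <stdbool.h>

       #define CHOOSER_ENTRIES 4096
       static uint8_t chooser[CHOOSER_ENTRIES];   /* 2-bit saturating counters */

       static bool tournament_predict(uint32_t pc, bool global_pred, bool local_pred) {
           uint8_t c = chooser[(pc >> 2) % CHOOSER_ENTRIES];
           return (c >= 2) ? global_pred : local_pred;
       }

       static void tournament_update(uint32_t pc, bool global_pred,
                                     bool local_pred, bool taken) {
           uint8_t *c = &chooser[(pc >> 2) % CHOOSER_ENTRIES];
           if (global_pred != local_pred) {          /* move only on disagreement */
               if (global_pred == taken) { if (*c < 3) (*c)++; }
               else                      { if (*c > 0) (*c)--; }
           }
       }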

  41. Accuracy v. Size (SPEC89)

  42. [Figure: a 2-bit counter selector chooses between Predictor 1 and Predictor 2]

  43. Selective History Predictor
   [Figure: the branch address indexes an 8K x 2-bit selector (states 11/10 choose the correlator, 01/00 the non-correlator); the non-correlating predictor is 8096 x 2 bits, the correlating predictor is 2048 x 4 x 2 bits and is indexed with 2 bits of global history; the chosen 2-bit counter (11/10 taken, 01/00 not taken) gives the Taken/Not Taken prediction]

  44. Gselect and Gshare predictors
   • Keep a global register (GR) with the outcomes of the last k branches
   • Use it in conjunction with the PC to index into a Pattern History Table (PHT) of 2-bit predictors
   • Gselect: concatenate GR and PC bits
   • Gshare: XOR GR with PC bits (works better)
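
   A hedged C sketch of the two index computations; the history length and the number of PC bits used are assumptions chosen only for illustration.

       /* Hypothetical index computation for gselect vs. gshare into a PHT
          of 2-bit counters (the PHT itself is omitted). */
       #include <stdint.h>

       #define K 10                              /* history length (assumed) */

       static uint32_t gselect_index(uint32_t pc, uint32_t ghr) {
           uint32_t pc_bits = (pc >> 2) & ((1u << K) - 1);
           return (ghr << K) | pc_bits;          /* concatenate history and PC */
       }

       static uint32_t gshare_index(uint32_t pc, uint32_t ghr) {
           uint32_t pc_bits = (pc >> 2) & ((1u << K) - 1);
           return ghr ^ pc_bits;                 /* XOR mixes the two sources  */
       }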

  45. Predicated Execution
   • Avoid branch prediction by turning branches into conditionally executed instructions:
         if (x) then A = B op C else NOP
   • If the condition is false, the instruction neither stores its result nor causes an exception
   • The expanded ISAs of Alpha, MIPS, PowerPC, and SPARC have a conditional move; PA-RISC can annul any following instruction
   • Drawbacks of conditional instructions:
   • Still takes a clock cycle even if "annulled"
   • Stall if the condition is evaluated late: complex conditions reduce effectiveness since the condition becomes known late in the pipeline
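
   A hedged C illustration of the idea; the function names are made up. The branchy form must be predicted, while the branch-free form can typically be lowered to a conditional-move-style instruction.

       /* Hypothetical illustration of if-conversion / conditional move. */
       int with_branch(int x, int a, int b, int c) {
           if (x)
               a = b + c;      /* this branch must be predicted            */
           return a;
       }

       int branch_free(int x, int a, int b, int c) {
           int t = b + c;      /* compute unconditionally                  */
           return x ? t : a;   /* typically becomes a conditional move, so
                                  there is no branch left to predict       */
       }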

  46. Types of Branches

  47. Special Case: Return Addresses
   • Register-indirect branches are hard to predict, since the target address comes from a register
   • In SPEC89, 85% of such branches are procedure returns
   • Since procedure calls and returns follow a stack discipline, save the return address in a small buffer that acts like a stack: a return address stack (RAS) of 8 to 16 entries has a small miss rate
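
   A hedged C sketch of a return address stack; the depth and the wrap-around overflow policy are assumptions.

       /* Hypothetical return address stack: push on call, pop on return. */
       #include <stdint.h>

       #define RAS_DEPTH 16
       static uint32_t ras[RAS_DEPTH];
       static int ras_top;                  /* wraps, so overflow overwrites
                                               the oldest entries           */

       static void ras_push_on_call(uint32_t return_pc) {
           ras[ras_top] = return_pc;
           ras_top = (ras_top + 1) % RAS_DEPTH;
       }

       static uint32_t ras_predict_return(void) {
           ras_top = (ras_top - 1 + RAS_DEPTH) % RAS_DEPTH;
           return ras[ras_top];             /* predicted target of the return */
       }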
