
Computer Architecture






Presentation Transcript


  1. Computer Architecture Chapter 4 Instruction-Level Parallelism - 3 Prof. Jerry Breecher CS 240 Fall 2003

2. Chapter Overview
4.1 Compiler Techniques for Exposing ILP
4.2 Static Branch Prediction
4.3 Static Multiple Issue: VLIW
4.4 Advanced Compiler Support for ILP
4.5 Hardware Support for Exposing more Parallelism

3. Ideas To Reduce Stalls: Chapter 3 techniques vs. Chapter 4 techniques (comparison table not captured in the transcript)

4. Instruction Level Parallelism
How can compilers recognize and take advantage of ILP?
4.1 Compiler Techniques for Exposing ILP
4.3 Static Multiple Issue: VLIW
4.4 Advanced Compiler Support for ILP
4.5 Hardware Support for Exposing more Parallelism

5. Compilers and ILP: Pipeline Scheduling and Loop Unrolling
Simple Loop and its Assembler Equivalent
This is a clean and simple example!
    for (i=1; i<=1000; i++)
        x[i] = x[i] + s;
    Loop: LD    F0,0(R1)   ;F0 = vector element
          ADDD  F4,F0,F2   ;add scalar from F2
          SD    0(R1),F4   ;store result
          SUBI  R1,R1,8    ;decrement pointer 8 bytes (DW)
          BNEZ  R1,Loop    ;branch if R1 != zero
          NOP              ;delayed branch slot

6. Compilers and ILP: Pipeline Scheduling and Loop Unrolling
FP Loop Hazards
    Loop: LD    F0,0(R1)   ;F0 = vector element
          ADDD  F4,F0,F2   ;add scalar in F2
          SD    0(R1),F4   ;store result
          SUBI  R1,R1,8    ;decrement pointer 8 bytes (DW)
          BNEZ  R1,Loop    ;branch if R1 != zero
          NOP              ;delayed branch slot
Latencies in clock cycles between the instruction producing a result and the instruction using it:
    FP ALU op    -> another FP ALU op : 3
    FP ALU op    -> store double      : 2
    Load double  -> FP ALU op         : 1
    Load double  -> store double      : 0
    Integer op   -> integer op        : 0
Where are the stalls?

7. Compilers and ILP: Pipeline Scheduling and Loop Unrolling
FP Loop Showing Stalls
     1 Loop: LD    F0,0(R1)   ;F0 = vector element
     2       stall
     3       ADDD  F4,F0,F2   ;add scalar in F2
     4       stall
     5       stall
     6       SD    0(R1),F4   ;store result
     7       SUBI  R1,R1,8    ;decrement pointer 8 bytes (DW)
     8       stall
     9       BNEZ  R1,Loop    ;branch if R1 != zero
    10       stall            ;delayed branch slot
10 clocks per iteration: can we rewrite the code to minimize stalls?
(Latencies as on the previous slide: FP ALU op -> FP ALU op = 3, FP ALU op -> store double = 2, load double -> FP ALU op = 1, load double -> store double = 0, integer op -> integer op = 0.)

8. Compilers and ILP: Pipeline Scheduling and Loop Unrolling
Scheduled FP Loop Minimizing Stalls
     1 Loop: LD    F0,0(R1)
     2       SUBI  R1,R1,8
     3       ADDD  F4,F0,F2
     4       stall
     5       BNEZ  R1,Loop    ;delayed branch
     6       SD    8(R1),F4   ;offset altered when moved past SUBI
Now 6 clocks per iteration. The remaining stall is there because the SD cannot issue any earlier; the BNEZ and SD were swapped by adjusting the address offset of the SD. Next, unroll the loop 4 times to go faster.
(Latencies: FP ALU op -> FP ALU op = 3, FP ALU op -> store double = 2, load double -> FP ALU op = 1.)

9. Compilers and ILP: Pipeline Scheduling and Loop Unrolling
Unroll Loop Four Times (straightforward way)
     1 Loop: LD    F0,0(R1)
     2       stall
     3       ADDD  F4,F0,F2
     4       stall
     5       stall
     6       SD    0(R1),F4
     7       LD    F6,-8(R1)
     8       stall
     9       ADDD  F8,F6,F2
    10       stall
    11       stall
    12       SD    -8(R1),F8
    13       LD    F10,-16(R1)
    14       stall
    15       ADDD  F12,F10,F2
    16       stall
    17       stall
    18       SD    -16(R1),F12
    19       LD    F14,-24(R1)
    20       stall
    21       ADDD  F16,F14,F2
    22       stall
    23       stall
    24       SD    -24(R1),F16
    25       SUBI  R1,R1,#32
    26       BNEZ  R1,LOOP
    27       stall
    28       NOP
15 + 4 x (1 + 2) + 1 = 28 clock cycles, or 7 per iteration. Assumes the number of loop iterations is a multiple of 4. Rewrite the loop to minimize stalls.

10. Compilers and ILP: Pipeline Scheduling and Loop Unrolling
Unrolled Loop That Minimizes Stalls
     1 Loop: LD    F0,0(R1)
     2       LD    F6,-8(R1)
     3       LD    F10,-16(R1)
     4       LD    F14,-24(R1)
     5       ADDD  F4,F0,F2
     6       ADDD  F8,F6,F2
     7       ADDD  F12,F10,F2
     8       ADDD  F16,F14,F2
     9       SD    0(R1),F4
    10       SD    -8(R1),F8
    11       SD    -16(R1),F12
    12       SUBI  R1,R1,#32
    13       BNEZ  R1,LOOP
    14       SD    8(R1),F16   ; 8 - 32 = -24
14 clock cycles, or 3.5 per iteration. No stalls!
What assumptions were made when the code was moved?
• It is OK to move the store past the SUBI even though the SUBI changes the register the store uses (compensated by adjusting the offset).
• It is OK to move loads before stores only if they still get the right data.
• When is it safe for the compiler to make such changes?
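To make the transformation concrete at the source level, here is a minimal C sketch (not part of the original slides) of what a 4x-unrolled version of the x[i] = x[i] + s loop looks like, assuming the trip count is a multiple of 4. The distinct temporaries t0..t3 play the role of the distinct F-registers in the assembly above.

    #include <stdio.h>

    #define N 1000   /* assumed trip count, a multiple of 4 */

    int main(void) {
        static double x[N + 1];      /* x[1..N], as in the slides */
        double s = 3.14159;

        /* Unrolled 4x: four independent load/add/store chains per pass,
           one loop-maintenance update, one loop test. */
        for (int i = N; i > 0; i -= 4) {
            double t0 = x[i]     + s;
            double t1 = x[i - 1] + s;
            double t2 = x[i - 2] + s;
            double t3 = x[i - 3] + s;
            x[i]     = t0;
            x[i - 1] = t1;
            x[i - 2] = t2;
            x[i - 3] = t3;
        }

        printf("%f\n", x[1]);        /* keep the result live */
        return 0;
    }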

11. Compilers and ILP: Pipeline Scheduling and Loop Unrolling
Summary of Loop Unrolling Example
• Determine that it was legal to move the SD after the SUBI and BNEZ, and find the amount by which to adjust the SD offset.
• Determine that unrolling the loop would be useful by finding that the loop iterations were independent, except for the loop maintenance code.
• Use different registers to avoid unnecessary constraints that would be forced by using the same registers for different computations.
• Eliminate the extra tests and branches and adjust the loop maintenance code.
• Determine that the loads and stores in the unrolled loop can be interchanged by observing that the loads and stores from different iterations are independent. This requires analyzing the memory addresses and finding that they do not refer to the same address.
• Schedule the code, preserving any dependences needed to yield the same result as the original code.

12. Compilers and ILP: Dependencies
Compiler Perspectives on Code Movement
The compiler is concerned with dependencies in the program, not with whether a given hazard actually arises on a particular hardware pipeline.
• It tries to schedule code to avoid hazards.
• It looks for data dependencies (RAW if a hazard for the hardware):
  • Instruction i produces a result used by instruction j, or
  • Instruction j is data dependent on instruction k, and instruction k is data dependent on instruction i.
• If instructions are dependent, they cannot execute in parallel.
• Dependencies are easy to determine for registers (fixed names) but hard for memory:
  • Does 100(R4) = 20(R6)?
  • From different loop iterations, does 20(R6) = 20(R6)?
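The memory questions above are the aliasing problem. As a minimal C sketch (not from the slides) of why it matters: in the first function the compiler must assume the two arrays may overlap, so it cannot freely reorder the store to a[i] and the load of b[i]; the C99 restrict qualifier in the second function asserts that they never overlap.

    #include <stdio.h>

    /* Without restrict, a and b may alias: the load of b[i] cannot be
       moved above the store to a[i] (same question as "does 100(R4)
       equal 20(R6)?"). */
    void update(double *a, double *b, int n, double s) {
        for (int i = 0; i < n; i++) {
            a[i] = a[i] + s;
            b[i] = b[i] + a[i];
        }
    }

    /* C99 restrict promises the caller that a and b never overlap, so
       the compiler is free to reorder, unroll, and interleave accesses. */
    void update_restrict(double *restrict a, double *restrict b, int n, double s) {
        for (int i = 0; i < n; i++) {
            a[i] = a[i] + s;
            b[i] = b[i] + a[i];
        }
    }

    int main(void) {
        double a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8};
        update(a, b, 4, 0.5);
        update_restrict(a, b, 4, 0.5);
        printf("%f %f\n", a[0], b[0]);
        return 0;
    }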

13. Compilers and ILP: Data Dependencies
Compiler Perspectives on Code Movement
Where are the data dependencies?
    1 Loop: LD    F0,0(R1)
    2       ADDD  F4,F0,F2
    3       SUBI  R1,R1,8
    4       BNEZ  R1,Loop    ;delayed branch
    5       SD    8(R1),F4   ;offset altered when moved past SUBI

14. Compilers and ILP: Name Dependencies
Compiler Perspectives on Code Movement
• Another kind of dependence, called a name dependence: two instructions use the same name (register or memory location) but do not exchange data.
• Anti-dependence (WAR if a hazard for the hardware): instruction j writes a register or memory location that instruction i reads, and instruction i executes first.
• Output dependence (WAW if a hazard for the hardware): instruction i and instruction j write the same register or memory location; the ordering between the instructions must be preserved.

15. Compilers and ILP: Name Dependencies
Compiler Perspectives on Code Movement
     1 Loop: LD    F0,0(R1)
     2       ADDD  F4,F0,F2
     3       SD    0(R1),F4
     4       LD    F0,-8(R1)
     5       ADDD  F4,F0,F2
     6       SD    -8(R1),F4
     7       LD    F0,-16(R1)
     8       ADDD  F4,F0,F2
     9       SD    -16(R1),F4
    10       LD    F0,-24(R1)
    11       ADDD  F4,F0,F2
    12       SD    -24(R1),F4
    13       SUBI  R1,R1,#32
    14       BNEZ  R1,LOOP
    15       NOP
Where are the name dependencies? No data is passed through F0 between the unrolled copies, yet F0 cannot be reused at instruction 4 until the earlier uses are done. How can we remove these dependencies?
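Register renaming removes these name dependencies; using different registers per unrolled copy is exactly what the earlier "Unrolled Loop That Minimizes Stalls" slide did. A small hedged C sketch (not from the slides) of the same idea at the source level:

    #include <stdio.h>

    /* Name dependence: both statements reuse the single temporary t, so
       they must stay ordered even though no data flows between them
       through t (a WAR/WAW situation, in hardware terms). */
    static void add_reuse_temp(double *x, double s) {
        double t;
        t = x[1]; x[1] = t + s;
        t = x[0]; x[0] = t + s;
    }

    /* "Renamed" version: distinct temporaries leave only true (RAW)
       dependencies, so the two chains can be scheduled in parallel. */
    static void add_renamed(double *x, double s) {
        double t1 = x[1];
        double t0 = x[0];
        x[1] = t1 + s;
        x[0] = t0 + s;
    }

    int main(void) {
        double a[2] = {1.0, 2.0}, b[2] = {1.0, 2.0};
        add_reuse_temp(a, 3.14159);
        add_renamed(b, 3.14159);
        printf("%f %f / %f %f\n", a[0], a[1], b[0], b[1]);
        return 0;
    }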

16. Compilers and ILP: Name Dependencies
Compiler Perspectives on Code Movement
• Again, name dependencies are hard to determine for memory accesses:
  • Does 100(R4) = 20(R6)?
  • From different loop iterations, does 20(R6) = 20(R6)?
• Our example required the compiler to know that, if R1 does not change, then 0(R1) ≠ -8(R1) ≠ -16(R1) ≠ -24(R1). There were then no dependencies between some loads and stores, so they could be moved around each other.

17. Compilers and ILP: Control Dependencies
Compiler Perspectives on Code Movement
• The final kind of dependence is called a control dependence.
• Example:
    if p1 { S1; };
    if p2 { S2; };
  S1 is control dependent on p1, and S2 is control dependent on p2 but not on p1.

18. Compilers and ILP: Control Dependencies
Compiler Perspectives on Code Movement
• Two (obvious) constraints on control dependences:
  • An instruction that is control dependent on a branch cannot be moved before the branch, so that its execution is no longer controlled by the branch.
  • An instruction that is not control dependent on a branch cannot be moved to after the branch, so that its execution becomes controlled by the branch.
• Control dependencies can be relaxed to get more parallelism; we get the same effect if we preserve the order of exceptions (e.g., an address in a register is checked by a branch before it is used) and the data flow (a value in a register that depends on the branch outcome).
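As a hedged C illustration of the exception-ordering point (not from the slides): hoisting the load above the branch below can fault on a NULL pointer that the original program never dereferences, so the move is only legal with some form of speculation and recovery support.

    #include <stddef.h>

    /* Original: the load of *p is control dependent on the NULL test. */
    int sum_or_zero(const int *p) {
        if (p != NULL) {
            return *p + 1;     /* executed only when the branch allows it */
        }
        return 0;
    }

    /* Illegal "scheduling" by hand: the load now executes on every call,
       so sum_or_zero_hoisted(NULL) faults while the original returned 0. */
    int sum_or_zero_hoisted(const int *p) {
        int v = *p;            /* moved above the branch: changes exception behavior */
        if (p != NULL) {
            return v + 1;
        }
        return 0;
    }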

19. Compilers and ILP: Control Dependencies
Compiler Perspectives on Code Movement
Where are the control dependencies?
     1 Loop: LD    F0,0(R1)
     2       ADDD  F4,F0,F2
     3       SD    0(R1),F4
     4       SUBI  R1,R1,8
     5       BEQZ  R1,exit
     6       LD    F0,0(R1)
     7       ADDD  F4,F0,F2
     8       SD    0(R1),F4
     9       SUBI  R1,R1,8
    10       BEQZ  R1,exit
    11       LD    F0,0(R1)
    12       ADDD  F4,F0,F2
    13       SD    0(R1),F4
    14       SUBI  R1,R1,8
    15       BEQZ  R1,exit
    ....

20. Compilers and ILP: Loop Level Parallelism
When Is It Safe to Unroll a Loop?
Example: where are the data dependencies? (A, B, C are distinct and non-overlapping.)
    for (i=1; i<=100; i=i+1) {
        A[i+1] = A[i] + C[i];    /* S1 */
        B[i+1] = B[i] + A[i+1];  /* S2 */
    }
1. S2 uses the value A[i+1] computed by S1 in the same iteration.
2. S1 uses a value computed by S1 in an earlier iteration, since iteration i computes A[i+1], which is read in iteration i+1. The same is true of S2 for B[i] and B[i+1]. This is a "loop-carried dependence" between iterations.
• This implies that the iterations are dependent and cannot be executed in parallel.
• This was not the case in our prior example: there, each iteration was independent.
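To make the loop-carried dependence concrete, here is a small hedged C experiment (not from the slides; the test data is arbitrary): running the iterations of this loop in a different order changes the result, whereas the earlier x[i] = x[i] + s loop gives the same answer in any order.

    #include <stdio.h>

    #define N 100

    int main(void) {
        double A[N + 2], Ar[N + 2], B[N + 2], Br[N + 2], C[N + 2];
        for (int i = 0; i <= N + 1; i++) {
            A[i] = Ar[i] = i;            /* arbitrary test data */
            B[i] = Br[i] = 2 * i;
            C[i] = 0.5 * i;
        }

        /* Original iteration order. */
        for (int i = 1; i <= N; i++) {
            A[i + 1] = A[i] + C[i];          /* S1 */
            B[i + 1] = B[i] + A[i + 1];      /* S2 */
        }

        /* Reversed iteration order: only legal if iterations are independent. */
        for (int i = N; i >= 1; i--) {
            Ar[i + 1] = Ar[i] + C[i];
            Br[i + 1] = Br[i] + Ar[i + 1];
        }

        /* The loop-carried dependence makes the two results differ. */
        printf("A[%d]: in-order %.1f, reversed %.1f\n", N + 1, A[N + 1], Ar[N + 1]);
        return 0;
    }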

21. Compilers and ILP: Loop Level Parallelism
When Is It Safe to Unroll a Loop?
Example: where are the data dependencies? (A, B, C, D are distinct and non-overlapping.)
    for (i=1; i<=100; i=i+1) {
        A[i] = A[i] + B[i];      /* S1 */
        B[i+1] = C[i] + D[i];    /* S2 */
    }
1. S1 uses the B[i] computed by S2 in the previous iteration, but there is no dependence from S1 to S2. If there were, there would be a cycle in the dependencies and the loop would not be parallel. Since that other dependence is absent, interchanging the two statements will not affect the execution of S2.
2. On the first iteration of the loop, statement S1 depends on the value of B[1] computed prior to entering the loop.

22. Compilers and ILP: Loop Level Parallelism
Now Safe to Unroll the Loop? (p. 240)
OLD (the dependence on B is carried around the loop):
    for (i=1; i<=100; i=i+1) {
        A[i] = A[i] + B[i];      /* S1 */
        B[i+1] = C[i] + D[i];    /* S2 */
    }
NEW (no circular dependencies; the loop-carried dependence on B has been eliminated):
    A[1] = A[1] + B[1];
    for (i=1; i<=99; i=i+1) {
        B[i+1] = C[i] + D[i];
        A[i+1] = A[i+1] + B[i+1];
    }
    B[101] = C[100] + D[100];
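A minimal, hedged C harness (not in the original slides) that checks the transformed loop produces the same A and B arrays as the original; the array sizes and initial values are arbitrary choices for the test.

    #include <stdio.h>
    #include <string.h>

    #define N 100

    int main(void) {
        double A1[N + 2], B1[N + 2], A2[N + 2], B2[N + 2], C[N + 2], D[N + 2];
        for (int i = 0; i <= N + 1; i++) {           /* arbitrary initial data */
            A1[i] = A2[i] = i * 0.5;
            B1[i] = B2[i] = i * 0.25;
            C[i] = i * 0.125;
            D[i] = i * 0.0625;
        }

        /* Original loop (loop-carried dependence on B). */
        for (int i = 1; i <= N; i++) {
            A1[i] = A1[i] + B1[i];                   /* S1 */
            B1[i + 1] = C[i] + D[i];                 /* S2 */
        }

        /* Transformed version from the slide. */
        A2[1] = A2[1] + B2[1];
        for (int i = 1; i <= N - 1; i++) {
            B2[i + 1] = C[i] + D[i];
            A2[i + 1] = A2[i + 1] + B2[i + 1];
        }
        B2[N + 1] = C[N] + D[N];

        int same = memcmp(A1, A2, sizeof A1) == 0 && memcmp(B1, B2, sizeof B1) == 0;
        printf("results %s\n", same ? "match" : "differ");
        return 0;
    }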

23. Compilers and ILP: Loop Level Parallelism
Example 1: there are NO dependencies.
    /* *****************************************************
       This is the example on page 305 of Hennessy & Patterson,
       but running on an Intel machine
       ***************************************************** */
    #define MAX  1000
    #define ITER 100000

    int main( int argc, char *argv[] )
    {
        double x[MAX + 2];
        double s = 3.14159;
        int i, j;

        for ( i = MAX; i > 0; i-- )    /* Init array */
            x[i] = 0;

        for ( j = ITER; j > 0; j-- )
            for ( i = MAX; i > 0; i-- )
                x[i] = x[i] + s;

        return 0;
    }

24. Compilers and ILP: Loop Level Parallelism, Example 1
ICC optimized code, elapsed seconds = 0.122848 (note the counter decrements by 5: five elements per pass):
    .L2:
        fstpl 8(%esp,%edx,8)
        fldl  (%esp,%edx,8)
        fadd  %st(1), %st
        fldl  -8(%esp,%edx,8)
        fldl  -16(%esp,%edx,8)
        fldl  -24(%esp,%edx,8)
        fldl  -32(%esp,%edx,8)
        fxch  %st(4)
        fstpl (%esp,%edx,8)
        fxch  %st(2)
        fadd  %st(4), %st
        fstpl -8(%esp,%edx,8)
        fadd  %st(3), %st
        fstpl -16(%esp,%edx,8)
        fadd  %st(2), %st
        fstpl -24(%esp,%edx,8)
        fadd  %st(1), %st
        addl  $-5, %edx
        testl %edx, %edx
        jg    .L2            # Prob 99%
        fstpl 8(%esp,%edx,8)
GCC optimized code, elapsed seconds = 0.590026 (one element per pass):
    .L15:
        fldl  (%ecx,%eax)
        fadd  %st(1),%st
        decl  %edx
        fstpl (%ecx,%eax)
        addl  $-8,%eax
        testl %edx,%edx
        jg    .L15

25. Compilers and ILP: Loop Level Parallelism
Example 2: there are two dependencies here; what are they?
    // Example on Page 320
    get_current_time( &start_time );
    for ( j = ITER; j > 0; j-- ) {
        for ( i = 1; i <= MAX; i++ ) {
            A[i+1] = A[i] + C[i];
            B[i+1] = B[i] + A[i+1];
        }
    }
    get_current_time( &end_time );
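get_current_time, start_time, and end_time are not defined anywhere on the slides; a minimal sketch of the timing harness they imply (my assumption, using the POSIX gettimeofday call) might look like this:

    #include <stdio.h>
    #include <sys/time.h>

    /* Hypothetical helper matching the calls on the slides: capture the
       current wall-clock time into *t. */
    static void get_current_time(struct timeval *t) {
        gettimeofday(t, NULL);
    }

    static double elapsed_seconds(struct timeval start, struct timeval end) {
        return (end.tv_sec - start.tv_sec) + (end.tv_usec - start.tv_usec) / 1e6;
    }

    int main(void) {
        struct timeval start_time, end_time;
        volatile double x = 0.0;

        get_current_time(&start_time);
        for (long i = 0; i < 100000000L; i++)   /* stand-in for the loop under test */
            x = x + 3.14159;
        get_current_time(&end_time);

        printf("Elapsed seconds = %f\n", elapsed_seconds(start_time, end_time));
        return 0;
    }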

26. Compilers and ILP: Loop Level Parallelism, Example 2
ICC optimized code, elapsed seconds = 0.664073:
    .L4:
        fstpl 25368(%esp,%edx,8)
        fldl  8472(%esp,%edx,8)
        faddl 16920(%esp,%edx,8)
        fldl  25368(%esp,%edx,8)
        fldl  16928(%esp,%edx,8)
        fxch  %st(2)
        fstl  8480(%esp,%edx,8)
        fadd  %st, %st(1)
        fxch  %st(1)
        fstl  25376(%esp,%edx,8)
        fxch  %st(2)
        faddp %st, %st(1)
        fstl  8488(%esp,%edx,8)
        faddp %st, %st(1)
        addl  $2, %edx
        cmpl  $1000, %edx
        jle   .L4            # Prob 99%
        fstpl 25368(%esp,%edx,8)
GCC optimized code, elapsed seconds = 1.357084:
    .L55:
        fldl  -8(%esi,%eax)
        faddl -8(%edi,%eax)
        fstl  (%esi,%eax)
        faddl -8(%ecx,%eax)
        incl  %edx
        fstpl (%ecx,%eax)
        addl  $8,%eax
        cmpl  $1000,%edx
        jle   .L55
Microsoft optimized code:
    $L1225:
        fld   QWORD PTR _C$[esp+eax+40108]
        add   eax, 8
        cmp   eax, 7992
        fadd  QWORD PTR _A$[esp+eax+40100]
        fst   QWORD PTR _A$[esp+eax+40108]
        fadd  QWORD PTR _B$[esp+eax+40100]
        fstp  QWORD PTR _B$[esp+eax+40108]
        jle   $L1225

27. Compilers and ILP: Loop Level Parallelism
Example 3: what are the dependencies here?
    // Example on Page 321
    get_current_time( &start_time );
    for ( j = ITER; j > 0; j-- ) {
        for ( i = 1; i <= MAX; i++ ) {
            A[i] = A[i] + B[i];
            B[i+1] = C[i] + D[i];
        }
    }
    get_current_time( &end_time );

28. Compilers and ILP: Loop Level Parallelism, Example 3
ICC optimized code, elapsed seconds = 0.325419:
    .L6:
        fstpl 8464(%esp,%edx,8)
        fldl  8472(%esp,%edx,8)
        faddl 25368(%esp,%edx,8)
        fldl  16920(%esp,%edx,8)
        faddl 33824(%esp,%edx,8)
        fldl  8480(%esp,%edx,8)
        fldl  16928(%esp,%edx,8)
        faddl 33832(%esp,%edx,8)
        fxch  %st(3)
        fstpl 8472(%esp,%edx,8)
        fxch  %st(1)
        fstl  25376(%esp,%edx,8)
        fxch  %st(2)
        fstpl 25384(%esp,%edx,8)
        faddp %st, %st(1)
        addl  $2, %edx
        cmpl  $1000, %edx
        jle   .L6            # Prob 99%
        fstpl 8464(%esp,%edx,8)
GCC optimized code, elapsed seconds = 1.370478:
    .L65:
        fldl  (%esi,%eax)
        faddl (%ecx,%eax)
        fstpl (%esi,%eax)
        movl  -40100(%ebp),%edi
        fldl  (%edi,%eax)
        movl  -40136(%ebp),%edi
        faddl (%edi,%eax)
        incl  %edx
        fstpl 8(%ecx,%eax)
        addl  $8,%eax
        cmpl  $1000,%edx
        jle   .L65

29. Compilers and ILP: Loop Level Parallelism
Example 4: how many dependencies are here? (Elapsed seconds = 1.200525)
    // Example on Page 322
    get_current_time( &start_time );
    for ( j = ITER; j > 0; j-- ) {
        A[1] = A[1] + B[1];
        for ( i = 1; i <= MAX - 1; i++ ) {
            B[i+1] = C[i] + D[i];
            A[i+1] = A[i+1] + B[i+1];
        }
        B[101] = C[100] + D[100];
    }
    get_current_time( &end_time );

30. Compilers and ILP: Loop Level Parallelism, Example 4
GCC optimized code, elapsed seconds = 1.200525:
    .L75:
        movl  -40136(%ebp),%edi
        fldl  -8(%edi,%eax)
        faddl -8(%esi,%eax)
        movl  -40104(%ebp),%edi
        fstl  (%edi,%eax)
        faddl (%ecx,%eax)
        incl  %edx
        fstpl (%ecx,%eax)
        addl  $8,%eax
        cmpl  $999,%edx
        jle   .L75
Microsoft optimized code:
    $L1239
        fld   QWORD PTR _D$[esp+eax+40108]
        add   eax, 8
        cmp   eax, 7984            ;00001f30H
        fadd  QWORD PTR _C$[esp+eax+40100]
        fst   QWORD PTR _B$[esp+eax+40108]
        fadd  QWORD PTR _A$[esp+eax+40108]
        fstp  QWORD PTR _A$[esp+eax+40108]
        jle   SHORT $L1239

31. Compilers and ILP: Loop Level Parallelism, Example 4 (continued)
ICC optimized code, elapsed seconds = 0.359232:
    .L8:
        fstpl 8472(%esp,%edx,8)
        fldl  16920(%esp,%edx,8)
        faddl 33824(%esp,%edx,8)
        fldl  8480(%esp,%edx,8)
        fldl  16928(%esp,%edx,8)
        faddl 33832(%esp,%edx,8)
        fldl  8488(%esp,%edx,8)
        fldl  16936(%esp,%edx,8)
        faddl 33840(%esp,%edx,8)
        fldl  8496(%esp,%edx,8)
        fxch  %st(5)
        fstl  25376(%esp,%edx,8)
        fxch  %st(3)
        fstl  25384(%esp,%edx,8)
        fxch  %st(1)
        fstl  25392(%esp,%edx,8)
        fxch  %st(3)
        faddp %st, %st(4)
        fxch  %st(3)
        fstpl 8480(%esp,%edx,8)
        faddp %st, %st(2)
        fxch  %st(1)
        fstpl 8488(%esp,%edx,8)
        faddp %st, %st(1)
        addl  $3, %edx
        cmpl  $999, %edx
        jle   .L8
        fstpl 8472(%esp,%edx,8)

32. Static Multiple Issue
Multiple issue is the ability of the processor to start more than one instruction in a given cycle.
Flavor I: Superscalar processors issue a varying number of instructions per clock (1 to 8) and can be either statically scheduled (by the compiler) or dynamically scheduled (by the hardware, e.g., Tomasulo). Examples: IBM PowerPC, Sun UltraSPARC, DEC Alpha, HP 8000.

33. Multiple Issue: Issuing Multiple Instructions/Cycle
Flavor II: VLIW (Very Long Instruction Word) issues a fixed number of instructions (4 to 16), formatted either as one very large instruction or as a fixed packet of smaller instructions; the compiler schedules the code and places operations into the wide templates.
• Joint HP/Intel agreement, 1999/2000
• Intel Architecture-64 (IA-64): 64-bit addresses
• Style: "Explicitly Parallel Instruction Computing" (EPIC)

34. Multiple Issue: Issuing Multiple Instructions/Cycle
Flavor II, continued:
• 3 instructions per 128-bit "group" (bundle); a template field indicates whether the instructions are dependent or independent
• Smaller code size than old VLIW, larger than x86/RISC
• Groups can be linked to show independence across more than 3 instructions
• 64 integer registers + 64 floating-point registers
• Not separate register files per functional unit, as in old VLIW
• Hardware checks dependencies (interlocks, so binary compatibility is preserved over time)
• Predicated execution (select 1 out of 64 1-bit flags): perhaps 40% fewer mispredictions?
• IA-64 is the name of the instruction set architecture; EPIC is the style
• Merced was the name of the first implementation (1999/2000?)
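As a rough, hedged illustration of the 128-bit bundle described above, here is a C sketch (not from the course) that models a bundle as a 5-bit template plus three 41-bit instruction slots, which is the published IA-64 layout; the field accessors are hypothetical helpers for exploration only.

    #include <stdint.h>
    #include <stdio.h>

    /* One 128-bit IA-64 bundle, modeled as two 64-bit halves.
       Assumed layout (bit positions within the bundle):
         bits  0..4   : 5-bit template (slot types, stop bits)
         bits  5..45  : instruction slot 0 (41 bits)
         bits 46..86  : instruction slot 1 (41 bits)
         bits 87..127 : instruction slot 2 (41 bits) */
    typedef struct {
        uint64_t lo;
        uint64_t hi;
    } ia64_bundle;

    static unsigned bundle_template(const ia64_bundle *b) {
        return (unsigned)(b->lo & 0x1F);
    }

    static uint64_t bundle_slot0(const ia64_bundle *b) {
        return (b->lo >> 5) & ((1ULL << 41) - 1);
    }

    int main(void) {
        ia64_bundle b = { 0x0123456789ABCDEFULL, 0xFEDCBA9876543210ULL };
        printf("template=%u slot0=%llx\n", bundle_template(&b),
               (unsigned long long)bundle_slot0(&b));
        return 0;
    }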

35. Multiple Issue: A Superscalar Version of MIPS
Issuing Multiple Instructions/Cycle
• In our MIPS example, we can issue 2 instructions per cycle: one floating-point instruction and one of anything else.
  - Fetch 64 bits per clock cycle; integer instruction on the left, FP on the right
  - Can only issue the 2nd instruction if the 1st instruction issues
  - Need more ports on the FP registers to do an FP load and an FP op as a pair
    Int. instruction  IF ID EX MEM WB
    FP  instruction   IF ID EX MEM WB
    Int. instruction     IF ID EX MEM WB
    FP  instruction      IF ID EX MEM WB
    Int. instruction        IF ID EX MEM WB
    FP  instruction         IF ID EX MEM WB
• A 1-cycle load delay now delays 3 instructions in the superscalar machine: the instruction in the right half of the pair can't use the result, nor can the instructions in the next issue slot.

36. Multiple Issue: A Superscalar Version of MIPS
Unrolled Loop That Minimizes Stalls for the Scalar Machine
     1 Loop: LD    F0,0(R1)
     2       LD    F6,-8(R1)
     3       LD    F10,-16(R1)
     4       LD    F14,-24(R1)
     5       ADDD  F4,F0,F2
     6       ADDD  F8,F6,F2
     7       ADDD  F12,F10,F2
     8       ADDD  F16,F14,F2
     9       SD    0(R1),F4
    10       SD    -8(R1),F8
    11       SD    -16(R1),F12
    12       SUBI  R1,R1,#32
    13       BNEZ  R1,LOOP
    14       SD    8(R1),F16   ; 8 - 32 = -24
14 clock cycles, or 3.5 per iteration.
Latencies assumed: LD to ADDD, 1 cycle; ADDD to SD, 2 cycles.

37. Multiple Issue: A Superscalar Version of MIPS
Loop Unrolling in the Superscalar Machine
           Integer instruction    FP instruction      Clock cycle
    Loop:  LD   F0,0(R1)                                   1
           LD   F6,-8(R1)                                   2
           LD   F10,-16(R1)       ADDD F4,F0,F2             3
           LD   F14,-24(R1)       ADDD F8,F6,F2             4
           LD   F18,-32(R1)       ADDD F12,F10,F2           5
           SD   0(R1),F4          ADDD F16,F14,F2           6
           SD   -8(R1),F8         ADDD F20,F18,F2           7
           SD   -16(R1),F12                                 8
           SD   -24(R1),F16                                 9
           SUBI R1,R1,#40                                  10
           BNEZ R1,LOOP                                    11
           SD   8(R1),F20                                  12
• Unrolled 5 times to avoid delays (+1 due to the superscalar issue restrictions)
• 12 clocks, or 2.4 clocks per iteration

38. Multiple Issue: Multiple Instruction Issue & Dynamic Scheduling
Dynamic Scheduling in a Superscalar Machine
• Code compiled for the scalar version will run poorly on the superscalar machine.
• We may want the code to vary depending on how wide the superscalar machine is.
• Simple approach: separate Tomasulo control, with separate reservation stations, for the integer functional units/registers and for the FP functional units/registers.

39. Multiple Issue: Multiple Instruction Issue & Dynamic Scheduling
Dynamic Scheduling in a Superscalar Machine
• How do we issue two instructions per cycle and keep in-order instruction issue for Tomasulo?
  • Run the issue stage at 2x the clock rate, so that issue remains in order.
• Only FP loads might cause a dependency between integer and FP issue:
  • Replace the load reservation station with a load queue; operands must be read in the order they are fetched.
  • A load checks addresses in the store queue to avoid a RAW violation.
  • A store checks addresses in the load queue to avoid WAR and WAW violations.

40. Multiple Issue: Multiple Instruction Issue & Dynamic Scheduling
Performance of the Dynamic Superscalar Machine (entries are clock-cycle numbers)
    Iter.  Instruction          Issues   Executes   Writes result
      1    LD   F0,0(R1)           1         2            4
      1    ADDD F4,F0,F2           1         5            8
      1    SD   0(R1),F4           2         9
      1    SUBI R1,R1,#8           3         4            5
      1    BNEZ R1,LOOP            4         5
      2    LD   F0,0(R1)           5         6            8
      2    ADDD F4,F0,F2           5         9           12
      2    SD   0(R1),F4           6        13
      2    SUBI R1,R1,#8           7         8            9
      2    BNEZ R1,LOOP            8         9
About 4 clocks per iteration; branches and decrements still take 1 clock cycle each.

41. Multiple Issue: VLIW
Loop Unrolling in VLIW
    Memory ref 1      Memory ref 2      FP operation 1    FP operation 2    Int. op/branch    Clock
    LD F0,0(R1)       LD F6,-8(R1)                                                              1
    LD F10,-16(R1)    LD F14,-24(R1)                                                            2
    LD F18,-32(R1)    LD F22,-40(R1)    ADDD F4,F0,F2     ADDD F8,F6,F2                         3
    LD F26,-48(R1)                      ADDD F12,F10,F2   ADDD F16,F14,F2                       4
                                        ADDD F20,F18,F2   ADDD F24,F22,F2                       5
    SD 0(R1),F4       SD -8(R1),F8      ADDD F28,F26,F2                                         6
    SD -16(R1),F12    SD -24(R1),F16                                                            7
    SD -32(R1),F20    SD -40(R1),F24                                        SUBI R1,R1,#48      8
    SD -0(R1),F28                                                           BNEZ R1,LOOP        9
• Unrolled 7 times to avoid delays
• 7 results in 9 clocks, or 1.3 clocks per iteration
• Need more registers to use VLIW effectively

42. Multiple Issue: Limitations With Multiple Issue
Limits to Multi-Issue Machines
• Inherent limitations of ILP:
  • About 1 branch in every 5 instructions: how do we keep a 5-way VLIW busy?
  • Latencies of the functional units mean many operations must be scheduled.
  • Need about (pipeline depth x number of functional units) independent operations to keep the machine busy.
• Difficulties in building the hardware:
  • Duplicate functional units to get parallel execution.
  • More ports on the register file (the VLIW example needs 6 read and 3 write ports for the integer registers, and 6 read and 4 write ports for the FP registers).
  • More ports to memory.
  • Superscalar decode complexity and its impact on clock rate and pipeline depth.

43. Multiple Issue: Limitations With Multiple Issue
Limits to Multi-Issue Machines
• Limitations specific to either the superscalar or the VLIW implementation:
  • Decode/issue complexity in superscalar machines.
  • VLIW code size: unrolled loops plus wasted fields in the VLIW instructions.
  • VLIW lock-step execution: one hazard stalls all the instructions in the word.
  • VLIW and binary compatibility across implementations.

44. Multiple Issue: Limitations With Multiple Issue
Multiple Issue Challenges
• While the integer/FP split is simple for the hardware, we get a CPI of 0.5 only for programs with:
  • Exactly 50% FP operations
  • No hazards
• If more instructions issue at the same time, decode and issue become harder:
  • Even a 2-way superscalar must examine 2 opcodes and 6 register specifiers, and decide whether 1 or 2 instructions can issue.
• VLIW trades instruction space for simple decoding:
  • The long instruction word has room for many operations.
  • By definition, all the operations the compiler puts in the long instruction word are independent, so they can execute in parallel.
  • E.g., 2 integer operations, 2 FP operations, 2 memory references, and 1 branch.
  • At 16 to 24 bits per field, that is 7 x 16 = 112 bits to 7 x 24 = 168 bits wide.
  • Need a compiling technique that schedules across several branches.

45. Compiler Support For ILP
• How can compilers be smart?
  1. Produce a good schedule of the code.
  2. Determine which loops might contain parallelism.
  3. Eliminate name dependencies.
• Compilers must be REALLY smart to figure out aliases; pointers in C are a real problem.
• These techniques lead to:
  • Symbolic Loop Unrolling
  • Critical Path Scheduling

46. Compiler Support For ILP: Symbolic Loop Unrolling
Software Pipelining
• Observation: if the iterations of a loop are independent, we can get ILP by taking instructions from different iterations.
• Software pipelining reorganizes the loop so that each iteration of the new loop is made from instructions chosen from different iterations of the original loop ("Tomasulo in software").

47. Compiler Support For ILP: Symbolic Loop Unrolling
Software Pipelining Example
Before: unrolled 3 times
     1  LD    F0,0(R1)
     2  ADDD  F4,F0,F2
     3  SD    0(R1),F4
     4  LD    F6,-8(R1)
     5  ADDD  F8,F6,F2
     6  SD    -8(R1),F8
     7  LD    F10,-16(R1)
     8  ADDD  F12,F10,F2
     9  SD    -16(R1),F12
    10  SUBI  R1,R1,#24
    11  BNEZ  R1,LOOP
After: software pipelined (start-up code, a new loop body, and wind-down code)
        LD    F0,0(R1)
        ADDD  F4,F0,F2
        LD    F0,-8(R1)
     1  SD    0(R1),F4    ; stores into M[i]
     2  ADDD  F4,F0,F2    ; adds to M[i-1]
     3  LD    F0,-16(R1)  ; loads M[i-2]
     4  SUBI  R1,R1,#8
     5  BNEZ  R1,LOOP
        SD    0(R1),F4
        ADDD  F4,F0,F2
        SD    -8(R1),F4
(Each steady-state iteration overlaps the SD of one original iteration, the ADDD of the next, and the LD of the one after that.)
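A hedged C-level sketch of the same idea (not from the slides): the kernel of the pipelined loop mixes stages of three consecutive logical iterations, with a prologue and epilogue that fill and drain the pipeline. The names and the 3-deep staging are illustrative choices.

    #include <stdio.h>

    #define N 16   /* illustrative trip count, at least 3 */

    int main(void) {
        static double x[N];
        double s = 3.14159;
        for (int i = 0; i < N; i++) x[i] = i;

        /* Prologue: start iterations 0 and 1 (load + add) without storing yet. */
        double loaded = x[0];
        double summed = loaded + s;
        loaded = x[1];

        /* Kernel: each pass stores for iteration i-2, adds for iteration i-1,
           and loads for iteration i, mirroring the SD/ADDD/LD mix on the slide. */
        for (int i = 2; i < N; i++) {
            x[i - 2] = summed;      /* store, oldest stage */
            summed = loaded + s;    /* add, middle stage  */
            loaded = x[i];          /* load, newest stage */
        }

        /* Epilogue: drain the last two partially finished iterations. */
        x[N - 2] = summed;
        x[N - 1] = loaded + s;

        printf("%f %f\n", x[0], x[N - 1]);
        return 0;
    }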

48. Compiler Support For ILP: Symbolic Loop Unrolling
Software Pipelining Example
• Advantages of symbolic loop unrolling (software pipelining) over plain loop unrolling:
  • Less code space.
  • The overhead is paid only once, rather than in each unrolled loop body: with unrolling, 100 iterations become 25 passes of 4 unrolled iterations each, and every pass pays the overhead.

49. Compiler Support For ILP: Critical Path Scheduling
Trace Scheduling
• Seeks parallelism across IF branches, not just across LOOP branches.
• Two steps:
  • Trace selection: find a likely sequence of basic blocks (a trace) forming a long run of straight-line code, using static or profile-based branch prediction.
  • Trace compaction: squeeze the trace into a few VLIW instructions; add bookkeeping code in case the prediction is wrong.
• The compiler undoes a bad guess (discards values in registers).
• Subtle compiler bugs can mean a wrong answer rather than just poorer performance, since there are no hardware interlocks.
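A hedged, source-level sketch of the compensation-code idea (illustrative only; real trace scheduling operates on machine code, and the function names here are invented): if profiling says the "then" path is almost always taken, work from below the branch can be hoisted into the trace, with bookkeeping on the rarely taken path to redo it.

    #include <stdio.h>

    /* Original: y is computed after the paths join. */
    int f_original(int a, int b, int likely_taken) {
        int x;
        if (likely_taken)        /* profile says: almost always true */
            x = a + b;
        else
            x = a - b;
        return x * 2;            /* work below the join */
    }

    /* After "trace scheduling" by hand: the hot trace computes x and y with
       no join; the cold path carries compensation code that recomputes what
       the trace assumed. */
    int f_traced(int a, int b, int likely_taken) {
        int x = a + b;           /* hoisted from the hot path */
        int y = x * 2;           /* moved up into the trace   */
        if (!likely_taken) {     /* off-trace: fix up the speculative work */
            x = a - b;
            y = x * 2;
        }
        return y;
    }

    int main(void) {
        printf("%d %d\n", f_original(3, 4, 1), f_traced(3, 4, 1));
        printf("%d %d\n", f_original(3, 4, 0), f_traced(3, 4, 0));
        return 0;
    }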

50. Hardware Support For Parallelism
• Software support of ILP works best when the code is predictable at compile time.
• But what if there is no predictability?
• Here we turn to hardware techniques, including:
  • Conditional or predicated instructions
  • Hardware speculation
