
MAMAS – Computer Architecture 234367 Lecture 9 – Out Of Order (OOO)


Presentation Transcript


  1. MAMAS – Computer Architecture 234367 Lecture 9 – Out Of Order (OOO) Dr. Avi Mendelson Some of the slides were taken from: (1) Lihu Rapoport (2) Randy Katz and (3) Patterson

  2. Agenda • Introduction • Data flow execution • Out of order execution – principles • Pentium (II, III, and P6) example

  3. Introduction • The goal is to increase the performance of a system. • To achieve that, we have discussed so far: • Adding pipe stages in order to increase the clock rate • Increasing the potential parallelism within the pipe by • Fetching, decoding, and executing several instructions in parallel • Are there any other options?

  4. Data flow vs. conventional computers • In theory, “data flow machines” have the best performance • View the program as a set of parallel operations waiting to be executed (demonstrated on the next slide) • Execute an instruction as soon as its inputs are ready • So why are computers von Neumann based and not data-flow based? • Hard to debug • Hard to write data-flow programs (a special programming language is needed in order to be efficient)

  5. Data flow execution – a different approach for high-performance computers • Data flow execution is an alternative to von Neumann execution: instructions are executed in the order dictated by their input dependencies and not in the order in which they appear in the program • Example – assume that we have as many execution units as we need:
(1) r1 ← r4 / r7
(2) r8 ← r1 + r2
(3) r5 ← r5 + 1
(4) r6 ← r6 - r3
(5) r4 ← r5 + r6
(6) r7 ← r8 * r4
[Data flow graph of instructions 1–6] • We could execute it in 3 cycles: (1), (3), and (4) have no pending inputs and run first; (2) and (5) run next; (6) runs last
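A minimal Python sketch (not from the slides; the instruction encoding below is an assumption for illustration) of how the dependency graph above yields the 3-cycle schedule: each instruction's earliest cycle is one more than the latest cycle of the instructions that produce its inputs.

# Compute the earliest cycle of each instruction from its RAW dependencies only.
# Instructions from the slide, encoded as (destination, sources).
insts = {
    1: ("r1", ["r4", "r7"]),   # r1 <- r4 / r7
    2: ("r8", ["r1", "r2"]),   # r8 <- r1 + r2
    3: ("r5", ["r5"]),         # r5 <- r5 + 1
    4: ("r6", ["r6", "r3"]),   # r6 <- r6 - r3
    5: ("r4", ["r5", "r6"]),   # r4 <- r5 + r6
    6: ("r7", ["r8", "r4"]),   # r7 <- r8 * r4
}

cycle = {}
producer = {}                  # latest (in program order) writer of each register so far
for i, (dest, srcs) in insts.items():
    deps = [producer[s] for s in srcs if s in producer]
    cycle[i] = 1 + max((cycle[d] for d in deps), default=0)
    producer[dest] = i

print(cycle)                          # {1: 1, 2: 2, 3: 1, 4: 1, 5: 2, 6: 3}
print(max(cycle.values()), "cycles")  # 3 cycles, matching the slide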

  6. Data flow execution – cont. • Can we build a machine that executes the “data flow graph”? • In the early 1970s several machines were built to work according to the data-flow graph. They were called “data flow machines”. They vanished due to the reasons mentioned before. • Solution: let the user think he/she is using a von Neumann machine, and let the system work in “data-flow mode”

  7. OOOE – General Scheme • Most modern computers use OOO execution. Most of them do the fetching and the retirement IN ORDER, but the execution is OUT OF ORDER • [Diagram: Fetch & Decode (in-order) → Instruction pool → Execute (out-of-order) → Retire/commit (in-order)]

  8. Out Of Order Execution Basic idea: • The fetch is done in program order (in-order) and fast enough to “fill out” a window of instructions. • Out of the instruction window, the system forms a data flow graph and looks for instructions that are ready to be executed: • All the data the instruction depends on is ready • Resources are available • As soon as an instruction is executed, it signals to all the instructions that depend on it that it has generated a new input. • The instructions are committed in program order to preserve the “user view” (a minimal sketch of this loop follows this slide) • Advantages: • Helps exploit Instruction Level Parallelism (ILP) • Helps cover latencies (e.g., cache miss, divide)
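A minimal Python sketch (the instruction encoding and the single-cycle latencies are assumptions for illustration) of this fetch-in-order / execute-when-ready / commit-in-order loop:

# Instructions enter the window in program order, execute as soon as every
# instruction they depend on has executed, and commit strictly in program order.
program = [("r1", ["r4"]), ("r2", ["r1"]), ("r5", ["r6"]), ("r7", ["r5"])]

# Turn register names into RAW edges (index of the producing instruction).
deps, writer = [], {}
for i, (dst, srcs) in enumerate(program):
    deps.append([writer[s] for s in srcs if s in writer])
    writer[dst] = i

done, committed, cycle = set(), [], 0
while len(committed) < len(program):
    cycle += 1
    ready = [i for i in range(len(program))
             if i not in done and all(d in done for d in deps[i])]
    done |= set(ready)                              # execute out of order
    while len(committed) < len(program) and len(committed) in done:
        committed.append(len(committed))            # retire in program order
print("cycles:", cycle, "commit order:", committed) # cycles: 2 commit order: [0, 1, 2, 3]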

  9. How to convert an “in-order” instruction flow into “data flow” • The problems: • Data flow has only RAW dependencies, while OOOE also has WAR and WAW dependencies • How to guarantee the in-order completion • The solutions: • Register renaming (based on Tomasulo's algorithm) solves the WAR and WAW dependencies • We need to “enumerate” the instructions at decode time (in order) so we know in what order to retire them

  10. Register Renaming • Hold a pool of physical registers. • Architectural registers are mapped onto physical registers • When an instruction writes to an architectural register • A free physical register is allocated from the pool • The physical register points to the architectural register • The instruction writes the value to the physical register • When an instruction reads from an architectural register • It reads the data from the latest preceding instruction that writes to the same architectural register • If no such instruction exists, it reads directly from the architectural register • When an instruction commits • It moves the value from the physical register to the architectural register it points to

  11. OOOE with Register Renaming: Example
Before renaming       After renaming
(1) r1 ← mem1         t1 ← mem1
(2) r2 ← r2 + r1      t2 ← r2 + t1
(3) r1 ← mem2         t3 ← mem2
(4) r3 ← r3 + r1      t4 ← r3 + t3
(5) r1 ← mem3         t5 ← mem3
(6) r4 ← r5 + r1      t6 ← r5 + t5
(7) r5 ← 2            t7 ← 2
(8) r6 ← r5 + 2       t8 ← t7 + 2
The original code has WAW dependencies on r1 (written by 1, 3, and 5) and a WAR dependency on r5 (read by 6, written by 7). After renaming, all the false dependencies (WAW and WAR) were removed (see the renaming sketch below)
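A minimal Python sketch of the renaming mechanism of the previous slide, applied to this example (the function name and the instruction encoding are assumptions, not the slides' implementation):

def rename(program):
    rat = {}                                       # architectural register -> latest "t" register
    out, next_t = [], 1
    for dst, srcs in program:
        new_srcs = [rat.get(s, s) for s in srcs]   # read the latest writer, else the arch reg itself
        t = f"t{next_t}"                           # allocate a fresh physical register
        next_t += 1
        rat[dst] = t                               # later readers of dst now see t
        out.append((t, new_srcs))
    return out

# The eight instructions above (constants such as the literal 2 are omitted):
prog = [("r1", ["mem1"]), ("r2", ["r2", "r1"]), ("r1", ["mem2"]), ("r3", ["r3", "r1"]),
        ("r1", ["mem3"]), ("r4", ["r5", "r1"]), ("r5", []), ("r6", ["r5"])]
for t, srcs in rename(prog):
    print(t, "<-", srcs)                           # t1..t8, matching the "after renaming" column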

  12. Executing Beyond Branches • The scheme we saw so far does not search beyond a branch • Limited to the parallelism within a basic block • A basic block is ~5 instructions long
(1) r1 ← r4 / r7
(2) r2 ← r2 + r1
(3) r3 ← r2 - 5
(4) beq r3, 0, 300
(5) r8 ← r8 + 1
If the beq is predicted Not Taken, instruction 5 can be speculatively executed • We would like to look beyond branches • But what if we execute an instruction beyond a branch and then it turns out that we predicted the wrong path? Solution: Speculative Execution

  13. Speculative Execution • Execution of instructions from a predicted (yet unsure) path • Eventually, the path may turn out to be wrong • Implementation: • Hold a pool of all “not yet executed” instructions • Fetch instructions into the pool from the predicted path • Instructions for which all operands are “ready” can be executed • An instruction may change the processor state (commit) only when it is safe • An instruction commits only when all previous (in-order) instructions have committed ⇒ instructions commit in-order • Instructions which follow a branch commit only after the branch commits • If a predicted branch is wrong, all the instructions which follow it are flushed (a minimal flush sketch follows this slide) • Register renaming helps speculative execution • Renamed registers are kept until the speculation is verified to be correct
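A minimal sketch (the pool structure and sequence numbers are assumptions) of what a misprediction flush does: every instruction younger than the wrongly predicted branch is discarded before it can commit.

# The pool keeps instructions tagged with their fetch-order sequence number.
pool = [(0, "r1 <- r4/r7"), (1, "beq r3,0,L"), (2, "r8 <- r8+1"), (3, "r2 <- r8*2")]

def flush_after(pool, branch_seq):
    # Keep the branch and everything older; younger (speculative) entries are
    # discarded, and their renamed registers / ROB entries are freed.
    return [entry for entry in pool if entry[0] <= branch_seq]

print(flush_after(pool, branch_seq=1))   # entries 2 and 3 are squashed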

  14. The magic of the modern X86 architectures (Intel, AMD, etc.) • The user views the X86 machine as a CISC architecture. • The machine supports this view by keeping the in-order parts as close as possible to the X86 view. • While moving from the in-order part (front-end) to the OOO part (execution), the hardware translates each X86 instruction into a set of uops (micro-operations), which are the internal machine operations. These operations are RISC-like (load-store based). • During this translation, the hardware performs the register renaming, so during execution it uses internal registers and not the X86 ones. The number of these registers can change from one generation to another. • While moving back from the OOO part (execution) to the in-order part (commit), the hardware translates the registers back to X86, in order to present the user with a coherent picture.

  15. OOOE Architecture: based on the Pentium II • [Block diagram: Bus Interface Unit, Instruction cache, Data cache, IFU (Instruction Fetch), ID (Instruction Decode and rename), RAT, ROB, RS (arithmetic operations), MOB (load/store operations), Retire (commit) logic, write-back bus] • In-order front-end • Fetch from the instruction cache, based on branch prediction • Decode and rename: • Translate to uops • Use the RAT table for renaming • Put ALL instructions in the ROB • Put all “arithmetic instructions” in the RS queue • Put all load/store instructions in the MOB • Out-of-order execution • Do in parallel: • Load and store operations are executed based on MOB information • Arithmetic operations are executed based on RS information • All results are written back to the ROB, while the RS and MOB “steal” the values they need from the write-back bus • In-order completion (retirement) • The retire (commit) logic moves instructions out of the ROB and updates the architectural registers

  16. Re-order Buffer (ROB) • Mechanism for keeping the in-order view of the user • Basic ROB functions: • Provides a large physical register space for register renaming • Keeps intermediate results, some of which may not be committed if the branch prediction was wrong (we will discuss this mechanism later on) • Keeps information on which “real register” the commit needs to update
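A minimal circular-buffer sketch of these ROB functions (the class and field names, and the size of 40 entries, are assumptions, not the actual Pentium II design): allocate in order at the tail, accept results whenever execution finishes, and retire only from the head and only when the head entry is done.

class ROB:
    def __init__(self, size=40):
        self.entries = [None] * size              # each slot: [arch_dest, value, done]
        self.head = self.tail = self.count = 0
        self.size = size
    def allocate(self, arch_dest):                # in order, at the tail
        assert self.count < self.size             # otherwise the front-end stalls
        idx = self.tail
        self.entries[idx] = [arch_dest, None, False]
        self.tail = (self.tail + 1) % self.size
        self.count += 1
        return idx                                # index used for renaming / write-back
    def writeback(self, idx, value):              # results may arrive out of order
        self.entries[idx][1:] = [value, True]
    def retire(self, arch_regs):                  # commit strictly from the head
        while self.count and self.entries[self.head][2]:
            dest, value, _ = self.entries[self.head]
            arch_regs[dest] = value               # architectural state is updated only here
            self.head = (self.head + 1) % self.size
            self.count -= 1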

  17. Reservation Station (RS) • Pool of all “not yet executed” uops • Holds both the uop attributes and the values of the input data • For each operand, it keeps an indication of whether it is ready • An operand that needs to be retrieved from the RRF (real register file) is always ready • An operand that waits for another uop to generate its value “listens” to the write-back bus. When the value appears on the bus (the value is always associated with the ROB entry it needs to update), all RS entries that need to consume this value “steal” it from the bus and mark the input as ready (this is done in parallel to the ROB update). • Uops whose operands are all ready can be dispatched for execution • The dispatcher chooses which of the ready uops to execute next. It can also do “forwarding”, i.e., schedule an instruction in the same cycle in which the information is written to the RS entry. • As soon as a uop completes its execution, it is deleted from the RS • If the RS is full, it stalls the decoder
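A minimal sketch of the RS behavior described above (class and field names are assumptions): an entry holds, per operand, either a ready value or the ROB entry it is waiting for, and a write-back broadcast wakes up every entry listening for that ROB entry.

class RS:
    def __init__(self):
        self.entries = []      # each: {"op": str, "src": [value or ("rob", idx)], "dest": rob_idx}
    def insert(self, op, srcs, dest_rob):
        self.entries.append({"op": op, "src": list(srcs), "dest": dest_rob})
    def broadcast(self, rob_idx, value):          # snoop the write-back bus
        for e in self.entries:
            e["src"] = [value if s == ("rob", rob_idx) else s for s in e["src"]]
    def dispatch(self):                           # pick an entry whose operands are all values
        for e in self.entries:
            if all(not (isinstance(s, tuple) and s[0] == "rob") for s in e["src"]):
                self.entries.remove(e)            # the entry is freed once it is dispatched
                return e
        return None

rs = RS()
rs.insert("add", [("rob", 0), 7], dest_rob=2)     # waits for ROB entry 0
print(rs.dispatch())                              # None: one operand is not ready yet
rs.broadcast(0, 42)                               # a load writes back ROB entry 0
print(rs.dispatch())                              # the add has both values and is dispatched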

  18. Memory Order Buffer (MOB) • Goal – handles the load and store operations. If possible, it allows out-of-order execution among memory operations • Structure similar in concept to the ROB • Every memory uop allocates a new entry in order • The address needs to be updated when it becomes known • Problem – memory dependencies cannot be fully resolved statically (memory disambiguation): • store r1,a; load r2,b ⇒ the load can advance before the store • store r1,[r3]; load r2,b ⇒ the load should wait until r3 is known • In most modern processors, loads may pass loads/stores, but stores must be executed in order (among stores) • For simplicity, this course assumes that all MOB operations are executed in order
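A minimal sketch of the disambiguation rule behind the two examples above (the function name and addresses are assumptions): a load may bypass older stores only when every older store already has a known address that differs from the load's.

def load_may_execute(load_addr, older_store_addrs):
    # older_store_addrs: addresses of older stores in program order; None = not known yet
    return all(addr is not None and addr != load_addr
               for addr in older_store_addrs)

print(load_may_execute(0x100, [0x200]))   # True:  store r1,a ; load r2,b  -> load can advance
print(load_may_execute(0x100, [None]))    # False: store r1,[r3], r3 unknown -> load must wait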

  19. An example of OOO Execution

  20. [Initial state: RAT (R0–R3), Instruction Q, ROB, MOB, RS, Execute, and Retire units – all empty]

  21. [The program enters the Instruction Q in program order: LD R1,X; R2 ← R3; R1 ← R1+R0; then I4 and I5. The pointer (◄) marks the next instruction to be renamed: LD R1,X]

  22. [LD R1,X is renamed to LD RB0,X: the RAT maps R1 to ROB entry RB0, and the load allocates MOB entry M0 and starts executing. The load takes 3 cycles. The pointer moves to R2 ← R3]

  23. [R2 ← R3 is renamed to RB1 ← R3: the RAT maps R2 to ROB entry RB1, and the uop allocates RS entry RS0. LD RB0,X is still executing]

  24. [R1 ← R1+R0 is renamed to RB2 ← RB0+R0 (its R1 source is taken from the RAT, i.e., the pending load RB0): the RAT maps R1 to ROB entry RB2, and the uop allocates RS entry RS1. RB1 ← R3 is dispatched for execution]

  25. [R2 ← R3 completes and its ROB entry is marked O.K. RB2 ← RB0+R0 cannot execute since the load data is not ready yet; the load now gets its value from memory]

  26. [Both LD R1,X and R2 ← R3 are marked OK in the ROB. RB2 ← RB0+R0 steals RB0 from the write-back bus and is dispatched for execution. I4 and I5 are renamed and allocate RS entries RS2 and RS3]

  27. [LD R1,X and R2 ← R3 retire in program order and update the architectural registers. I6 enters the Instruction Q while I4 and I5 wait in the RS]

  28. Backup
