
EENG 449bG/CPSC 439bG Computer Systems Lecture 3 MIPS Instruction Set & Intro to Pipelining


Presentation Transcript


  1. EENG 449bG/CPSC 439bG Computer Systems, Lecture 3: MIPS Instruction Set & Intro to Pipelining
January 20, 2004
Prof. Andreas Savvides
Spring 2004
http://www.eng.yale.edu/courses/eeng449bG

  2. The MIPS Architecture
Features:
• GPRs with a load-store architecture
• Displacement, immediate, and register indirect addressing modes
• Data sizes: 8-, 16-, 32-, and 64-bit integers and 64-bit floating point numbers
• Simple instructions: load, store, add, sub, move register-register, shift
• Compare equal, compare not equal, compare less, branch, jump, call, and return
• Fixed instruction encoding for performance, variable instruction encoding for size
• Provide at least 16 general purpose registers

  3. MIPS Architecture Features
Registers:
• 32 64-bit GPRs (R0, R1, … R31)
• Note: R0 is always 0 !!!
• 32 64-bit floating point registers (F0, F1, … F31)
Data types:
• 8-bit bytes, 16-bit half words, 32-bit words, 64-bit double words
• 32-bit single precision and 64-bit double precision floating point numbers
Addressing modes:
• Immediate: Add R4, #3 --- Regs[R4] <- Regs[R4] + 3
• Displacement: Add R4, 100(R1) --- Regs[R4] <- Regs[R4] + Mem[100 + Regs[R1]]
• Register indirect (place 0 in the displacement field), e.g. Add R4, 0(R1)
• Absolute addressing (use R0 as the base register), e.g. Add R4, 1000(R0)
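To make the Regs[]/Mem[] semantics above concrete, here is a minimal C sketch; the register and memory arrays and their contents are invented purely for illustration, they are not part of the lecture.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy state: the register file and memory contents here are invented
   purely to illustrate the addressing-mode semantics above. */
int64_t Regs[32];
int64_t Mem[4096];

int main(void) {
    Regs[0] = 0;              /* R0 is hard-wired to zero        */
    Regs[1] = 16;             /* base register for the examples  */
    Mem[100 + Regs[1]] = 7;   /* word at address 100 + Regs[R1]  */

    Regs[4] = Regs[4] + 3;                     /* Immediate:         Add R4, #3       */
    Regs[4] = Regs[4] + Mem[100 + Regs[1]];    /* Displacement:      Add R4, 100(R1)  */
    Regs[4] = Regs[4] + Mem[0 + Regs[1]];      /* Register indirect: Add R4, 0(R1)    */
    Regs[4] = Regs[4] + Mem[1000 + Regs[0]];   /* Absolute:          Add R4, 1000(R0) */

    printf("R4 = %lld\n", (long long)Regs[4]);
    return 0;
}
```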

  4. MIPS Instruction Format
op – opcode (basic operation of the instruction)
rs – first register source operand
rt – second register operand
rd – destination register operand
shamt – shift amount
funct – function
Example: LW $t0, 1200($t1)
Fields (decimal): op = 35, rs = 9 ($t1), rt = 8 ($t0), immediate = 1200
Fields (binary):  100011 | 01001 | 01000 | 0000 0100 1011 0000
Note: The numbers for these examples are from “Computer Organization & Design”, Chapter 3.
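The field packing can be checked mechanically. Below is a small C sketch assuming the standard I-format field widths (6/5/5/16 bits); the encode_i helper name is mine, not from the course. Shifting op into bits 31–26, rs into 25–21, rt into 20–16 and keeping the low 16 bits of the immediate reproduces the binary shown above.

```c
#include <stdint.h>
#include <stdio.h>

/* Pack a MIPS I-format instruction: op(6) | rs(5) | rt(5) | immediate(16).
   encode_i is an invented helper name, not from the lecture. */
uint32_t encode_i(uint32_t op, uint32_t rs, uint32_t rt, uint32_t imm) {
    return (op << 26) | (rs << 21) | (rt << 16) | (imm & 0xFFFFu);
}

int main(void) {
    /* LW $t0, 1200($t1): op = 35, rs = 9 ($t1), rt = 8 ($t0), immediate = 1200 */
    printf("0x%08X\n", encode_i(35, 9, 8, 1200));   /* prints 0x8D2804B0 */
    return 0;
}
```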

  5. MIPS Instruction Format
op – opcode (basic operation of the instruction)
rs – first register source operand
rt – second register operand
rd – destination register operand
shamt – shift amount
funct – function
Example: Add $t0, $s2, $t0
Fields (decimal): op = 0, rs = 18 ($s2), rt = 8 ($t0), rd = 8 ($t0), shamt = 0, funct = 32
Fields (binary):  000000 | 10010 | 01000 | 01000 | 00000 | 100000
Note: The numbers for these examples are from “Computer Organization & Design”, Chapter 3.
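The same idea works for the R-format example; again a sketch with an invented helper name, using the standard 6/5/5/5/5/6-bit field layout.

```c
#include <stdint.h>
#include <stdio.h>

/* Pack a MIPS R-format instruction:
   op(6) | rs(5) | rt(5) | rd(5) | shamt(5) | funct(6).
   encode_r is an invented helper name, not from the lecture. */
uint32_t encode_r(uint32_t op, uint32_t rs, uint32_t rt,
                  uint32_t rd, uint32_t shamt, uint32_t funct) {
    return (op << 26) | (rs << 21) | (rt << 16) |
           (rd << 11) | (shamt << 6) | funct;
}

int main(void) {
    /* Add $t0, $s2, $t0: op = 0, rs = 18, rt = 8, rd = 8, shamt = 0, funct = 32 */
    printf("0x%08X\n", encode_r(0, 18, 8, 8, 0, 32));   /* prints 0x02484020 */
    return 0;
}
```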

  6. MIPS Instruction Format
op – opcode (basic operation of the instruction)
rs – first register source operand
rt – second register operand
rd – destination register operand
shamt – shift amount
funct – function
Example: j 10000
Fields (decimal): op = 2, address = 10000
Fields (binary):  ? ? — you fill it in!

  7. MIPS Operations
Four broad classes supported:
• Loads and stores (figure 2.28)
  • Different data sizes: LD, LW, LH, LB, LBU …
• ALU operations (figure 2.29)
  • Add, sub, and, or …
  • They are all register-register operations
• Control flow instructions (figure 2.30)
  • Branches (conditional) and jumps (unconditional)
• Floating point operations

  8. Levels of Representation
High Level Language Program:
  temp = v[k]; v[k] = v[k+1]; v[k+1] = temp;
    (Compiler)
Assembly Language Program:
  lw $t15, 0($t2)
  lw $t16, 4($t2)
  sw $t16, 0($t2)
  sw $t15, 4($t2)
    (Assembler)
Machine Language Program:
  0000 1001 1100 0110 1010 1111 0101 1000
  1010 1111 0101 1000 0000 1001 1100 0110
  1100 0110 1010 1111 0101 1000 0000 1001
  0101 1000 0000 1001 1100 0110 1010 1111
    (Machine Interpretation)
Control Signal Specification

  9. Instruction Execution Cycle
• Instruction Fetch: obtain instruction from program storage
• Instruction Decode: determine required actions and instruction size
• Operand Fetch: locate and obtain operand data
• Execute: compute result value or status
• Result Store: deposit results in storage for later use
• Next Instruction: determine successor instruction
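As a rough illustration of this cycle, here is a C loop over a toy accumulator machine; the instruction set below is invented just for the example and is not MIPS.

```c
#include <stdio.h>

/* A toy accumulator machine; its instruction set is invented just to walk
   through the six steps of the execution cycle, it is not MIPS. */
enum { OP_LOADI, OP_ADD, OP_PRINT, OP_HALT };
typedef struct { int op, arg; } Instr;

int main(void) {
    Instr program[] = { {OP_LOADI, 5}, {OP_ADD, 7}, {OP_PRINT, 0}, {OP_HALT, 0} };
    long acc = 0;
    unsigned pc = 0;

    for (;;) {
        Instr ir = program[pc];              /* Instruction Fetch: get instruction from storage */
        int op  = ir.op;                     /* Instruction Decode: determine required action   */
        int arg = ir.arg;                    /* Operand Fetch: locate and obtain operand data   */
        if (op == OP_HALT) break;
        switch (op) {                        /* Execute and Result Store                        */
            case OP_LOADI: acc = arg;             break;
            case OP_ADD:   acc = acc + arg;       break;
            case OP_PRINT: printf("%ld\n", acc);  break;
        }
        pc = pc + 1;                         /* Next Instruction: determine the successor       */
    }
    return 0;
}
```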

  10. 5 Steps of MIPS Datapath
[Datapath figure: Instruction Fetch, Instr. Decode / Reg. Fetch, Execute / Addr. Calc, Memory Access, Write Back — with the PC adder and Next PC / Next SEQ PC muxes, instruction memory, register file (RS1, RS2, RD), sign-extended immediate, ALU with Zero? output, data memory, LMD, and WB data muxes]

  11. Announcements
• Homework 1 is out
  • Chapter 1: Problems 1.2, 1.3, 1.17
  • Chapter 2: Problems 2.5, 2.11, 2.12, 2.19
  • Appendix A: Problems A.1, A.5, A.6, A.7, A.11
  • Due Thursday, Feb 5, 2:00pm
• Note the paper on DSP processors on the website
• Reading for this week: Patterson and Hennessy, Appendix A
  • This lecture covers A.1 and A.2; the next lecture will cover the rest of the appendix
• Need to form teams for projects
  • Select a topic
  • Sign up for group appointments with me

  12. List of Possible Projects
• Power saving schemes in embedded microprocessors
• Embedded operating system enhancements and scheduling schemes for sensor interfaces
  • Available operating systems: TinyOS, PALOS, uCOS-II
• Time synchronization in sensor networks and its hardware implications
• Efficient microcontroller interfaces and control mechanisms for articulated nodes
• Network protocols and/or data memory management for sensor networks
• I also encourage you to propose your own project

  13. Introduction to Pipelining
Pipelining – leverage parallelism in hardware by overlapping instruction execution

  14. Fast, Pipelined Instruction Interpretation
[Pipeline figure: successive instructions overlap in time across the Instruction Fetch, Decode & Operand Fetch, Execute, Store Results, and Next Instruction stages, with the instruction address, instruction register, operand registers, and result registers/memory carrying state between stages]

  15. Sequential Laundry
[Figure: four loads A–D, each washing (30 min), drying (40 min), and folding (20 min) back to back, from 6 PM to midnight]
• Sequential laundry takes 6 hours for 4 loads
• If they learned pipelining, how long would laundry take?

  16. Pipelined Laundry: Start work ASAP
[Figure: the four loads A–D overlap — each load enters the washer as soon as it is free, so the 40-minute dryer sets the pace, from 6 PM to about 9:30 PM]
• Pipelined laundry takes 3.5 hours for 4 loads
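The timing claims on these two laundry slides can be checked with a few lines of C; the pipelined formula assumes, as in the figure, that the 40-minute dryer is the bottleneck stage.

```c
#include <stdio.h>

/* Check the laundry arithmetic: wash 30 min, dry 40 min, fold 20 min, 4 loads.
   The pipelined formula assumes the 40-minute dryer sets the pace. */
int main(void) {
    const int wash = 30, dry = 40, fold = 20, loads = 4;

    int sequential = loads * (wash + dry + fold);   /* 4 * 90 = 360 min = 6 h       */
    int pipelined  = wash + loads * dry + fold;     /* 30 + 160 + 20 = 210 min = 3.5 h */

    printf("sequential: %d min (%.1f h)\n", sequential, sequential / 60.0);
    printf("pipelined:  %d min (%.1f h)\n", pipelined,  pipelined  / 60.0);
    return 0;
}
```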

  17. Pipelining Lessons
• Pipelining doesn’t help the latency of a single task; it helps the throughput of the entire workload
• Pipeline rate is limited by the slowest pipeline stage
• Multiple tasks operate simultaneously
• Potential speedup = number of pipe stages
• Unbalanced lengths of pipe stages reduce speedup
• Time to “fill” the pipeline and time to “drain” it reduce speedup

  18. Instruction Pipelining
• Execute billions of instructions, so throughput is what matters
  • except when?
• What is desirable in instruction sets for pipelining?
  • Variable length instructions vs. all instructions the same length?
  • Memory operands as part of any operation vs. memory operands only in loads or stores?
  • Register operands in many places in the instruction format vs. registers located in the same place?

  19. Requirements for Pipelining
Goal: start a new instruction on every cycle. What are the hardware implications?
• Two different tasks should not attempt to use the same datapath resource on the same clock cycle
  • Instructions should not interfere with each other
  • Need separate data and instruction memories
• Need increased memory bandwidth
  • A 5-stage pipeline operating at the same clock rate as the unpipelined version requires 5 times the bandwidth
• Need to introduce pipeline registers
• The register file is used in two places, in the ID and WB stages
  • Perform reads in the first half of the cycle and writes in the second half

  20. Pipeline Requirements…
• Register file: read in the first half of the cycle, write in the second half
• Need separate instruction and data memories to avoid a structural hazard

  21. Add registers between pipeline stages
• Prevent interference between 2 instructions
• Carry data from one stage to the next
• Edge triggered
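A sketch of what those inter-stage registers might hold for the 5-stage MIPS pipeline follows; the struct and field names are illustrative, not the exact signal names of the datapath figure.

```c
#include <stdint.h>

/* Sketch of the four registers separating the five MIPS pipeline stages.
   On each (edge-triggered) clock edge a stage writes its results into the
   register to its right, so two instructions never share stage state. */
typedef struct { uint32_t instr;   uint32_t next_pc; }                    IF_ID;
typedef struct { int64_t a, b, imm; uint32_t rt, rd; uint32_t next_pc; }  ID_EX;
typedef struct { int64_t alu_out, store_val; uint32_t dest_reg; }         EX_MEM;
typedef struct { int64_t mem_data, alu_out;  uint32_t dest_reg; }         MEM_WB;

typedef struct {
    IF_ID  if_id;     /* between Instruction Fetch and Instruction Decode */
    ID_EX  id_ex;     /* between Decode/Register Fetch and Execute        */
    EX_MEM ex_mem;    /* between Execute and Memory Access                */
    MEM_WB mem_wb;    /* between Memory Access and Write Back             */
} PipelineRegisters;
```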

  22. Pipelining Hazards
Hazards: circumstances that would cause incorrect execution if the next instruction were launched
• Structural hazards: attempting to use the same hardware to do two different things at the same time
• Data hazards: an instruction depends on the result of a prior instruction still in the pipeline
• Control hazards: caused by the delay between the fetching of instructions and decisions about changes in control flow (branches and jumps)
Common solution: “stall” the pipeline until the hazard is resolved, by inserting one or more “bubbles” into the pipeline

  23. Data Hazards
A data hazard occurs when the relative timing of instructions is altered because of pipelining. Consider the following code, where every instruction after the DADD reads the R1 it produces:
  DADD R1, R2, R3
  DSUB R4, R1, R5
  AND  R6, R1, R7
  OR   R8, R1, R9
  XOR  R10, R1, R11

  24. Data Hazard

  25. Data Hazards: Data Forwarding
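A C sketch of the forwarding idea: before the ALU uses an operand, compare its source register against the destinations of the instructions still in EX/MEM and MEM/WB and, on a match, bypass the register file. Field and function names are mine; this is the textbook forwarding condition, not the course's exact datapath.

```c
#include <stdbool.h>
#include <stdint.h>

/* One older in-flight instruction that may supply a forwarded value. */
typedef struct { bool reg_write; uint32_t dest_reg; int64_t value; } FwdSource;

int64_t forward_operand(uint32_t src_reg, int64_t regfile_value,
                        FwdSource ex_mem, FwdSource mem_wb) {
    if (ex_mem.reg_write && ex_mem.dest_reg != 0 && ex_mem.dest_reg == src_reg)
        return ex_mem.value;      /* newest result wins: forward the ALU output */
    if (mem_wb.reg_write && mem_wb.dest_reg != 0 && mem_wb.dest_reg == src_reg)
        return mem_wb.value;      /* otherwise forward the older result         */
    return regfile_value;         /* no hazard: the register file value is fine */
}
```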

  26. Data Hazards Requiring Stalls
  LD   R1, 0(R2)
  DSUB R4, R1, R5
  AND  R6, R1, R7
  OR   R8, R1, R9
The load does not return its data until the end of MEM, so the DSUB HAS to stall for 1 cycle…
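The load-use case can be detected with a similarly small check (again a sketch with invented names): if the instruction in EX is a load whose destination is read by the instruction in ID, the pipeline inserts one bubble.

```c
#include <stdbool.h>
#include <stdint.h>

/* Load-use hazard detection: forwarding cannot help in time because the
   loaded data is not back from memory yet, so stall for one cycle. */
bool must_stall(bool ex_is_load, uint32_t ex_load_dest,
                uint32_t id_src1, uint32_t id_src2) {
    return ex_is_load &&
           ex_load_dest != 0 &&
           (ex_load_dest == id_src1 || ex_load_dest == id_src2);
}
```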

  27. Four Branch Hazard Alternatives
#1: Stall until the branch direction is clear
#2: Predict Branch Not Taken
• Execute successor instructions in sequence
• “Squash” instructions in the pipeline if the branch is actually taken
• Advantage of late pipeline state update
• 47% of MIPS branches are not taken on average
• PC+4 is already calculated, so use it to get the next instruction
#3: Predict Branch Taken
• 53% of MIPS branches are taken on average
• But the branch target address is not yet calculated in MIPS
• MIPS still incurs a 1-cycle branch penalty
• Other machines: branch target known before the outcome

  28. Four Branch Hazard Alternatives
#4: Delayed Branch
• Define the branch to take place AFTER a following instruction:
    branch instruction
    sequential successor 1
    sequential successor 2
    ........
    sequential successor n        <- branch delay of length n
    ........
    branch target if taken
• A 1-slot delay allows a proper decision and branch target address in a 5-stage pipeline
• MIPS uses this

  29. Delayed Branch
• Where to get instructions to fill the branch delay slot?
  • From before the branch instruction
  • From the target address: only valuable when the branch is taken
  • From the fall through: only valuable when the branch is not taken
  • Canceling branches allow more slots to be filled
• Compiler effectiveness for a single branch delay slot:
  • Fills about 60% of branch delay slots
  • About 80% of instructions executed in branch delay slots are useful in computation
  • About 50% (60% x 80%) of slots are usefully filled
• Delayed branch downside: deeper pipelines (7-8 stages) and multiple instructions issued per clock (superscalar) make a single delay slot less effective

  30. Pipelining Performance Issues
Consider an unpipelined processor with a 1 ns clock cycle:
• ALU operations: 4 cycles, frequency 40%
• Branches: 4 cycles, frequency 20%
• Memory operations: 5 cycles, frequency 40%
Pipelining adds 0.2 ns of overhead to the clock cycle.
For the unpipelined processor, the average instruction execution time is
  1 ns x (0.4 x 4 + 0.2 x 4 + 0.4 x 5) = 4.4 ns

  31. Speedup from Pipelining
Now if we had a pipelined processor, we assume that each instruction ideally takes 1 cycle, BUT we also have the clock overhead, so each instruction takes 1 ns + 0.2 ns = 1.2 ns.
Ignoring stalls, speedup = 4.4 ns / 1.2 ns ≈ 3.7

  32. Considering the stall overhead
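The slide's figure is not reproduced in the transcript, but the standard accounting from Appendix A folds stalls into the pipelined instruction time: average pipelined time = (1 + stall cycles per instruction) × 1.2 ns, and the speedup is 4.4 ns divided by that. A small C sketch, with the stall count left as a placeholder rather than a number from the slide:

```c
#include <stdio.h>

/* Speedup sketch using the numbers from the two previous slides
   (1 ns unpipelined clock, 40%/20%/40% instruction mix, 0.2 ns overhead).
   The stall count is a placeholder parameter, not a figure from the slide. */
int main(void) {
    double cycle            = 1.0;                          /* ns, unpipelined clock      */
    double avg_cpi          = 0.4 * 4 + 0.2 * 4 + 0.4 * 5;  /* = 4.4 cycles / instruction */
    double unpipelined_time = cycle * avg_cpi;              /* 4.4 ns per instruction     */

    double pipelined_cycle  = cycle + 0.2;                  /* 1.2 ns with overhead       */
    double stalls_per_instr = 0.0;                          /* placeholder: no stalls     */
    double pipelined_time   = (1.0 + stalls_per_instr) * pipelined_cycle;

    printf("speedup = %.2f\n", unpipelined_time / pipelined_time);   /* 3.67 with no stalls */
    return 0;
}
```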
