
Designing MIPS Processor (Multi-Cycle) Presentation H


Presentation Transcript


  1. CSE 675.02: Introduction to Computer Architecture Designing MIPS Processor (Multi-Cycle) Presentation H Slides by Gojko Babić and Elsevier Publishing

  2. Multicycle Approach • We will be reusing functional units • ALU used to compute address and to increment PC • Memory used for instruction and data • Our control signals will not be determined directly by instruction • e.g., what should the ALU do for a “subtract” instruction? • We’ll use a finite state machine for control

  3. Multicycle Approach • Break up the instructions into steps, each step takes a cycle • balance the amount of work to be done • restrict each cycle to use only one major functional unit • At the end of a cycle • store values for use in later cycles (easiest thing to do) • introduce additional “internal” registers

  4. Instructions from ISA perspective • Consider each instruction from the perspective of the ISA. • Example: • The add instruction changes a register. • Register specified by bits 15:11 of instruction. • Instruction specified by the PC. • New value is the sum (“op”) of two registers. • Registers specified by bits 25:21 and 20:16 of the instruction: Reg[Memory[PC][15:11]] <= Reg[Memory[PC][25:21]] op Reg[Memory[PC][20:16]] • In order to accomplish this we must break up the instruction (kind of like introducing variables when programming). Presentation H

  5. Breaking down an instruction • ISA definition of arithmetic: Reg[Memory[PC][15:11]] <= Reg[Memory[PC][25:21]] op Reg[Memory[PC][20:16]] • Could break down to: • IR <= Memory[PC] • A <= Reg[IR[25:21]] • B <= Reg[IR[20:16]] • ALUOut <= A op B • Reg[IR[15:11]] <= ALUOut • We forgot an important part of the definition of arithmetic! • PC <= PC + 4 Presentation H
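
As an illustration only (not the author's datapath), here is the same breakdown as a few lines of Python, with the register file and memory as plain dictionaries and the instruction stored pre-decoded as a tuple instead of a 32-bit word; register numbers 8, 9 and 10 stand for $t0, $t1 and $t2:

regs = {i: 0 for i in range(32)}                 # toy register file
regs[9], regs[10] = 7, 5                         # $t1 = 7, $t2 = 5 (example values)
memory = {0: ("add", 9, 10, 8)}                  # word at PC=0, pre-decoded: add $t0,$t1,$t2

PC = 0
IR = memory[PC]                                  # IR <= Memory[PC]
PC = PC + 4                                      # PC <= PC + 4  (the step we "forgot")
op, rs, rt, rd = IR
A = regs[rs]                                     # A <= Reg[IR[25:21]]
B = regs[rt]                                     # B <= Reg[IR[20:16]]
ALUOut = A + B                                   # ALUOut <= A op B  (op = add here)
regs[rd] = ALUOut                                # Reg[IR[15:11]] <= ALUOut

print(PC, regs[8])                               # 4 12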

  6. Idea behind multicycle approach • We define each instruction from the ISA perspective (do this!) • Break it down into steps following our rule that data flows through at most one major functional unit (e.g., balance work across steps) • Introduce new registers as needed (e.g., A, B, ALUOut, MDR, etc.) • Finally, try to pack as much work into each step as possible (avoid unnecessary cycles) while also trying to share steps where possible (minimizes control, helps to simplify the solution) • Result: our book’s multicycle implementation! Presentation H

  7. Five Execution Steps • Instruction Fetch • Instruction Decode and Register Fetch • Execution, Memory Address Computation, or Branch Completion • Memory Access or R-type instruction completion • Write-back step INSTRUCTIONS TAKE FROM 3 - 5 CYCLES!

  8. Step 1: Instruction Fetch • Use PC to get instruction and put it in the Instruction Register. • Increment the PC by 4 and put the result back in the PC. • Can be described succinctly using RTL (“Register-Transfer Language”): IR <= Memory[PC]; PC <= PC + 4; • Can we figure out the values of the control signals? • What is the advantage of updating the PC now?

  9. Step 2: Instruction Decode and Register Fetch • Read registers rs and rt in case we need them • Compute the branch address in case the instruction is a branch • RTL: A <= Reg[IR[25:21]]; B <= Reg[IR[20:16]]; ALUOut <= PC + (sign-extend(IR[15:0]) << 2); • We aren't setting any control lines based on the instruction type (we are busy "decoding" it in our control logic)

  10. Step 3 (instruction dependent) • ALU is performing one of three functions, based on instruction type • Memory Reference: ALUOut <= A + sign-extend(IR[15:0]); • R-type: ALUOut <= A op B; • Branch: if (A==B) PC <= ALUOut;

  11. Step 4 (R-type or memory-access) • Loads and stores access memory MDR <= Memory[ALUOut]; or Memory[ALUOut] <= B; • R-type instructions finish Reg[IR[15:11]] <= ALUOut; • The write actually takes place at the end of the cycle, on the clock edge

  12. Write-back step • Reg[IR[20:16]] <= MDR; Which instruction needs this?
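
The five steps just described can be sketched as five small Python functions over a toy machine state. This is a simplified illustration, not the hardware: instructions are pre-decoded tuples, memories are dictionaries keyed by address, and add is the only R-type operation modeled.

def step1_fetch(s):
    s["IR"] = s["imem"][s["PC"]]               # IR <= Memory[PC]
    s["PC"] += 4                               # PC <= PC + 4

def step2_decode(s):
    op, rs, rt, x = s["IR"]
    s["A"] = s["reg"][rs]                      # A <= Reg[IR[25:21]]
    s["B"] = s["reg"][rt]                      # B <= Reg[IR[20:16]]
    s["ALUOut"] = s["PC"] + (x << 2)           # branch target, computed "just in case"

def step3_execute(s):
    op, rs, rt, x = s["IR"]
    if op in ("lw", "sw"):
        s["ALUOut"] = s["A"] + x               # memory address computation
    elif op == "beq":
        if s["A"] == s["B"]:                   # branch completes here: 3 cycles total
            s["PC"] = s["ALUOut"]
    else:                                      # R-type ("op" is just add in this sketch)
        s["ALUOut"] = s["A"] + s["B"]

def step4_mem_or_complete(s):
    op, rs, rt, x = s["IR"]
    if op == "lw":
        s["MDR"] = s["dmem"][s["ALUOut"]]      # MDR <= Memory[ALUOut]
    elif op == "sw":
        s["dmem"][s["ALUOut"]] = s["B"]        # Memory[ALUOut] <= B; sw done: 4 cycles
    else:                                      # R-type done: 4 cycles
        s["reg"][x] = s["ALUOut"]              # Reg[IR[15:11]] <= ALUOut

def step5_writeback(s):
    op, rs, rt, x = s["IR"]
    s["reg"][rt] = s["MDR"]                    # lw only: Reg[IR[20:16]] <= MDR; 5 cycles

# Walk one lw through all five steps (beq would stop after step 3,
# sw and R-type after step 4).
s = {"PC": 0, "imem": {0: ("lw", 8, 9, 4)}, "dmem": {11: 42},
     "reg": [0] * 32, "A": 0, "B": 0, "ALUOut": 0, "MDR": 0, "IR": None}
s["reg"][8] = 7                                # base register holds 7, so address = 7 + 4 = 11
for step in (step1_fetch, step2_decode, step3_execute,
             step4_mem_or_complete, step5_writeback):
    step(s)
print(s["reg"][9])                             # 42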

  13. Summary:

  14. Simple Questions • How many cycles will it take to execute this code?
lw $t2, 0($t3)
lw $t3, 4($t3)
beq $t2, $t3, Label # assume branch is not taken
add $t5, $t2, $t3
sw $t5, 8($t3)
Label: ...
• What is going on during the 8th cycle of execution? • In what cycle does the actual addition of $t2 and $t3 take place?
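
Using the cycle counts this design gives each instruction class (lw 5, sw 4, R-type 4, beq 3, as established on the surrounding slides), a few lines of Python can lay out which instruction owns which cycle and answer the counting questions mechanically:

CYCLES = {"lw": 5, "sw": 4, "rtype": 4, "beq": 3}   # cycles per class in this design

program = ["lw", "lw", "beq", "rtype", "sw"]        # the sequence above (add is R-type)

total = 0
for i, kind in enumerate(program, 1):
    first = total + 1
    total += CYCLES[kind]
    print(f"instruction {i} ({kind}): cycles {first}-{total}")
print("total cycles:", total)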

  15. Presentation H

  16. Review: finite state machines • Finite state machines: • a set of states and • next state function (determined by current state and the input) • output function (determined by current state and possibly input) • We’ll use a Moore machine (output based only on current state)

  17. Finite State Machines Introduction Presentation H

  18. Review: finite state machines • Example: B. 37 A friend would like you to build an “electronic eye” for use as a fake security device. The device consists of three lights lined up in a row, controlled by the outputs Left, Middle, and Right, which, if asserted, indicate that a light should be on. Only one light is on at a time, and the light “moves” from left to right and then from right to left, thus scaring away thieves who believe that the device is monitoring their activity. Draw the graphical representation for the finite state machine used to specify the electronic eye. Note that the rate of the eye’s movement will be controlled by the clock speed (which should not be too great) and that there are essentially no inputs.
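
One possible answer, sketched as a Moore machine in Python. The state names are mine; the exercise only fixes the outputs Left, Middle, Right and the left-to-right-to-left sweep, and two distinct "middle" states are needed to remember the direction of travel:

OUTPUT = {                       # Moore machine: output depends only on the state
    "LEFT":        (1, 0, 0),    # (Left, Middle, Right)
    "MID_GOING_R": (0, 1, 0),
    "RIGHT":       (0, 0, 1),
    "MID_GOING_L": (0, 1, 0),    # the second "middle" state remembers the direction
}
NEXT = {                         # next-state function: no inputs, just the clock
    "LEFT":        "MID_GOING_R",
    "MID_GOING_R": "RIGHT",
    "RIGHT":       "MID_GOING_L",
    "MID_GOING_L": "LEFT",
}

state = "LEFT"
for _ in range(6):               # six clock ticks of the sweep
    print(state, OUTPUT[state])
    state = NEXT[state]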

  19. Finite State Machine Example: 3 ones… Draw the FSM… Truth table… Presentation H

  20. Hardware Implementation of FSM Presentation H

  21. Implementing the Control • Value of control signals is dependent upon: • what instruction is being executed • which step is being performed • Use the information we’ve accumulated to specify a finite state machine • specify the finite state machine graphically, or • use microprogramming • Implementation can be derived from specification

  22. Graphical Specification of FSM • Note: • don’t care if not mentioned • asserted if name only • otherwise exact value • How many state bits will we need?
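
For the last question, a quick back-of-the-envelope check (assuming the FSM in the graph has the ten states, 0 through 9, used on the following slides):

import math

num_states = 10                                 # states 0-9 in the control FSM graph
state_bits = math.ceil(math.log2(num_states))
print(state_bits)                               # 4, matching the "4 state bits" used later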

  23. Step 1: Instruction Fetch • Use PC to get the instruction and put it in the Instruction Register, i.e. IR ← Memory[PC]; • Increment the PC by 4 and put the result back in the PC, i.e. PC ← PC + 4; • Can we figure out the values of the control signals? Signals asserted in this state: IorD=0, MemRead, IRWrite, ALUSrcA=0, ALUSrcB=01, ALUOp=00, PCSource=00, PCWrite • Here are rules for signals that are omitted: • If a signal for a mux is not stated, it is don’t care • If ALU signals are not stated, they are don’t care • If MemRead, MemWrite, RegWrite, IRWrite, PCWrite or PCWriteCond is not stated, it is unasserted, i.e. logical 0. Presentation H

  24. Step 2: Instruction Decode & Register Fetch • We aren't setting any control lines based on the instruction type (we are busy "decoding" it in our control logic) • Read registers rs and rt in case we need them: A ← Reg[IR[25-21]]; B ← Reg[IR[20-16]]; (done automatically every cycle, so no control signals are needed) • Compute the branch address in case the instruction is a branch: ALUOut ← PC + (sign-extend(IR[15-0]) << 2); Signals asserted in this state: ALUSrcA=0, ALUSrcB=11, ALUOp=00 Presentation H

  25. Step 3: Instruction Dependent • ALU is performing one of three functions, based on instruction type: • Memory Reference (lw or sw): ALUOut ← A + sign-extend(IR[15-0]); signals: ALUSrcA=1, ALUSrcB=10, ALUOp=00 • R-type: ALUOut ← A op B; signals: ALUSrcA=1, ALUSrcB=00, ALUOp=10 • Branch on Equal: if (A==B) PC ← ALUOut; signals: ALUSrcA=1, ALUSrcB=00, ALUOp=01, PCSource=01, PCWriteCond • Note: the beq instruction is done at this point, thus it requires 3 clock cycles to execute. Presentation H

  26. Steps 4 and 5: Instruction Dependent • Step 4: R-type and Memory Access • Loads and stores access memory: MDR ← Memory[ALUOut] (load; signals: MemRead, IorD=1) or Memory[ALUOut] ← B (store; signals: MemWrite, IorD=1) • R-type instructions finish: Reg[IR[15-11]] ← ALUOut; signals: RegDst=1, MemToReg=0, RegWrite • The register write actually takes place at the end of the cycle, on the falling edge • Store and R-type instructions are done in 4 clock cycles • Step 5: Write back (load only): Reg[IR[20-16]] ← MDR; signals: RegDst=0, MemToReg=1, RegWrite Presentation H

  27. Summary of Instruction Executions (Figure 5.50)
Step 1 (instruction fetch, all instructions): IR ← Memory[PC]; PC ← PC + 4
Step 2 (instruction decode/register fetch, all instructions): A ← Reg[IR[25-21]]; B ← Reg[IR[20-16]]; ALUOut ← PC + (sign-extend(IR[15-0]) << 2)
Step 3 (execution, address computation, branch/jump completion):
  R-type: ALUOut ← A op B
  Memory-reference: ALUOut ← A + sign-extend(IR[15-0])
  Branch: if (A == B) then PC ← ALUOut
  Jump: PC ← PC[31-28] || (IR[25-0] << 2)
Step 4 (memory access or R-type completion):
  R-type: Reg[IR[15-11]] ← ALUOut
  Load: MDR ← Memory[ALUOut]
  Store: Memory[ALUOut] ← B
Step 5 (memory read completion):
  Load: Reg[IR[20-16]] ← MDR
Note: Jump instruction added: PCSource=10, PCWrite Presentation H

  28. Finite State Machine for Control • Implementation:
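
A table-driven Python sketch of that FSM. The state numbering is an assumption borrowed from the textbook's figure (0 fetch, 1 decode, 2 memory-address computation, 3 lw memory read, 4 lw write-back, 5 sw memory write, 6 R-type execution, 7 R-type completion, 8 branch, 9 jump); the output strings simply restate the signal settings from slides 23-26:

OUTPUTS = {   # control signals asserted in each state (from slides 23-26)
    0: "MemRead IorD=0 IRWrite ALUSrcA=0 ALUSrcB=01 ALUOp=00 PCWrite PCSource=00",
    1: "ALUSrcA=0 ALUSrcB=11 ALUOp=00",
    2: "ALUSrcA=1 ALUSrcB=10 ALUOp=00",           # lw/sw address computation
    3: "MemRead IorD=1",                          # lw memory read
    4: "RegDst=0 RegWrite MemtoReg=1",            # lw write-back
    5: "MemWrite IorD=1",                         # sw memory write
    6: "ALUSrcA=1 ALUSrcB=00 ALUOp=10",           # R-type execution
    7: "RegDst=1 RegWrite MemtoReg=0",            # R-type completion
    8: "ALUSrcA=1 ALUSrcB=00 ALUOp=01 PCWriteCond PCSource=01",  # branch completion
    9: "PCWrite PCSource=10",                     # jump completion
}

def next_state(state, opcode):
    """Next-state function: depends only on the current state and the opcode."""
    if state == 0:
        return 1
    if state == 1:
        return {"lw": 2, "sw": 2, "rtype": 6, "beq": 8, "j": 9}[opcode]
    if state == 2:
        return 3 if opcode == "lw" else 5
    if state == 3:
        return 4
    if state == 6:
        return 7
    return 0                                      # states 4, 5, 7, 8, 9 return to fetch

# Trace the states a lw visits: [0, 1, 2, 3, 4, 0] -> five cycles.
state, trace = 0, [0]
while True:
    state = next_state(state, "lw")
    trace.append(state)
    if state == 0:
        break
print(trace)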

  29. PLA Implementation

  30. ROM Implementation • ROM = "Read Only Memory" • values of memory locations are fixed ahead of time • A ROM can be used to implement a truth table • if the address is m bits, we can address 2^m entries in the ROM • our outputs are the bits of data that the address points to • m is the "height", and n is the "width" • Example (m = 3, n = 4), address → data:
000 → 0011
001 → 1100
010 → 1100
011 → 1000
100 → 0000
101 → 0001
110 → 0110
111 → 0111
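
To make the address-in, data-out idea concrete, here is the truth table above dropped into a "ROM", here just a Python list indexed by the 3-bit address:

ROM = [                     # 8 words of 4 bits; the index is the 3-bit address
    0b0011,                 # address 000
    0b1100,                 # address 001
    0b1100,                 # address 010
    0b1000,                 # address 011
    0b0000,                 # address 100
    0b0001,                 # address 101
    0b0110,                 # address 110
    0b0111,                 # address 111
]

def rom_lookup(a2, a1, a0):
    word = ROM[(a2 << 2) | (a1 << 1) | a0]           # "read" the addressed word
    return [(word >> b) & 1 for b in (3, 2, 1, 0)]   # the 4 output bits, MSB first

print(rom_lookup(0, 1, 1))                           # [1, 0, 0, 0]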

  31. ROM Implementation • How many inputs are there? 6 bits for opcode, 4 bits for state = 10 address lines (i.e., 2^10 = 1024 different addresses) • How many outputs are there? 16 datapath-control outputs, 4 state bits = 20 outputs • ROM is 2^10 x 20 = 20K bits (and a rather unusual size) • Rather wasteful, since for lots of the entries, the outputs are the same — i.e., opcode is often ignored

  32. ROM vs PLA • Break up the table into two parts — 4 state bits tell you the 16 outputs, 2^4 x 16 bits of ROM — 10 bits tell you the 4 next state bits, 2^10 x 4 bits of ROM — Total: 4.3K bits of ROM • PLA is much smaller — can share product terms — only need entries that produce an active output — can take into account don't cares • Size is (#inputs × #product-terms) + (#outputs × #product-terms) For this example = (10x17)+(20x17) = 510 PLA cells • PLA cells usually about the size of a ROM cell (slightly bigger)
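
The sizing arithmetic from this slide and the previous one, spelled out (17 is the number of product terms quoted on the slide):

inputs = 6 + 4                                   # opcode bits + state bits = 10 address lines
outputs = 16 + 4                                 # datapath-control outputs + next-state bits

single_rom = 2**inputs * outputs                 # 1024 x 20 = 20,480 bits ("20K")
split_rom = 2**4 * 16 + 2**inputs * 4            # 256 + 4,096 = 4,352 bits ("4.3K")
product_terms = 17
pla_cells = (inputs + outputs) * product_terms   # (10 x 17) + (20 x 17) = 510

print(single_rom, split_rom, pla_cells)          # 20480 4352 510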

  33. Another Implementation Style • Complex instructions: the "next state" is often current state + 1

  34. Details

  35. Microprogramming • What are the “microinstructions” ?

  36. Microprogramming • A specification methodology • appropriate if hundreds of opcodes, modes, cycles, etc. • signals specified symbolically using microinstructions • Will two implementations of the same architecture have the same microcode? • What would a microassembler do?
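
As a rough illustration of the last question: a microassembler translates symbolic field values into raw control bits. The field names below loosely echo the textbook's microinstruction format, but the bit encodings are invented for this example:

FIELD_ENCODINGS = {                    # hypothetical encodings for three fields
    "ALU control": {"Add": "00", "Subt": "01", "Func code": "10"},
    "SRC1":        {"PC": "0", "A": "1"},
    "SRC2":        {"B": "00", "4": "01", "Extend": "10", "Extshft": "11"},
}

def microassemble(symbolic):
    """Translate one symbolic microinstruction (a dict of fields) into raw bits."""
    return "".join(FIELD_ENCODINGS[field][symbolic[field]]
                   for field in ("ALU control", "SRC1", "SRC2"))

# A "compute PC + 4" microinstruction:
print(microassemble({"ALU control": "Add", "SRC1": "PC", "SRC2": "4"}))   # 00001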

  37. Microinstruction format

  38. Maximally vs. Minimally Encoded • No encoding: • 1 bit for each datapath operation • faster, requires more memory (logic) • used for Vax 780 — an astonishing 400K of memory! • Lots of encoding: • send the microinstructions through logic to get control signals • uses less memory, slower • Historical context of CISC: • Too much logic to put on a single chip with everything else • Use a ROM (or even RAM) to hold the microcode • It’s easy to add new instructions

  39. Microcode: Trade-offs • Distinction between specification and implementation is sometimes blurred • Specification Advantages: • Easy to design and write • Design architecture and microcode in parallel • Implementation (off-chip ROM) Advantages • Easy to change since values are in memory • Can emulate other architectures • Can make use of internal registers • Implementation Disadvantages, SLOWER now that: • Control is implemented on same chip as processor • ROM is no longer faster than RAM • No need to go back and make changes

  40. Historical Perspective • In the ‘60s and ‘70s microprogramming was very important for implementing machines • This led to more sophisticated ISAs and the VAX • In the ‘80s RISC processors based on pipelining became popular • Pipelining the microinstructions is also possible! • Implementations of IA-32 architecture processors since 486 use: • “hardwired control” for simpler instructions (few cycles, FSM control implemented using PLA or random logic) • “microcoded control” for more complex instructions (large numbers of cycles, central control store) • The IA-64 architecture uses a RISC-style ISA and can be implemented without a large central control store Presentation H

  41. Pentium 4 • Pipelining is important (the last IA-32 processor without it was the 80386 in 1985) • Pipelining is used for the simple instructions favored by compilers: “Simply put, a high performance implementation needs to ensure that the simple instructions execute quickly, and that the burden of the complexities of the instruction set penalize the complex, less frequently used, instructions” Presentation H

  42. Pentium 4 • Somewhere in all that “control” we must handle complex instructions • Processor executes simple microinstructions, 70 bits wide (hardwired) • 120 control lines for integer datapath (400 for floating point) • If an instruction requires more than 4 microinstructions to implement, control comes from the microcode ROM (8000 microinstructions) • It’s complicated! Presentation H

  43. Chapter 5 Summary • If we understand the instructions… We can build a simple processor! • If instructions take different amounts of time, multi-cycle is better • Datapath implemented using: • Combinational logic for arithmetic • State holding elements to remember bits • Control implemented using: • Combinational logic for single-cycle implementation • Finite state machine for multi-cycle implementation Presentation H

  44. MIPS Exception Processing • We are implementing processing of the following two exceptions: illegal op-code and integer overflow. • When any of the exceptions occurs, the MIPS processor processes the exception in the following 3 steps: • Step 1. • EPC register gets a value equal to the address of the faulty instruction. • Step 2. • PC ← 8000 0180 (hex) • Cause register ← a code of the exception • – illegal op-code, i.e. reserved or undefined op-code = 10 • – integer overflow = 12 • Step 3. • Processor is now running in Kernel mode. • Note: we are not implementing step 3. Presentation H
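
A Python sketch of steps 1 and 2 (step 3, the switch to Kernel mode, is left out exactly as on the slide); the processor state is just a dictionary, and 8000 0180 hex is the handler address given above:

ILLEGAL_OPCODE, OVERFLOW = 10, 12                # Cause codes from the slide

def take_exception(faulting_pc, cause_code, state):
    state["EPC"] = faulting_pc                   # Step 1: EPC <- address of faulty instruction
    state["Cause"] = cause_code                  # Step 2: record the cause...
    state["PC"] = 0x80000180                     # ...and jump to the exception handler
    # Step 3 (switch to Kernel mode) is intentionally not implemented, as on the slide.

state = {"PC": 0x00400020, "EPC": 0, "Cause": 0}
take_exception(state["PC"], OVERFLOW, state)
print(hex(state["PC"]), hex(state["EPC"]), state["Cause"])   # 0x80000180 0x400020 12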

  45. Multi-Cycle Datapath for Exception Handling (Figure 5.39 with corrections in red). The figure extends the multi-cycle datapath with the EPC and Cause registers (Cause is loaded with 10 for illegal op-code or 12 for integer overflow), the new control outputs CauseWrite, IntCause and EPCWrite, an Overflow input to the control unit, and the exception vector address 8000 0180 (hex) as an additional PCSource input. Presentation H

  46. FSM Graph with Exception Handling (Figure 5.40 with additions in red). The ten-state control FSM (0: instruction fetch; 1: instruction decode/register fetch; 2, 3, 5: memory-address computation, memory read, memory write; 6, 7: R-type execution and completion; 8: branch completion; 9: jump completion) is extended with two exception states, 10 and 11: one entered on an undefined op-code (asserting IntCause=0) and one entered on overflow (asserting IntCause=1). Both exception states also assert CauseWrite, ALUSrcA=0, ALUSrcB=01, ALUOp=01, EPCWrite, PCWrite and PCSource=11 before returning to instruction fetch. Presentation H
