
Overview of Computer Architecture


Presentation Transcript


  1. Overview of Computer Architecture (Guest Lecture, ECE 153a, Fall 2001)

  2. Computer Architecture Is … the attributes of a [computing] system as seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the data flows and controls, the logic design, and the physical implementation. (Amdahl, Blaauw, and Brooks, 1964)

  3. What are “Machine Structures”? Coordination of many levels of abstraction:
  • Application (Netscape)
  • Compiler / Operating System (Windows 98)
  • Assembler [software]
  • Instruction Set Architecture
  • Processor / Memory / I/O system [hardware]
  • Datapath & Control
  • Digital Design
  • Circuit Design
  • Transistors

  4. Levels of Representation
  High Level Language Program (e.g., C):
      temp = v[k];
      v[k] = v[k+1];
      v[k+1] = temp;
  ↓ Compiler
  Assembly Language Program (e.g., MIPS):
      lw $t0, 0($2)
      lw $t1, 4($2)
      sw $t1, 0($2)
      sw $t0, 4($2)
  ↓ Assembler
  Machine Language Program (MIPS):
      0000 1001 1100 0110 1010 1111 0101 1000
      1010 1111 0101 1000 0000 1001 1100 0110
      1100 0110 1010 1111 0101 1000 0000 1001
      0101 1000 0000 1001 1100 0110 1010 1111
  ↓ Machine Interpretation
  Control Signal Specification

  5. Anatomy: 5 components of any Computer (here, a Personal Computer)
  • Processor (active): Control (“brain”) and Datapath (“brawn”)
  • Memory (passive): where programs and data live when running
  • Devices:
    • Input: keyboard, mouse
    • Output: display, printer
    • Disk: where programs and data live when not running

  6. A "Typical" RISC
  • 32-bit fixed-format instructions (3 formats)
  • 32 32-bit GPRs (R0 contains zero; double-precision values take a register pair)
  • 3-address, register-register arithmetic instructions
  • Single addressing mode for load/store: base + displacement, no indirection
  • Simple branch conditions
  • Delayed branch
  See: SPARC, MIPS, HP PA-RISC, DEC Alpha, IBM PowerPC, CDC 6600, CDC 7600, Cray-1, Cray-2, Cray-3

  7. Example: MIPS (≈ DLX) instruction formats
  Register-Register:  [31:26] Op | [25:21] Rs1 | [20:16] Rs2 | [15:11] Rd | [10:0] Opx
  Register-Immediate: [31:26] Op | [25:21] Rs1 | [20:16] Rd | [15:0] immediate
  Branch:             [31:26] Op | [25:21] Rs1 | [20:16] Rs2/Opx | [15:0] immediate
  Jump / Call:        [31:26] Op | [25:0] target
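  As an illustration (not from the lecture), the register-register format above can be decoded with plain shifts and masks. The field names follow the slide; the function name and the example encoding are made up for this sketch.

      #include <stdint.h>
      #include <stdio.h>

      /* Decode the register-register format shown above:
         [31:26] Op | [25:21] Rs1 | [20:16] Rs2 | [15:11] Rd | [10:0] Opx */
      static void decode_rr(uint32_t instr) {
          unsigned op  = (instr >> 26) & 0x3F;   /* 6 bits  */
          unsigned rs1 = (instr >> 21) & 0x1F;   /* 5 bits  */
          unsigned rs2 = (instr >> 16) & 0x1F;   /* 5 bits  */
          unsigned rd  = (instr >> 11) & 0x1F;   /* 5 bits  */
          unsigned opx = instr & 0x7FF;          /* 11 bits */
          printf("op=%u rs1=%u rs2=%u rd=%u opx=%u\n", op, rs1, rs2, rd, opx);
      }

      int main(void) {
          decode_rr(0x00430820u);   /* hypothetical encoding: op=0, rs1=2, rs2=3, rd=1 */
          return 0;
      }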

  8. Three Key Subjects
  • Pipeline: how can computation be made faster? Pipelining, hazards, scheduling, prediction, superscalar execution, etc.
  • Cache: how can data access be made faster? L1, L2, L3 caches, pipelined requests, parallel processing, etc.
  • Virtual Memory: how can data communication be made faster? Virtual memory, paging, bus control and protocols, etc.

  9. Pipelining: It's Natural! Laundry example:
  • Ann, Brian, Cathy, and Dave (A, B, C, D) each have one load of clothes to wash, dry, and fold
  • Washer takes 30 minutes
  • Dryer takes 40 minutes
  • “Folder” takes 20 minutes

  10. Sequential Laundry (figure: timeline from 6 PM to midnight; tasks A-D run in order, each occupying 30 + 40 + 20 minutes)
  • Sequential laundry takes 6 hours for 4 loads
  • If they learned pipelining, how long would laundry take?

  11. Pipelined Laundry: start work ASAP (figure: same timeline; the stages overlap, occupying 30, 40, 40, 40, 40, 20 minutes)
  • Pipelined laundry takes 3.5 hours for 4 loads
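  Checking the arithmetic behind these two slides: sequentially, each load takes 30 + 40 + 20 = 90 minutes, so 4 loads take 4 x 90 = 360 minutes = 6 hours. Pipelined, the first wash takes 30 minutes, the dryer (the slowest stage) then gates each of the 4 loads at 40 minutes, and the final fold adds 20 minutes: 30 + 4 x 40 + 20 = 210 minutes = 3.5 hours.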

  12. Pipelining Lessons
  • Pipelining doesn't help the latency of a single task; it helps the throughput of the entire workload
  • Pipeline rate is limited by the slowest pipeline stage
  • Multiple tasks operate simultaneously
  • Potential speedup = number of pipe stages
  • Unbalanced lengths of pipe stages reduce speedup
  • Time to “fill” the pipeline and time to “drain” it reduce speedup

  13. 5 Steps of DLX Datapath
  Instruction Fetch → Instr. Decode / Reg. Fetch → Execute / Addr. Calc → Memory Access → Write Back
  (the figure labels internal registers including IR, the instruction register, and LMD, the load memory data register)

  14. Pipelined DLX Datapath
  Instruction Fetch → Instr. Decode / Reg. Fetch → Execute / Addr. Calc. → Memory Access → Write Back
  • Data stationary control: local decode for each instruction phase / pipeline stage

  15. Why Pipeline? Because the resources are there! (figure: five instructions, Inst 0 through Inst 4, overlapped across clock cycles, each flowing through Im, Reg, ALU, Dm, Reg in successive stages)

  16. It's Not That Easy for Computers
  • Limits to pipelining: hazards prevent the next instruction from executing during its designated clock cycle
  • Structural hazards: HW cannot support this combination of instructions (a single person to fold and put clothes away)
  • Data hazards: an instruction depends on the result of a prior instruction still in the pipeline (a missing sock)
  • Control hazards: pipelining of branches and other instructions; a common solution is to stall the pipeline until the hazard clears, inserting “bubbles” into the pipeline

  17. Control Hazard (figure: an Add, a Beq, and a Load flow through the pipeline; the instruction after the branch cannot proceed until the branch resolves)
  • Stall: wait until the decision is clear
  • It's possible to move the decision up to the 2nd stage by adding hardware to check registers as they are being read
  • Impact: 2 clock cycles per branch instruction => slow

  18. Data Hazard on r1: dependencies backwards in time are hazards
  (figure: each instruction passes through IF, ID/RF, EX, MEM, WB; every later instruction reads r1 before the add has written it back)
      add r1,r2,r3
      sub r4,r1,r3
      and r6,r1,r7
      or  r8,r1,r9
      xor r10,r1,r11

  19. Pipeline Hazards Again (figure: pipeline timing diagrams, IF DCD EX Mem WB, for each case)
  • Structural hazard (e.g., instruction fetch and memory operand fetch needing memory in the same cycle)
  • Control hazard (e.g., a jump, where the next fetch must wait on the decode)
  • RAW (read after write) data hazard
  • WAW (write after write) data hazard
  • WAR (write after read) data hazard

  20. General Solutions (a small sketch follows below)
  • Forwarding: forward data to the requesting unit(s) as soon as they are available; don't wait until they are written into the register file
  • Stalling: wait until things become clear; sacrifice performance for correctness
  • Guessing: while waiting, why not make a guess and just do it? Helps improve performance, but with no guarantee
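  As a rough illustration (mine, not the lecture's), the forward-versus-stall decision can be written as a rule over pipeline latch fields. The struct and field names are invented for this sketch; real hardware does this with comparators.

      #include <stdbool.h>
      #include <stdio.h>

      /* Simplified pipeline latch: which register an in-flight
         instruction writes, and whether it is a load. */
      struct InFlight {
          int  dest_reg;   /* register written, 0 = none (r0) */
          bool is_load;    /* a load's data arrives one stage later */
      };

      /* RAW hazard rule for a 5-stage DLX-like pipeline:
         - result already in EX/MEM or MEM/WB -> forward it
         - load result needed by the very next instruction -> must stall */
      static const char *resolve(int src_reg, struct InFlight ex_mem,
                                 struct InFlight mem_wb) {
          if (src_reg != 0 && src_reg == ex_mem.dest_reg)
              return ex_mem.is_load ? "stall (load-use)" : "forward from EX/MEM";
          if (src_reg != 0 && src_reg == mem_wb.dest_reg)
              return "forward from MEM/WB";
          return "no hazard";
      }

      int main(void) {
          struct InFlight add  = { 1, false };  /* add r1,r2,r3 in EX/MEM */
          struct InFlight lw   = { 1, true  };  /* lw  r1,0(r2) in EX/MEM */
          struct InFlight none = { 0, false };
          printf("sub r4,r1,r3 after add: %s\n", resolve(1, add, none));
          printf("sub r4,r1,r3 after lw:  %s\n", resolve(1, lw,  none));
          return 0;
      }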

  21. Issuing Multiple Instructions/Cycle
  • Superscalar DLX: 2 instructions per cycle, 1 FP & 1 anything else
    – Fetch 64 bits/clock cycle; integer on the left, FP on the right
    – Can only issue the 2nd instruction if the 1st instruction issues
    – More ports on the FP registers to do an FP load & an FP op as a pair

  Type              Pipe stages
  Int. instruction  IF ID EX MEM WB
  FP instruction    IF ID EX MEM WB
  Int. instruction     IF ID EX MEM WB
  FP instruction       IF ID EX MEM WB
  Int. instruction        IF ID EX MEM WB
  FP instruction          IF ID EX MEM WB

  • The 1-cycle load delay expands to 3 instructions in a superscalar machine: the instruction in the right half of the pair can't use the loaded value, nor can the instructions in the next issue slot

  22. Software Scheduling to Avoid Load Hazards
  Try producing fast code for
      a = b + c;
      d = e - f;
  assuming a, b, c, d, e, and f are in memory.

  Slow code:           Fast code:
      LW  Rb,b             LW  Rb,b
      LW  Rc,c             LW  Rc,c
      ADD Ra,Rb,Rc         LW  Re,e
      SW  a,Ra             ADD Ra,Rb,Rc
      LW  Re,e             LW  Rf,f
      SUB Rd,Re,Rf         SW  a,Ra
      SW  d,Rd             SUB Rd,Re,Rf
                           SW  d,Rd
  The fast code separates each load from its first use, so neither the ADD nor the SUB has to wait out a load delay.

  23. Recap: Who Cares About the Memory Hierarchy?
  Processor-DRAM memory gap (latency), 1980-2000 (figure: performance on a log scale over time):
  • CPU: “Moore's Law”, ~60%/yr (2X / 1.5 yr)
  • DRAM: ~9%/yr (2X / 10 yrs)
  • Processor-memory performance gap grows ~50% / year

  24. Levels of the Memory Hierarchy (upper levels are faster; lower levels are larger)

  Level        Capacity    Access time            Cost                     Managed by        Staging/Xfer unit
  Registers    100s bytes  <10s ns                -                        prog./compiler    1-8 bytes (instr. operands)
  Cache        K bytes     10-100 ns              1-0.1 cents/bit          cache controller  8-128 bytes (blocks)
  Main memory  M bytes     200-500 ns             $.0001-.00001 cents/bit  OS                512-4K bytes (pages)
  Disk         G bytes     10 ms (10,000,000 ns)  10^-6 - 10^-5 cents/bit  user/operator     Mbytes (files)
  Tape         infinite    sec-min                10^-8 cents/bit          -                 -

  25. The Principle of Locality
  • Programs access a relatively small portion of the address space at any instant of time.
  • Two different types of locality:
  • Temporal locality (locality in time): if an item is referenced, it will tend to be referenced again soon (e.g., loops, reuse)
  • Spatial locality (locality in space): if an item is referenced, items whose addresses are close by tend to be referenced soon (e.g., straight-line code, array access); the loop below shows both
  • For the last 15 years, HW has relied on locality for speed
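  A minimal illustration (my code, not the slides'): the loop below touches consecutive array elements, exercising spatial locality, and reuses sum, i, and the loop code itself on every iteration, exercising temporal locality.

      #include <stdio.h>

      #define N 1024

      int main(void) {
          static int v[N];
          long sum = 0;
          /* Spatial locality: v[0], v[1], ... are adjacent in memory,
             so each cache block fetched serves several iterations.
             Temporal locality: sum and i are reused every iteration,
             and the loop instructions are re-executed repeatedly. */
          for (int i = 0; i < N; i++)
              sum += v[i];
          printf("%ld\n", sum);
          return 0;
      }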

  26. Memory Hierarchy: Terminology
  • Hit: the data appears in some block in the upper level (example: Block X)
  • Hit rate: the fraction of memory accesses found in the upper level
  • Hit time: time to access the upper level, which consists of RAM access time + time to determine hit/miss
  • Miss: the data needs to be retrieved from a block in the lower level (Block Y)
  • Miss rate = 1 - (hit rate)
  • Miss penalty: time to replace a block in the upper level + time to deliver the block to the processor
  • Hit time << miss penalty (500 instructions on the 21264!)
  (figure: the processor exchanges Blk X with upper-level memory and Blk Y with lower-level memory)

  27. Cache Measures
  • Hit rate: fraction found in that level; it is usually so high that we talk about the miss rate instead
  • Miss rate fallacy: miss rate is to average memory access time what MIPS is to CPU performance, a proxy that can mislead
  • Average memory-access time = Hit time + Miss rate x Miss penalty (ns or clocks)
  • Miss penalty: time to replace a block from the lower level, including time to replace it in the CPU
    • Access time: time to reach the lower level = f(latency to lower level)
    • Transfer time: time to transfer the block = f(bandwidth between upper & lower levels)
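  A worked instance of the formula, with assumed (illustrative) numbers: hit time 1 cycle, miss rate 5%, miss penalty 40 cycles gives average memory-access time = 1 + 0.05 x 40 = 3 cycles, so even a 5% miss rate triples the average access time.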

  28. 1 KB Direct Mapped Cache, 32 B blocks
  • For a 2^N byte cache:
  • The uppermost (32 - N) bits are always the Cache Tag
  • The lowest M bits are the Byte Select (block size = 2^M)
  • Here N = 10 and M = 5, so the 32-bit address splits into: [31:10] Cache Tag (ex: 0x50), [9:5] Cache Index (ex: 0x01), [4:0] Byte Select (ex: 0x00)
  • The tag is stored as part of the cache “state,” alongside a Valid bit; each of the 32 rows holds one 32-byte block (Byte 0 … Byte 31 in row 0, Byte 32 … Byte 63 in row 1, …, up to Byte 1023)
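  A sketch of that address split in C (my code, using the slide's parameters: a 1 KB direct-mapped cache with 32-byte blocks). The example address is chosen so it reproduces the slide's field values: tag 0x50, index 0x01, byte select 0x00.

      #include <stdio.h>

      #define CACHE_BYTES 1024u                       /* 2^10: 1 KB cache */
      #define BLOCK_BYTES 32u                         /* 2^5: 32 B blocks */
      #define NUM_BLOCKS  (CACHE_BYTES / BLOCK_BYTES) /* 32 rows          */

      int main(void) {
          unsigned addr = 0x00014020u;                      /* example address */
          unsigned byte_sel = addr & (BLOCK_BYTES - 1);          /* bits [4:0]  */
          unsigned index = (addr / BLOCK_BYTES) % NUM_BLOCKS;    /* bits [9:5]  */
          unsigned tag = addr / CACHE_BYTES;                     /* bits [31:10] */
          printf("tag=0x%x index=0x%x byte=0x%x\n", tag, index, byte_sel);
          return 0;
      }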

  29. Two-way Set Associative Cache
  • N-way set associative: N entries for each Cache Index
  • N direct-mapped caches operate in parallel (N is typically 2 to 4)
  • Example: two-way set associative cache
  • Cache Index selects a “set” from the cache
  • The two tags in the set are compared in parallel
  • Data is selected based on the tag comparison result
  (figure: two banks of valid/tag/data entries; both tags are compared against the address tag, and a mux, driven by Sel0/Sel1 and an OR of the hit signals, picks the matching cache block)
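  A minimal software model of the two-way lookup (mine, not the lecture's; the set count is an assumption for the sketch). Hardware probes both ways in parallel; in C that becomes a short loop over the ways.

      #include <stdbool.h>
      #include <stddef.h>
      #include <stdint.h>

      #define WAYS        2
      #define NUM_SETS    16          /* assumed set count for the sketch */
      #define BLOCK_BYTES 32

      struct Line {
          bool     valid;
          uint32_t tag;
          uint8_t  data[BLOCK_BYTES]; /* one 32-byte cache block */
      };

      static struct Line cache[NUM_SETS][WAYS];

      /* Returns a pointer to the hitting block, or NULL on a miss.
         In hardware the two tag compares happen in parallel and a
         mux picks the matching block. */
      static uint8_t *lookup(uint32_t addr) {
          uint32_t set = (addr / BLOCK_BYTES) % NUM_SETS;
          uint32_t tag = addr / (BLOCK_BYTES * NUM_SETS);
          for (int w = 0; w < WAYS; w++)
              if (cache[set][w].valid && cache[set][w].tag == tag)
                  return cache[set][w].data;
          return NULL;   /* miss */
      }

      int main(void) { return lookup(0x1000) != NULL; }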

  30. 4 Questions for Memory Hierarchy • Q1: Where can a block be placed in the upper level? (Block placement) • Q2: How is a block found if it is in the upper level? (Block identification) • Q3: Which block should be replaced on a miss? (Block replacement) • Q4: What happens on a write? (Write strategy)

  31. Q1: Where can a block be placed in the upper level?
  • Block 12 placed in an 8-block cache:
  • Fully associative: block 12 can go anywhere
  • Direct mapped: block 12 can go only into block (12 mod 8) = 4
  • 2-way set associative: block 12 can go anywhere in set (12 mod 4) = 0
  • S.A. mapping = block number modulo number of sets

  32. Q2: How is a block found if it is in the upper level? • Tag on each block • No need to check index or block offset • Increasing associativity shrinks index, expands tag

  33. Q3: Which block should be replaced on a miss?
  • Easy for direct mapped
  • Set associative or fully associative: Random, or LRU (least recently used)

  Miss rates by associativity, LRU vs. Random:
  Size     2-way LRU  2-way Rnd  4-way LRU  4-way Rnd  8-way LRU  8-way Rnd
  16 KB    5.2%       5.7%       4.7%       5.3%       4.4%       5.0%
  64 KB    1.9%       2.0%       1.5%       1.7%       1.4%       1.5%
  256 KB   1.15%      1.17%      1.13%      1.13%      1.12%      1.12%

  34. Q4: What happens on a write?
  • Write through: the information is written to both the block in the cache and the block in the lower-level memory.
  • Write back: the information is written only to the block in the cache. The modified cache block is written to main memory only when it is replaced.
  • Is the block clean or dirty?
  • Pros and cons of each?
  • WT: read misses cannot result in writes
  • WB: no repeated writes to the same location
  • WT is always combined with write buffers so that the processor doesn't wait for the lower-level memory
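  One way to see the difference in code (a sketch under assumed structures, not the lecture's code): write-through updates memory on every store, while write-back only sets a dirty bit and defers the single memory write to eviction.

      #include <stdbool.h>
      #include <stdint.h>

      struct Line {
          bool     valid, dirty;
          uint32_t data;
      };

      static uint32_t memory[1024];      /* toy lower-level memory */

      /* Write-through: update cache and memory on every store
         (hardware would put the memory write in a write buffer). */
      static void store_wt(struct Line *ln, uint32_t addr, uint32_t val) {
          ln->data = val;
          memory[addr] = val;
      }

      /* Write-back: update only the cache and mark the line dirty. */
      static void store_wb(struct Line *ln, uint32_t addr, uint32_t val) {
          ln->data = val;
          ln->dirty = true;
          (void)addr;                    /* memory untouched for now */
      }

      /* On eviction, a dirty write-back line is flushed once. */
      static void evict_wb(struct Line *ln, uint32_t addr) {
          if (ln->valid && ln->dirty)
              memory[addr] = ln->data;   /* the single deferred write */
          ln->valid = ln->dirty = false;
      }

      int main(void) {
          struct Line ln = { true, false, 0 };
          store_wt(&ln, 7, 42);          /* memory[7] updated immediately */
          store_wb(&ln, 7, 43);          /* only the cache line changes   */
          evict_wb(&ln, 7);              /* now memory[7] becomes 43      */
          return (int)(memory[7] - 43);  /* 0 on success */
      }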

  35. Quicksort vs. Radix as we vary the number of keys: instructions & time (figure: instruction count and time per key vs. set size in keys, for quicksort and radix sort)

  36. Quicksort vs. Radix as we vary the number of keys: cache misses (figure: cache misses per key vs. set size in keys, for quicksort and radix sort)
  What is the proper approach to fast algorithms?

  37. A Modern Memory Hierarchy
  • By taking advantage of the principle of locality:
  • Present the user with as much memory as is available in the cheapest technology.
  • Provide access at the speed offered by the fastest technology.

  Level (with a second-level SRAM cache between on-chip cache and DRAM):
  Registers / Datapath:       speed ~1 ns,                          size 100s of bytes
  On-chip cache:              speed ~10s ns,                        size Ks
  Main memory (DRAM):         speed ~100s ns,                       size Ms
  Secondary storage (disk):   speed ~10,000,000s ns (10s ms),       size Gs
  Tertiary storage (tape):    speed ~10,000,000,000s ns (10s sec),  size Ts

  38. Basic Issues in VM System Design
  • Size of the information blocks that are transferred from secondary to main storage (M)
  • If a block of information is brought into M and M is full, then some region of M must be released to make room for the new block --> replacement policy
  • Which region of M is to hold the new block --> placement policy
  • A missing item is fetched from secondary memory only on the occurrence of a fault --> demand load policy
  Paging Organization: the virtual and physical address spaces are partitioned into blocks of equal size: pages (virtual) and page frames (physical). (figure: reg / cache / mem / disk hierarchy)

  39. Address Map
  V = {0, 1, . . . , n - 1}  virtual address space
  M = {0, 1, . . . , m - 1}  physical address space  (n > m)
  MAP: V --> M ∪ {0}  address mapping function
  MAP(a) = a'  if data at virtual address a is present at physical address a' and a' is in M
  MAP(a) = 0   if data at virtual address a is not present in M (a missing-item fault)
  (figure: the processor sends virtual address a through the address translation mechanism to main memory; on a fault, the fault handler invokes the OS, which performs the transfer from secondary memory)

  40. Paging Organization
  • The page (here 1K) is the unit of mapping and also the unit of transfer from virtual to physical memory
  • Virtual memory: pages 0, 1, …, 31 at virtual addresses 0, 1024, …, 31744; physical memory: frames 0, 1, …, 7 at physical addresses 0, 1024, …, 7168
  • Address mapping: the 10-bit displacement passes through unchanged; the VA's page number indexes the page table (located in physical memory, reached via the Page Table Base Reg), whose entry holds the access rights and the frame number
  • The physical memory address is formed from the frame number + displacement (actually, concatenation is more likely than addition)
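  A sketch of that translation in C (my code, using the slide's 1K pages and 32-page/8-frame sizes; names like page_table are assumptions):

      #include <stdio.h>

      #define PAGE_BITS 10u                  /* 1K pages, as on the slide */
      #define PAGE_SIZE (1u << PAGE_BITS)
      #define NUM_PAGES 32u                  /* virtual pages 0..31  */
      #define NUM_FRAMES 8u                  /* physical frames 0..7 */

      struct PTE {
          unsigned frame;                    /* physical frame number  */
          unsigned access_rights;            /* e.g., read/write bits  */
      };

      static struct PTE page_table[NUM_PAGES];

      static unsigned translate(unsigned va) {
          unsigned vpn  = va >> PAGE_BITS;           /* VA page number */
          unsigned disp = va & (PAGE_SIZE - 1);      /* displacement   */
          /* Concatenate frame number with displacement (not addition). */
          return (page_table[vpn].frame << PAGE_BITS) | disp;
      }

      int main(void) {
          for (unsigned p = 0; p < NUM_PAGES; p++)
              page_table[p].frame = p % NUM_FRAMES;  /* toy mapping */
          /* Page 31 (VA 31744) maps to frame 7 (PA 7168). */
          printf("VA %u -> PA %u\n", 31744u + 5u, translate(31744u + 5u));
          return 0;
      }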

  41. Virtual Address and a Cache (figure: the CPU issues a VA; translation produces a PA that indexes the cache; on a miss, main memory is accessed)
  • It takes an extra memory access to translate a VA to a PA
  • This makes cache access very expensive, and it is the "innermost loop" that you want to go as fast as possible
  • ASIDE: why access the cache with the PA at all? VA caches have a problem!
  • Synonym / alias problem: two different virtual addresses map to the same physical address => two different cache entries holding data for the same physical address!
  • On an update: you must update all cache entries with the same physical address, or memory becomes inconsistent
  • Determining this requires significant hardware: essentially an associative lookup on the physical address tags to see if you have multiple hits
  • Or a software-enforced alias boundary: aliases must share the same low-order bits of VA and PA over at least the cache size

  42. TLBs
  • A way to speed up translation is to use a special cache of recently used page table entries; this has many names, but the most frequently used is Translation Lookaside Buffer or TLB
  • A TLB entry holds: Virtual Address, Physical Address, Dirty, Ref, Valid, and Access bits
  • Really just a cache on the page table mappings
  • TLB access time is comparable to cache access time (much less than main memory access time)

  43. Translation Look-Aside Buffers
  • Just like any other cache, the TLB can be organized as fully associative, set associative, or direct mapped
  • TLBs are usually small, typically no more than 128-256 entries even on high-end machines. This permits a fully associative lookup on these machines. Most mid-range machines use small n-way set associative organizations.
  (figure: translation with a TLB: the CPU presents a VA; a TLB hit yields the PA for the cache lookup, a miss falls back to full translation; rough costs relative to a cache access t: TLB lookup ~1/2 t, translation ~20 t)
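  To make that flow concrete, here is a sketch (mine; the structures and sizes are assumptions) of a fully associative TLB probe that falls back to a page-table walk on a miss:

      #include <stdbool.h>
      #include <stdint.h>

      #define PAGE_BITS   10u
      #define NUM_PAGES   32u
      #define TLB_ENTRIES 8      /* small & fully associative, per the slide */

      struct TLBEntry {
          bool     valid;
          uint32_t vpn;          /* virtual page number   */
          uint32_t frame;        /* physical frame number */
      };

      static struct TLBEntry tlb[TLB_ENTRIES];
      static uint32_t page_table[NUM_PAGES];   /* toy flat table: vpn -> frame */

      static uint32_t translate(uint32_t va) {
          uint32_t vpn  = va >> PAGE_BITS;
          uint32_t disp = va & ((1u << PAGE_BITS) - 1);
          /* Fast path: associative TLB probe (~1/2 t in the figure). */
          for (int i = 0; i < TLB_ENTRIES; i++)
              if (tlb[i].valid && tlb[i].vpn == vpn)
                  return (tlb[i].frame << PAGE_BITS) | disp;
          /* Slow path: walk the page table (~20 t) and refill entry 0
             (a real TLB would pick a victim by LRU or at random). */
          uint32_t frame = page_table[vpn];
          tlb[0] = (struct TLBEntry){ true, vpn, frame };
          return (frame << PAGE_BITS) | disp;
      }

      int main(void) {
          for (uint32_t p = 0; p < NUM_PAGES; p++)
              page_table[p] = p % 8;
          /* VA 0x1234: vpn 4, disp 0x234 -> frame 4 -> PA 0x1234 here. */
          return translate(0x1234) == 0x1234 ? 0 : 1;
      }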

  44. Overlapped Cache & TLB Access
  • Because the low-order page-offset bits need no translation (here a 20-bit page # and a 12-bit disp), the cache index lookup and the associative TLB lookup can proceed in parallel; the PA from the TLB is then compared against the cache tag
  (figure: a 1 KB cache with 4-byte blocks indexed while the TLB does its associative lookup)
  IF cache hit AND (cache tag = PA) THEN deliver data to CPU
  ELSE IF [cache miss OR (cache tag != PA)] AND TLB hit THEN access memory with the PA from the TLB
  ELSE do standard VA translation
