
Qualifying Exam Review


Presentation Transcript


  1. Qualifying Exam Review — CSCE 513 Computer Architecture • Topics: • Old questions • Major equations • Other questions • Comments on what was missing • September 17, 2013

  2. Syllabus for Architecture Exam • Architecture (CSCE 513): Computer Architecture: A Quantitative Approach, 5th ed., Hennessy and Patterson, Morgan Kaufmann, Chapters 1-5, 8.1-8.5, Appendices A, B, C • Fundamentals of computer design • Instruction sets • Instruction-level parallelism • Loop unrolling and static techniques • Dynamic techniques: Tomasulo's algorithm, reorder buffer • Memory hierarchy design • Thread-level parallelism • Warehouse-scale computers • For further information see Dr. Matthews's CSCE 513 website (http://www.cse.sc.edu/~matthews/csce513.html) • http://www.cse.sc.edu/~fenner/qexam/index.html

  3. Amdahl's Law with Fractional Use Factor • Example: Suppose we are considering an enhancement to a web server. The enhanced CPU is 10 times faster on computation but the same speed on I/O. Suppose also that 60% of the time is spent waiting on I/O. (Ref.: CAAQA)
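One way to work this (a hedged sketch using the standard Amdahl's Law form): the enhanced fraction is the 40% of time spent on computation, so

Speedup_overall = 1 / ((1 - Fraction_enhanced) + Fraction_enhanced / Speedup_enhanced) = 1 / (0.6 + 0.4/10) = 1 / 0.64 ≈ 1.56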

  4. Appendix A – Instruction Set Architecture (ISA) • RISC vs. CISC • Uniform instruction length and format • Addressing modes • Operand selection • Instruction frequency • Integer programs – loads 26%, stores 10% • Floating point – loads 12%, stores 2%

  5. Appendix B – Basic Memory Hierarchy Design • Caches – designed to take advantage of locality • Spatial locality • Temporal locality • Organization: address = tag / set-index / block-offset • b = log2(B), where B is the block size in bytes and b is the width of the block-offset field • L = CacheSize / B, where L is the number of lines in the cache • S = L / Associativity, where S is the number of sets • s = log2(S), the width of the set-index field (the sketch below turns these formulas into code) • 4 organizational decisions: • Block placement • Block identification • Block replacement • How do we handle writes?
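As a quick check of the formulas above, here is a minimal Python sketch (not from the slides; the function name and parameters are illustrative) that derives the tag/index/offset widths:

```python
import math

def cache_fields(cache_size, block_size, associativity, addr_bits):
    b = int(math.log2(block_size))    # block-offset bits
    lines = cache_size // block_size  # L: number of lines
    sets = lines // associativity     # S: number of sets
    s = int(math.log2(sets))          # set-index bits
    tag = addr_bits - s - b           # remaining bits form the tag
    return tag, s, b

# Example: the 2-way cache from the next slide (13-bit addresses,
# 4-byte lines, 16 lines total) -> tag/index/offset = 8/3/2 bits.
print(cache_fields(cache_size=64, block_size=4, associativity=2, addr_bits=13))
```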

  6. Cache Example • Physical addresses are 13 bits wide. • The cache is 2-way set associative, with a 4-byte line size and 16 total lines. • Physical address: 0x0E34 (see the worked decode below)
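A worked decode under that organization (a sketch, assuming the tag/index/offset split above): B = 4 gives b = 2 offset bits; 16 lines / 2 ways = 8 sets gives s = 3 index bits; the tag is the remaining 13 - 3 - 2 = 8 bits. In binary, 0x0E34 = 0 1110 0011 0100 (13 bits), so offset = 00, set index = 101 (set 5), and tag = 0111 0001 = 0x71.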

  7. Average Memory Access Time (AMAT) – a weighted average of hit time and miss penalty
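The formula behind the weighted average (standard CAAQA form):

AMAT = HitTime + MissRate × MissPenalty

and with a second cache level it nests:

AMAT = HitTime_L1 + MissRate_L1 × (HitTime_L2 + MissRate_L2 × MissPenalty_L2)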

  8. Figure 2.1 – Typical memory hierarchy

  9. 6 Basic Cache Optimizations • 3 categories: • Reducing miss rate: • Larger block size • Larger cache size • Higher associativity • Reducing miss penalty: • Multilevel caches • Give reads priority over writes • Reducing hit time: • Overlapping TLB address translation with cache access

  10. 2.2 – 10 Advanced Cache Optimizations • Five categories: • Reducing hit time: small and simple first-level caches and way prediction. Both techniques also generally decrease power consumption. • Increasing cache bandwidth: pipelined caches, multibanked caches, and nonblocking caches. These techniques have varying impacts on power consumption. • Reducing the miss penalty: critical word first and merging write buffers. These optimizations have little impact on power. • Reducing the miss rate: compiler optimizations. • Reducing the miss penalty or miss rate via parallelism: hardware prefetching and compiler prefetching.

  11. Classical 5-stage pipeline: IF (instruction fetch), ID (instruction decode), EX (execute), MEM (memory access), WB (write-back)

  12. Data Hazards – Forwarding • Forwarding options: • None • Through registers • Full • Fig. C.27 – forwarding paths • Fig. C.26 – load/use hazards

  13. Handling branches

  14. Tomasulo's Algorithm

  15. Reorder Buffer (ROB) • Out-of-order execution • In-order commit

  16. Static Techniques – Loop Unrolling, VLIW • Original loop • Stalls • Unrolled • Scheduled

  17. Thread Level Parallelism • Cache coherency

  18. Snoopy Cache vs. Directory-Based Coherence

  19. Spring 2013 – Question 1

  20. Spring 2010

  21. Ques 2 Spr 2010 • You are the lead architect of Intel's Haswell micro-architecture team. You're currently facing several design decisions involving the memory system for the upcoming line of processors. The processor will run at 3 GHz and have an average benchmark CPI of 0.2 excluding memory accesses. The only instructions that read or write data from memory are loads (20% of all instructions) and stores (5% of all instructions). The base memory system has the following characteristics:

  22. Ques 2 Spr 2010 continued
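The base system's characteristics table isn't captured in this transcript, so only the method can be sketched (hedged): with loads and stores as given, there are 0.25 data references per instruction plus one instruction fetch, and

CPI_effective = CPI_base + MemoryReferencesPerInstruction × MissRate × MissPenalty(cycles)

with execution time per instruction = CPI_effective / 3 GHz.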

  23. Spr 2010 Question 3

  24. Fall 2009 • 1. It's 1997 and you're a graduate student at Stanford named Larry Page. You're trying to build a new Internet search engine, and your strategy is to optimize its performance by ensuring that during a search, neither the CPU nor its disk array is idle. • The search database is logically divided into 100 MB contiguous blocks. After the first block is read, the engine reads subsequent blocks while using the CPU to search the previously read block. It takes 100 ms for the CPU to search each block. • You decide to use disks that each rotate at 170 revolutions/sec, have an average seek time of 8 ms, a transfer rate of 50 MB/sec, and a controller overhead of 2 ms. • How many disks do you need in your disk array?
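A hedged way to work it (assuming average rotational latency of half a revolution): reading one block from one disk takes about 8 ms (seek) + 0.5/170 s ≈ 2.9 ms (rotation) + 100 MB / 50 MB/s = 2000 ms (transfer) + 2 ms (controller) ≈ 2013 ms. The CPU consumes a block every 100 ms, so keeping it busy needs ceil(2013 / 100) = 21 disks.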

  25. Determine the total branch penalty for a branch-target buffer, assuming the penalty cycles for individual mispredictions shown below. • Make the following assumptions about the prediction accuracy and hit rate: • Prediction accuracy is 90% (for instructions in the buffer) • Hit rate in the buffer is 90% (for branches predicted taken)
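The per-case penalty table isn't reproduced in this transcript, but the standard shape of the computation (a hedged sketch) is:

BranchPenalty = HitRate × (1 - Accuracy) × Penalty_mispredict + (1 - HitRate) × TakenFraction × Penalty_miss

with the penalty cycles for each case filled in from the table the question references.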

  26. You are building a system around a processor with in-order execution that runs at 4 GHz and has a CPI of 0.7 excluding memory accesses. The only instructions that read or write data from memory are loads (20% of all instructions) and stores (5% of all instructions). • The memory system for this computer is composed of a split L1 cache that imposes no penalty on hits. Both the I-cache and D-cache are direct-mapped and hold 64 KB each. • The I-cache has a 1% miss rate and 64-byte blocks and the D-cache is write-through with a 7% miss rate and 16-byte blocks. There is a write buffer on the D-cache that eliminates stalls for 95% of all writes.

  27. 3. continued • The 12 MB write-back, unified L2 cache has 64-byte blocks and an access time of 15 ns. It is connected to the L1 cache by a 128-bit data bus that runs at 266 MHz and can transfer one 128-bit word per bus cycle. Of all memory references sent to the L2 cache in this system, 80% are satisfied without going to main memory. Also, 50% of all blocks replaced are dirty. • The 128-bit-wide main memory has an access latency of 30 ns, after which any number of bus words may be transferred at the rate of one per cycle on the 128-bit-wide 133 MHz main memory bus.
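Before plugging into the stall formulas, the miss penalties have to be converted into 4 GHz CPU cycles. A hedged Python sketch of that conversion (variable names are illustrative; the 50% dirty replacements would add a second memory transfer per eviction on top of this):

```python
CPU_GHZ = 4.0
cycle_ns = 1 / CPU_GHZ                  # 0.25 ns per CPU cycle

# L2 access: 15 ns latency + one 64-byte block over the 16-byte-wide
# 266 MHz bus = 4 bus cycles of transfer.
l2_bus_ns = 4 * (1 / 0.266)             # ~15.0 ns transfer
l1_miss_penalty = (15 + l2_bus_ns) / cycle_ns    # ~120 CPU cycles

# Main memory: 30 ns latency + 4 words on the 16-byte-wide 133 MHz bus.
mem_bus_ns = 4 * (1 / 0.133)            # ~30.1 ns transfer
l2_miss_penalty = (30 + mem_bus_ns) / cycle_ns   # ~240 CPU cycles

print(round(l1_miss_penalty), round(l2_miss_penalty))
```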

  28. Spring 2009 Architecture 1 • Consider the following three hypothetical processors, which we characterize with a SPEC benchmark: • (a) A simple MIPS two-issue static pipe running at a clock rate of 2 GHz and achieving a pipeline CPI of 0.6. This processor has a cache system that yields 0.0025 misses per instruction on average. • (b) A deeply pipelined version of the two-issue MIPS processor with slightly smaller caches and a 2.5 GHz clock rate. The pipeline CPI of the processor is 0.8, and the smaller caches yield 0.0055 misses per instruction on average. • (c) A speculative superscalar with a 64-entry window that achieves an average issue rate of 3.5. This processor has the smallest caches, which lead to 0.01 misses per instruction, but it hides 25% of the miss penalty on every miss by dynamic scheduling. This processor has a 1.5 GHz clock. • Assume that the main memory time (which sets the miss penalty) is 50 ns. Determine the relative performance of the three processors. Hint: processor CPI can be computed by adding the pipeline CPI and cache CPI.
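Following the hint, a hedged Python sketch of the computation (each processor's 50 ns miss penalty is converted into cycles at its own clock):

```python
def perf(clock_ghz, pipe_cpi, misses_per_instr, hidden=0.0):
    penalty = 50 * clock_ghz * (1 - hidden)  # 50 ns in cycles, less any hidden part
    cpi = pipe_cpi + misses_per_instr * penalty
    return clock_ghz / cpi                   # instructions per ns (relative measure)

a = perf(2.0, 0.6, 0.0025)            # ~2.35
b = perf(2.5, 0.8, 0.0055)            # ~1.68
c = perf(1.5, 1 / 3.5, 0.01, 0.25)    # ~1.77
print(a / b, a / c)                   # processor (a) comes out fastest
```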

  29. Spring 2009 Architecture 2 • Suppose we have an application running on a 32-processor multiprocessor, which has an 800 ns time to handle a reference to a remote memory. For this application, assume that all the references except those involving communication hit in the local memory hierarchy. Processors are stalled on a remote request, and the processor clock rate is 1 GHz. If the base IPC (assuming that all references hit in the cache) is 4, how much faster is the multiprocessor if there is no communication versus if 0.4% of the instructions involve a remote communication reference?
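A hedged worked version: CPI_base = 1 / IPC = 0.25; a remote reference costs 800 ns × 1 GHz = 800 cycles; CPI_comm = 0.25 + 0.004 × 800 = 3.45; so the machine with no communication is 3.45 / 0.25 = 13.8 times faster.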

  30. Spring 2009 Architecture 3 • Three enhancements with the following speedups are proposed for a new architecture: • Speedup1 = 20 • Speedup2 = 10 • Speedup3 = 8 • Only one enhancement is usable at a time. • (a) If enhancements 1 and 2 are each usable for 25% of the time, what fraction of the time must enhancement 3 be used to achieve an overall speedup of 10? • (b) Assume the enhancements can be used 25%, 35%, and 10% of the time for enhancements 1, 2, and 3, respectively. For what fraction of the reduced execution time is no enhancement in use?
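A hedged worked sketch: (a) with F1 = F2 = 0.25 and F3 unknown, set 1 / ((1 - 0.25 - 0.25 - F3) + 0.25/20 + 0.25/10 + F3/8) = 10; the denominator must equal 0.1, so 0.5375 - (7/8)·F3 = 0.1 and F3 = 0.5, i.e., enhancement 3 must be usable 50% of the time. (b) The reduced time is 0.30 + 0.25/20 + 0.35/10 + 0.10/8 = 0.36 of the original, of which the unenhanced 0.30 is 0.30 / 0.36 ≈ 83%.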

  31. Fall 2008 Architecture 1 • (Quan, Fall 2008) Your company has just bought a new dual Pentium processor, and you have been tasked with optimizing your software for this processor. You will run two applications on this dual Pentium, but the resource requirements are not equal. The first application needs 75% of the resources, and the other only 25% of the resources. • (a) Given that 60% of the first application is parallelizable, how much speedup would you achieve with that application if run in isolation? • (b) Given that 95% of the second application is parallelizable, how much speedup would this application observe if run in isolation? • (c) Given that 60% of the first application is parallelizable, how much overall system speedup would you observe if you parallelized it, but not the second application? • (d) How much overall system speedup would you achieve if you parallelized both applications, given the information in parts (a) and (b)?
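A hedged sketch for parts (a) and (b), assuming "dual Pentium" means the parallelizable portion runs on 2 processors:

```python
def amdahl(parallel_fraction, n=2):
    # Amdahl's Law with the parallel portion sped up by a factor of n
    return 1 / ((1 - parallel_fraction) + parallel_fraction / n)

print(amdahl(0.60))   # (a) ~1.43
print(amdahl(0.95))   # (b) ~1.90
# For (c) and (d), weight each application's (possibly unimproved)
# time by its 75% / 25% share of the system before applying Amdahl.
```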

  32. Fall 2008 Architecture 2 • (Quan, Fall 2008) Assume a five-stage single-pipeline microarchitecture (fetch, decode, execute, memory, write back) and the code below. There is no forwarding.

Loop: LW   R3,0(R0)
      LW   R1,0(R3)
      ADDI R1,R1,#1
      SUB  R4,R3,R2
      SW   R1,0(R3)
      BNZ  R4,Loop

  33. Fall 08 Arch. prob. 2 continued • (a) Show the phases of each instruction per clock cycle for one iteration of the above loop. • (b) How many clock cycles per loop iteration are lost to branch overhead? • (c) Assume a static branch predictor predicting always taken in the Decode stage. • Now how many clock cycles are wasted on branch overhead for this segment of code?

  34. Fall 2008 Architecture 3 • (Quan, Fall 2008) Suppose you have a computer with the following characteristics: • 1) the processor pipeline can run an instruction each cycle • 2) the cache can provide data every cycle (i.e. no penalty for cache hits) • 3) the instruction cache miss rate is 1% • 4) the data cache miss rate is 5% • 5) 20% of instructions are memory instructions • 6) the cache miss penalty is 80 cycles.

  35. Fall 08 Arch. prob. 3 continued • Assume that you have decided to purchase a new computer. For the budget allocated, you can either • 1) purchase a machine with a processor and cache that is twice as fast as your current one (memory speed is the same as the old machine, i.e., the cache miss penalty is 160 cycles), or • 2) purchase a machine with a processor and cache that is the same speed as your old machine but in which the cache is twice as large and the cache miss rate for the programs you run will drop by 40% with this larger cache. • Which computer are you best off purchasing? Explain in detail, showing the relative performance of each choice.
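A hedged comparison sketch (assuming base CPI = 1 plus memory stall cycles, with 0.01 instruction-fetch misses and 0.20 × 0.05 data misses per instruction):

```python
misses_per_instr = 1 * 0.01 + 0.20 * 0.05     # 0.02 misses/instruction

base_cpi  = 1 + misses_per_instr * 80         # old machine: 2.6
fast_cpi  = 1 + misses_per_instr * 160        # option 1: 4.2, but cycle time halved
large_cpi = 1 + misses_per_instr * 0.6 * 80   # option 2: miss rate -40% -> 1.96

print(base_cpi / (fast_cpi / 2))   # option 1 speedup ~1.24
print(base_cpi / large_cpi)        # option 2 speedup ~1.33 -> option 2 wins
```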

  36. Fall 2004 Exam – True/False • Amdahl's Law • Can machines with the same ISA be compared by MIPS? • A large variety of memory addressing modes → degraded performance due to an increase in CPI or IC • Ideal speedup of a pipelined processor = number of stages • Structural hazards can be resolved by adding enough hardware, e.g., floating-point adders, etc.
