The Memory Gap: to Tolerate or to Reduce? Jean-Luc Gaudiot Professor University of California, Irvine April 2nd, 2002
Outline • The problem: the Memory Gap • Simultaneous Multithreading • Decoupled Architectures • Memory Technology • Processor-In-Memory
The Memory Latency Problem • Technological Trend: Memory latency is getting longer relative to microprocessor speed (40% per year) • Problem: Memory Latency - Conventional Memory Hierarchy Insufficient: • Many applications have large data sets that are accessed non-contiguously. • Some SPEC benchmarks spend more than half of their time stalling [Lebeck and Wood 1994]. • Domain: benchmarks with large data sets: symbolic, signal processing and scientific programs
Some Solutions

| Solution | Limitations |
|---|---|
| Larger caches | Slow; works well only if the working set fits in the cache and there is temporal locality |
| Hardware prefetching | Cannot be tailored for each application; behavior is based on past and present execution-time behavior |
| Software prefetching | Must ensure the overheads of prefetching do not outweigh the benefits, which forces conservative prefetching; adaptive software prefetching is required to change the prefetch distance during run-time; hard to insert prefetches for irregular access patterns (see the sketch below) |
| Multithreading | Solves the throughput problem, not the memory latency problem |
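To make the software-prefetching trade-off concrete, here is a minimal C sketch (not from the talk) using the GCC/Clang __builtin_prefetch intrinsic: prefetches are issued a fixed distance ahead of the consuming loop, and the distance PF_DIST is an assumed tuning knob; choosing it well is exactly what the adaptive schemes above try to automate.

```c
/* Minimal software-prefetching sketch (illustrative; not from the talk).
 * PF_DIST is a hypothetical prefetch distance, in elements.  Too small and
 * the data still arrives late; too large and prefetched lines are evicted
 * before use -- the overhead-vs-benefit trade-off noted above. */
#include <stddef.h>

#define PF_DIST 16   /* assumed prefetch distance for illustration */

double sum_with_prefetch(const double *a, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; ++i) {
        if (i + PF_DIST < n)
            __builtin_prefetch(&a[i + PF_DIST], 0 /* read */, 1 /* low temporal locality */);
        s += a[i];
    }
    return s;
}
```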
Limitations of Present Solutions • Huge cache: • Slow, and works well only if the working set fits in the cache and there is some kind of locality • Prefetching • Hardware prefetching • Cannot be tailored for each application • Behavior is based on past and present execution-time behavior • Software prefetching • Must ensure the overheads of prefetching do not outweigh the benefits • Hard to insert prefetches for irregular access patterns • SMT • Enhances utilization and throughput at the thread level, but does not reduce the memory latency itself
Outline • The problem: the memory gap • Simultaneous Multithreading • Decoupled Architectures • Memory Technology • Processor-In-Memory
Simultaneous Multi-Threading (SMT) • Horizontal and vertical sharing • Hardware support of multiple threads • Functional resources shared by multiple threads • Shared caches • Highest utilization with multi-program or parallel workload
SMT Compared to SS • Superscalar processors execute multiple instructions per cycle • Superscalar functional units sit idle due to I-fetch stalls, conditional branches, and data dependencies • SMT dispatches instructions from multiple instruction streams (threads), allowing efficient execution and latency tolerance • Vertical sharing (TLP and block multi-threading) • Horizontal sharing (ILP and simultaneous multi-thread instruction dispatch)
CMP Compared to SS • CMP uses thread-level parallelism to increase throughput • CMP has layout efficiency • More functional units • Faster clock rate • The CMP hardware partition limits performance • Smaller level-1 resources cause increased miss rates • Execution resources are not available across the partition
Wide Issue SS Inefficiencies • Architecture and software limitations • Limited program ILP => idle functional units • Increased waste from speculative execution • Technology issues • Area grows as O(d³) {d = issue or dispatch width} • Area grows an additional O(t log₂ t) {t = number of SMT threads} • Increased wire delays (increased area, tighter spacings, thinner oxides, thinner metal) • Increased memory access delays versus processor clock • Larger pipeline penalties • Problems solved through: • CMP - localizes processor resources • SMT - efficient use of FUs, latency tolerance • Both CMP and SMT - thread-level parallelism (the growth terms above are worked through in a short example below)
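A quick worked example (our own arithmetic, with assumed values of d and t, not from the slides) of how harshly those growth terms bite when a design is scaled up:

```latex
% Illustrative arithmetic only; the d and t values are assumptions.
% Dispatch-width term, area ~ O(d^3): going from a 4-issue to an 8-issue
% core costs roughly 8x the area.
% SMT term, extra area ~ O(t log2 t): going from 2 to 8 hardware threads
% costs roughly 12x the SMT overhead.
\[
\frac{8^{3}}{4^{3}} = 8,
\qquad
\frac{8\log_{2}8}{2\log_{2}2} = \frac{24}{2} = 12
\]
```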
POSM Configurations • All architectures above have eight threads • Which configuration has the highest performance for an average workload? • Run benchmarks on various configurations, find optimal performance point
Superscalar, SMT, CMP, and POSM Processors • CMP and SMT both have higher throughput than superscalar • The combination of CMP/SMT has the highest throughput • Experimental results
Equivalent Functional Units • SMT.p1 has the highest performance through vertical and horizontal sharing • CMP.p8 shows a linear increase in performance
Equivalent Silicon Area and System Clock Effects • SMT.p1 throughput is limited • SMT.p1 and POSM.p2 have equivalent single thread performance • POSM.p4 and CMP.p8 have highest throughput
Synthesis • “Comparable silicon resources” are required for processor evaluation • POSM.p4 has 56% more throughput than the wide-issue SMT.p1 • Future wide-issue processors are difficult to implement, increasing the POSM advantage • Smaller technology spacings have higher routing delays due to parasitic resistance and capacitance • The larger the processor, the larger the O(d²t log₂ t) and O(d³t) impact on area and delays • SMT works well with deep pipelines • The ISA and micro-architecture affect SMT overhead • A 4-thread x86 SMT would have 1/8th the SMT overhead • Layout and micro-architecture techniques reduce SMT overhead
Outline • The problem: the memory gap • Simultaneous Multithreading • Decoupled Architectures • Memory Technology • Processor-In-Memory
The HiDISC Approach • Observation: • Software prefetching impacts compute performance • PIMs and RAMBUS offer a high-bandwidth memory system - useful for speculative prefetching • Approach: • Add a processor to manage prefetching -> hide overhead • Compiler explicitly manages the memory hierarchy • Prefetch distance adapts to the program runtime behavior
Decoupled Architectures
[Figure: four organizations compared side by side: MIPS (conventional, single 8-issue computation processor), DEAP and CAPP (decoupled: a computation processor (CP) plus an access processor (AP)), and HiDISC (new decoupled: CP, AP, and a cache management processor), each with registers, caches, and a 2nd-level cache and main memory]
DEAP: [Kurian, Hulina, & Caraor '94]; PIPE: [Goodman '85]; other decoupled processors: ACRI, ZS-1, WA
What is HiDISC?
• A dedicated processor for each level of the memory hierarchy
• Explicitly manage each level of the memory hierarchy using instructions generated by the compiler
• Hide memory latency by converting data access predictability to data access locality (Just-in-Time Fetch)
• Exploit instruction-level parallelism without extensive scheduling hardware
• Zero-overhead prefetches for maximal computation throughput
[Figure: the HiDISC organization: a 2-issue Computation Processor (CP) with registers, a 3-issue Access Processor (AP) with the L1 cache, and a 3-issue Cache Management Processor (CMP) in front of the L2 cache and higher levels, connected through the Store Address Queue, Store Data Queue, Load Data Queue, and Slip Control Queue]
Slip Control Queue
• The Slip Control Queue (SCQ) adapts dynamically
• Late prefetches = prefetched data arrived after the load had been issued
• Useful prefetches = prefetched data arrived before the load had been issued

if (prefetch_buffer_full())
    Don't change size of SCQ;
else if ((2 * late_prefetches) > useful_prefetches)
    Increase size of SCQ;
else
    Decrease size of SCQ;

(a C rendering of this policy appears below)
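A small C rendering of the adaptation policy above, purely as a sketch: the counter names and the scq_size field are assumptions for illustration; the slide specifies only the decision rule.

```c
/* Sketch of the SCQ adaptation rule shown above.  The struct layout and
 * field names are hypothetical; only the if/else policy comes from the
 * slide. */
typedef struct {
    int scq_size;          /* current slip distance (entries in the SCQ)  */
    int late_prefetches;   /* data arrived after the load was issued      */
    int useful_prefetches; /* data arrived before the load was issued     */
    int buffer_full;       /* non-zero if the prefetch buffer is full     */
} scq_state_t;

static void scq_adapt(scq_state_t *s)
{
    if (s->buffer_full) {
        /* don't change the size of the SCQ */
    } else if (2 * s->late_prefetches > s->useful_prefetches) {
        s->scq_size++;         /* prefetches arrive too late: slip more   */
    } else if (s->scq_size > 1) {
        s->scq_size--;         /* prefetches are early enough: slip less  */
    }
}
```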
Decoupling Programs for HiDISC (Discrete Convolution, Inner Loop)

Inner Loop Convolution:
for (j = 0; j < i; ++j)
    y[i] = y[i] + (x[j] * h[i-j-1]);

Computation Processor Code:
while (not EOD)
    y = y + (x * h);
send y to SDQ

Access Processor Code:
for (j = 0; j < i; ++j) {
    load (x[j]);
    load (h[i-j-1]);
    GET_SCQ;
}
send (EOD token)
send address of y[i] to SAQ

Cache Management Code:
for (j = 0; j < i; ++j) {
    prefetch (x[j]);
    prefetch (h[i-j-1]);
    PUT_SCQ;
}

SAQ: Store Address Queue, SDQ: Store Data Queue, SCQ: Slip Control Queue, EOD: End of Data
Benchmarks

| Benchmark | Source of Benchmark | Lines of Source Code | Description | Data Set Size |
|---|---|---|---|---|
| LLL1 | Livermore Loops [45] | 20 | 1024-element arrays, 100 iterations | 24 KB |
| LLL2 | Livermore Loops | 24 | 1024-element arrays, 100 iterations | 16 KB |
| LLL3 | Livermore Loops | 18 | 1024-element arrays, 100 iterations | 16 KB |
| LLL4 | Livermore Loops | 25 | 1024-element arrays, 100 iterations | 16 KB |
| LLL5 | Livermore Loops | 17 | 1024-element arrays, 100 iterations | 24 KB |
| Tomcatv | SPECfp95 [68] | 190 | 33x33-element matrices, 5 iterations | <64 KB |
| MXM | NAS kernels [5] | 113 | Unrolled matrix multiply, 2 iterations | 448 KB |
| CHOLSKY | NAS kernels | 156 | Cholesky matrix decomposition | 724 KB |
| VPENTA | NAS kernels | 199 | Invert three pentadiagonals simultaneously | 128 KB |
| Qsort | Quicksort sorting algorithm [14] | 58 | Quicksort | 128 KB |
Simulation Results
[Figure: performance of MIPS, DEAP, CAPP, and HiDISC on LLL3, Tomcatv, Vpenta, and Cholsky as main memory latency varies from 0 to 200 cycles]
VLSI Layout Overhead (I) • Goal: assess the cost effectiveness of the HiDISC architecture • Cache has become a major portion of the chip area • Methodology: extrapolated a HiDISC VLSI layout from the MIPS R10000 processor (0.35 μm, 1996) • The area overhead of HiDISC is extrapolated to be 11.3% over a comparable MIPS processor • The benchmarks should be run again using these parameters and new memory architectures
The Flexi-DISC • Fundamental characteristics: • Inherently highly dynamic at execution time • Dynamically reconfigurable central computation kernel (CK) • Multiple levels of caching and processing around the CK, with adjustable prefetching • Multiple processors on a chip, providing flexible adaptation between multiple- and single-processor operation and horizontal sharing of the existing resources
The Flexi-DISC • Partitioning of the Computation Kernel • The CK can be allocated to different portions of one application or to different applications • The CK requires the next ring out to be partitioned accordingly to feed it with data • The variety of target applications makes memory accesses unpredictable • Identical processing units for the outer rings • Highly efficient dynamic partitioning of the resources and their run-time allocation can be achieved
Multiple HiDISC: McDISC • Problem: All extant, large-scale multiprocessors perform poorly when faced with a tightly-coupled parallel program. • Reason: Extant machines have a long latency when communication is needed between nodes. This long latency kills performance when executing tightly-coupled programs. (Note that multi-threading à la Tera does not help when there are dependencies.) • The McDISC solution: Provide the network interface processor (NIP) with a programmable processor to execute not only OS code (e.g. Stanford Flash), but user code, generated by the compiler. • Advantage: The NIP, executing user code, fetches data before it is needed by the node processors, eliminating the network fetch latency most of the time. • Result: Fast execution (speedup) of tightly-coupled parallel programs.
The McDISC System: Memory-Centered Distributed Instruction Set Computer
Summary • A processor for each level of the memory hierarchy • Adaptive memory hierarchy management • Reduces memory latency for systems with high memory bandwidths (PIMs, RAMBUS) • 2x speedup for scientific benchmarks • 3x speedup for matrix decomposition/substitution (Cholesky) • 7x speedup for matrix multiply (MXM) (similar results expected for ATR/SLD)
Outline • The problem: the memory gap • Simultaneous Multithreading • Decoupled Architectures • Memory Technology • Processor-In-Memory
Memory Technology • New DRAM technologies • DDR DRAM, SLDRAM and DRDRAM • Most DRAM technologies achieve higher bandwidth • Integrating memory and processor on a single chip (PIM and IRAM) • Bandwidth and memory access latency sharply improve
New Memory Technologies (Cont.) • Rambus DRAM (RDRAM) • A memory interleaving system integrated onto a single memory chip • Supports four outstanding requests with a pipelined microarchitecture • Operates at much higher frequencies than SDRAM • Direct Rambus DRAM (DRDRAM) • Direct control of all row and column resources concurrently with data transfer operations • Current DRDRAM can achieve 1.6 Gbytes/sec of bandwidth by transferring on both clock edges (the arithmetic behind this figure is sketched below)
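For reference, the 1.6 Gbytes/sec figure is consistent with the usual PC800 Direct Rambus parameters, assuming a 16-bit (2-byte) channel clocked at 400 MHz and transferring on both clock edges:

```latex
% Back-of-the-envelope check of the quoted DRDRAM peak bandwidth
% (channel width and clock are standard PC800 values, not from the slide).
\[
400\ \text{MHz} \times 2\ \tfrac{\text{transfers}}{\text{cycle}}
      \times 2\ \tfrac{\text{bytes}}{\text{transfer}}
  = 1.6\ \text{Gbytes/sec}
\]
```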
Intelligent RAM (IRAM) • Merges processor and memory technology • All memory accesses remain within a single chip • Bandwidth can be as high as 100 to 200 Gbytes/sec • Access latency is less than 20 ns • A good solution for data-intensive streaming applications
Vector IRAM • A cost-effective system • Incorporates vector processing units and the memory system on a single chip • Beneficial for multimedia applications with critical DSP features • Good energy efficiency • Attractive for future mobile computing processors
Outline • The problem: the memory gap • Simultaneous Multithreading • Decoupled Architectures • Memory Technology • Processor-In-Memory
Overview of the System • Proposed DCS (Data-intensive Computing System) Architecture
DCS System (Cont'd) • Programming • Different from the conventional programming model • Applications are divided into two separate sections • Software: executed by the host processor • Hardware: executed by the CMP • The programmer must use CMP instructions • CMP • Several CMPs can be connected to the system bus • Variable CMP size and configuration, depending on the amount and complexity of the job it has to handle • Variable size, function, and location of the logic inside the CMP to better handle the application • Memory, coprocessors, I/O
CMP Architecture • CMP (Computational Memory Processor) Architecture • The heart of our work • Responsible for executing the core operations of data-intensive applications • Attached to the system bus • CMP instructions are encapsulated in normal memory operations • Consists of many ACME (Application-specific Computational Memory Element) cells interconnected through dedicated communication links • CMC (Computing Memory Cluster) • A small number of ACME cells are put together to form a CMC • The network connecting the CMCs is separate from the memory decoder
ACME Architecture • ACME (Application-specific Computational Memory Element) Architecture • ACME memory, configuration cache, CE (Computing Element), and FSM • The CE is the reconfigurable computing unit and consists of many CCs (Computing Cells) • The FSM governs the overall execution of the ACME
Synchronization and Interface • Three different kinds of communication • Host processor with the CMP (and eventually with each ACME) • Done through synchronization variables (specific memory locations) located inside the memory of each ACME cell (a sketch of this handshake appears below) • Example: start and end signals for operations, and CMP instructions for each ACME • ACME to ACME • Two different approaches • Host mediated: simple, but not practical for frequent communication • Distributed mediated approach: expensive and complex, but efficient • CMP to CMP
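A hedged sketch of what the host-to-ACME handshake through synchronization variables could look like; the mailbox layout, the fixed address, and the field names are all assumptions for illustration, not part of the DCS specification.

```c
/* Illustrative only: host-side synchronization with one ACME cell through
 * memory-mapped "synchronization variables", as described above.  The
 * address and struct layout are hypothetical. */
#include <stdint.h>

typedef struct {
    volatile uint32_t start;   /* host sets to 1 to launch the operation */
    volatile uint32_t done;    /* ACME sets to 1 when the operation ends */
    volatile uint32_t command; /* encoded CMP instruction for this ACME  */
} acme_mailbox_t;

/* Hypothetical fixed location inside the ACME's memory region. */
#define ACME0_MAILBOX ((acme_mailbox_t *)0x80000000u)

static void run_on_acme(uint32_t cmd)
{
    acme_mailbox_t *mb = ACME0_MAILBOX;
    mb->command = cmd;        /* CMP instruction carried by a normal store */
    mb->done    = 0;
    mb->start   = 1;          /* start signal for the operation            */
    while (mb->done == 0)     /* wait for the end signal                   */
        ;                     /* (busy-wait shown for brevity only)        */
}
```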
Benefits of the Paradigm • All the benefits of being a PIM • Increased bandwidth and reduced latency • Faster computation • Parallel execution among many ACMEs • Effective use of the full memory bandwidth • Efficient co-existence of software and hardware • More parallel execution inside the ACMEs by configuring the structure to suit each application • Scalability
Implementation of the CMP • Projection of how our CMP could be implemented • According to the 2000 edition of the ITRS (International Technology Roadmap for Semiconductors), in the year 2008: • A high-end MPU with 1.381 billion transistors will be in production in a 0.06 um technology on a 427 mm² die • If half of the die is allocated to memory, 8.13 Gbits of storage will be available, along with 690 million transistors for logic • There can be 2048 ACME cells, each with 512 Kbytes of memory and 315K transistors for logic, control, and everything else inside the ACME, with the rest of the resources (36M transistors) for internal interconnection (the arithmetic is checked below)
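Checking that the quoted ACME budget roughly adds up (our arithmetic; the memory/logic/interconnect split is taken from the slide):

```latex
% Memory: 2048 ACME cells x 512 KB each = 1 GB of on-chip storage, i.e.
% about 8.6e9 bits, in line with the ~8.13 Gbits quoted above.
\[
2048 \times 512\ \text{KB} \times 8\ \tfrac{\text{bits}}{\text{byte}}
  \approx 8.6 \times 10^{9}\ \text{bits}
\]
% Logic: 2048 cells x 315K transistors plus 36M for interconnect fits
% within the 690M transistors left over for logic.
\[
2048 \times 315\,\text{K} + 36\,\text{M} \approx 681\,\text{M} < 690\,\text{M}
\]
```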
Motion Estimation of MPEG • Finds the motion vector for each macroblock in the frame • Absorbs about 70% of the total MPEG execution time • A huge amount of simple additions, subtractions, and comparisons (a reference kernel is sketched below)
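For reference, a plain C version of the block-matching kernel that dominates this workload: the exhaustive search and the sum-of-absolute-differences (SAD) cost are the standard formulation, with the 8x8 block size and 8-pixel search range taken from the following slides; everything else (names, frame layout) is illustrative.

```c
/* Illustrative block-matching motion estimation (not the ACME mapping).
 * The caller must guarantee that the reference frame is padded so that
 * ref + dy*stride + dx stays in bounds for the whole search window. */
#include <stdlib.h>

static int sad_8x8(const unsigned char *cur, const unsigned char *ref, int stride)
{
    int sad = 0;
    for (int y = 0; y < 8; ++y)
        for (int x = 0; x < 8; ++x)
            sad += abs((int)cur[y * stride + x] - (int)ref[y * stride + x]);
    return sad;
}

/* Exhaustive search over a +/-8 pixel window: 17*17 = 289 candidates. */
static void motion_search(const unsigned char *cur, const unsigned char *ref,
                          int stride, int *best_dx, int *best_dy)
{
    int best = 1 << 30;
    *best_dx = 0;
    *best_dy = 0;
    for (int dy = -8; dy <= 8; ++dy)
        for (int dx = -8; dx <= 8; ++dx) {
            int cost = sad_8x8(cur, ref + dy * stride + dx, stride);
            if (cost < best) { best = cost; *best_dx = dx; *best_dy = dy; }
        }
}
```

With an 8-pixel displacement there are on the order of a few hundred candidate positions, each requiring 64 absolute differences and a comparison, so the 276-cycle figure quoted two slides below implies that the ACME evaluates many candidates and pixel differences in parallel.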
Example ME execution • One ACME structure finds the motion vector for a macroblock • Executes in a pipelined fashion, reusing the data
Example ME execution • Performance • For an 8x8 macroblock with an 8-pixel displacement • 276 clock cycles to find the motion vector for one macroblock • Performance comparison with other architectures