
Latency vs. Bandwidth: Which Matters More?

This article discusses the importance of understanding latency and bandwidth in memory systems and their impact on performance. It explores the bottlenecks in memory-intensive applications and presents measurements and analyses on memory bandwidth. The article also highlights the role of PIM systems and the performance of VIRAM in relation to latency and bandwidth.

Presentation Transcript


  1. Latency vs. Bandwidth: Which Matters More? Katherine Yelick, U.C. Berkeley and LBNL. Joint work with: Xiaoye Li, Lenny Oliker, Brian Gaeke, Parry Husbands (LBNL); the Berkeley IRAM group: Dave Patterson, Joe Gebis, Dave Judd, Christoforos Kozyrakis, Sam Williams, …; the Berkeley Bebop group: Jim Demmel, Rich Vuduc, Ben Lee, Rajesh Nishtala, …

  2. Blame the Memory Bus • [Chart: processor-memory performance gap, 1980-2000; µProc performance improves at 60%/yr while DRAM improves at 7%/yr, so the gap grows about 50%/yr. Note: this is latency, not bandwidth.] • Many scientific applications run at less than 10% of hardware peak, even on a single processor • The trend is to blame the memory bus • Is this accurate? • Need to understand bottlenecks to • Design better machines • Design better algorithms • Two parts • Algorithm bottlenecks on microprocessors • Bottlenecks on a PIM system, VIRAM

  3. Memory Intensive Applications • Poor performance is especially problematic for memory-intensive applications • Low ratio of arithmetic operations to memory operations • Irregular memory access patterns • Example: sparse matrix-vector multiply (dominant kernel of NAS CG), which many scientific applications perform in some form • Compute y = y + A*x • Matrix is stored as two main arrays: • Column index array (int) • Value array (floating point) • For each element y[i] compute Σj x[index[j]] * value[j] (see the C sketch below) • Access to x is indirect: irregular and not necessarily in cache • So latency (to x) dominates, right?
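
The kernel this slide describes can be sketched in a few lines of C. This is a minimal illustration assuming a CSR-style layout; the names row_ptr, index, value and the function signature are assumptions, not the code used in the study.

    /* Sparse matrix-vector multiply, y = y + A*x, CSR-style storage (sketch). */
    void spmv(int n, const int *row_ptr, const int *index,
              const double *value, const double *x, double *y)
    {
        for (int i = 0; i < n; i++) {
            double sum = y[i];
            for (int j = row_ptr[i]; j < row_ptr[i + 1]; j++) {
                /* Indexed (gather) access to x: irregular, may or may not hit in cache. */
                sum += value[j] * x[index[j]];
            }
            y[i] = sum;
        }
    }

The value and index arrays stream through memory exactly once, while x is reused through the index array; that distinction is what the performance model on the next slide separates.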

  4. Performance Model is Revealing • A simple analytical model for the sparse matvec kernel • # loads from memory * cost of load + # loads from cache … • Two versions: • Only compulsory misses to the source vector, x • All accesses to x produce a miss to memory • Conclusion • Cache misses to the source vector (memory latency) are not the dominant cost • PAPI measurements confirm this • So bandwidth to the matrix dominates, right?
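
To make the accounting concrete, here is a hedged sketch of such a load-counting model in C. The parameter names and the exact bookkeeping (two streamed arrays for the matrix, a switch between compulsory-miss and always-miss treatment of x) are assumptions in the spirit of the slide, not the authors' model or their measured costs.

    /* Simple load-counting time estimate for sparse matvec (illustrative only). */
    double spmv_time_model(long nnz, long n, double mem_cost, double cache_cost,
                           int x_always_misses)
    {
        /* Matrix value and column-index arrays are streamed from memory once each. */
        double matrix = 2.0 * nnz * mem_cost;
        /* Source vector x: either only compulsory misses (n loads from memory, the
         * rest from cache), or every one of the nnz accesses misses to memory. */
        double x_cost = x_always_misses
            ? (double)nnz * mem_cost
            : (double)n * mem_cost + (double)(nnz - n) * cache_cost;
        /* Destination y: read and written once per row. */
        double y_cost = 2.0 * n * mem_cost;
        return matrix + x_cost + y_cost;
    }

In this sketch the 2*nnz matrix loads are at least as large as the x term under either assumption, which is consistent with the slide's conclusion that latency to x is not the main cost.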

  5. Memory Bandwidth Measurements • Yes, but be careful about how you measure bandwidth • Not a constant

  6. An Architectural Probe • Sqmat is a tunable probe to measure architectures • Stream of small matrices • Square each matrix to some power: computational intensity • The stream may be direct (dense) or indirect (sparse) • If indirect, how frequently is there a non-unit-stride jump? • Parameters: • Matrix size within the stream • Computational intensity • Indirection (yes/no) • # unit strides before a jump (a sketch follows below)
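
A rough idea of what such a probe's inner loop might look like, in C. The structure and names (square, sqmat_probe, idx) are assumptions for illustration; the real Sqmat probe is more heavily parameterized.

    #include <stddef.h>

    #define N 3  /* matrix size within the stream (a probe parameter) */

    /* Square one small N x N matrix in place. */
    static void square(double a[N][N])
    {
        double t[N][N] = {{0}};
        for (int i = 0; i < N; i++)
            for (int k = 0; k < N; k++)
                for (int j = 0; j < N; j++)
                    t[i][j] += a[i][k] * a[k][j];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                a[i][j] = t[i][j];
    }

    /* Walk a stream of matrices, squaring each one 'power' times (computational
     * intensity). If 'indirect' is set, the stream is reached through an index
     * array; building idx with a non-unit-stride jump every S entries gives the
     * irregularity parameter used on the following slides. */
    void sqmat_probe(double (*stream)[N][N], const size_t *idx,
                     size_t num_matrices, int power, int indirect)
    {
        for (size_t m = 0; m < num_matrices; m++) {
            double (*a)[N] = indirect ? stream[idx[m]] : stream[m];
            for (int p = 0; p < power; p++)
                square(a);
        }
    }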

  7. Cost of Indirection • Adding a second load stream for the indexes into the stream has a big effect on some machines • This is truly a bandwidth issue

  8. Cost of Irregularity • Slowdown relative to the previous slide's results • Even a tiny bit of irregularity (1/S) can have a big effect • [Panels: Opteron, Itanium2, Power3, Power4]

  9. What Does This Have to Do with PIMs? • Performance of Sqmat on PIMs and others for 3x3 matrices, squared 10 times (high computational intensity!) • Imagine is much faster for long streams, slower for short ones

  10. VIRAM Overview • Die size: 14.5 mm x 20.0 mm • Technology: IBM SA-27E • 0.18µm CMOS, 6 metal layers • 290 mm2 die area • 225 mm2 for memory/logic • Transistor count: ~130M • 13 MB of DRAM • Power supply • 1.2V for logic, 1.8V for DRAM • Typical power consumption: 2.0 W • 0.5 W (scalar) + 1.0 W (vector) + 0.2 W (DRAM) + 0.3 W (misc) • MIPS scalar core + 4-lane vector unit • Peak vector performance • 1.6/3.2/6.4 Gops w/o multiply-add (64b/32b/16b operations) • 3.2/6.4/12.8 Gops with madd • 1.6 Gflops (single-precision)

  11. Vector IRAM ISA Summary • Scalar: MIPS64 scalar instruction set • Vector ALU: alu ops in .v, .vv, .vs, .sv forms; signed/unsigned integer, single- and double-precision FP • Vector Memory: unit-stride, constant-stride, and indexed loads and stores; signed/unsigned integer • Data widths: 8, 16, 32, 64 bits • ALU operations: integer, floating-point, fixed-point and DSP, convert, logical, vector processing, flag processing • In total: 91 instructions, 660 opcodes

  12. VIRAM Compiler • Based on Cray’s production compiler • Challenges: narrow data types and scalar/vector memory consistency • Advantages relative to media extensions: powerful addressing modes and an ISA independent of datapath width • [Diagram: frontends (C, C++, Fortran95) feed Cray’s PDGCS optimizer, which feeds code generators for T3D/T3E, C90/T90/X1, and SV2/VIRAM]

  13. Compiler and OS Enhancements • Compiler based on Cray PDGCS • Outer-loop vectorization • Strided and indexed vector loads and stores • Vectorization of loops with if statements (see the sketch below) • Full predicated execution of vector instructions using flag registers • Vectorization of reductions and FFTs • Instructions for simple, intra-register permutations • Automatic for reductions, manual (or StreamIt) for FFTs • Vectorization of loops with break statements • Software speculation support for vector loads • OS development • MMU-based virtual memory • OS performance • Dirty and valid bits for registers to reduce context switch overhead
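
As a concrete illustration of the if-statement case, a loop of the following shape can be vectorized with the flag (mask) registers mentioned above: the comparison produces a vector flag and the update executes only where the flag is set. The loop itself is an illustrative example, not one taken from the VIRAM benchmarks.

    /* Conditional update; vectorizable with predicated (masked) vector
     * instructions: compare -> flag register, multiply/add/store under the flag. */
    void thresholded_axpy(int n, const float *x, float *y, float a, float cutoff)
    {
        for (int i = 0; i < n; i++) {
            if (x[i] > cutoff)        /* becomes a vector flag (mask) */
                y[i] += a * x[i];     /* executed only where the flag is set */
        }
    }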

  14. HW Resources Visible to Software • [Figure: on Vector IRAM, main memory, registers, and execution datapaths are visible to software; on a Pentium III they are largely transparent to software] • Software (applications/compiler/OS) can control • Main memory, registers, execution datapaths

  15. VIRAM Chip Statistics

  16. VIRAM Design Statistics

  17. VIRAM Chip • [Die photo: DRAM banks, MIPS core, 4 64-bit vector lanes, I/O] • Taped out to IBM in October ‘02 • Received wafers in June 2003 • Chips were thinned, diced, and packaged • Parts were sent to ISI, who produced test boards

  18. Demonstration System • Based on the MIPS Malta development board: PCI, Ethernet, AMR, IDE, USB, CompactFlash, parallel, serial • VIRAM daughter-card designed at ISI-East: VIRAM processor, Galileo GT64120 chipset, 1 DIMM slot for external DRAM • Software support and OS: monitor utility for debugging, modified version of MIPS Linux

  19. Benchmarks for Scientific Problems • Dense and Sparse Matrix-vector multiplication • Compare to tuned codes on conventional machines • Transitive-closure (small & large data set) • On a dense graph representation • NSA Giga-Updates Per Second (GUPS, 16-bit & 64-bit) • Fetch-and-increment a stream of “random” addresses (see the sketch below) • Sparse matrix-vector product: • Order 10000, #nonzeros 177820 • Computing a histogram • Used for image processing of a 16-bit greyscale image: 1536 x 1536 • 2 algorithms: 64-element sorting kernel; privatization • Also used in sorting • 2D unstructured mesh adaptation • initial grid: 4802 triangles, final grid: 24010
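
For reference, the GUPS kernel mentioned above amounts to a stream of irregular read-modify-write updates; a hedged C sketch follows. The table size and the index generator here are placeholders, not the NSA benchmark's specification.

    #include <stdint.h>

    /* GUPS-style fetch-and-increment over pseudo-random addresses (sketch). */
    void gups(uint64_t *table, uint64_t table_size, uint64_t num_updates)
    {
        uint64_t ran = 1;
        for (uint64_t i = 0; i < num_updates; i++) {
            /* A simple LCG stands in for the benchmark's random-address stream. */
            ran = ran * 6364136223846793005ULL + 1442695040888963407ULL;
            table[ran % table_size] += 1;   /* irregular read-modify-write */
        }
    }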

  20. Sparse MVM Performance • Performance is matrix-dependent: lp matrix • Compiled for VIRAM using the “independent” pragma; sparse column layout • Sparsity-optimized for other machines; sparse row (or blocked row) layout • [Chart: performance in MFLOPS]

  21. Power and Performance on BLAS-2 • 100x100 matrix vector multiplication (column layout) • VIRAM result compiled, others hand-coded or Atlas optimized • VIRAM performance improves with larger matrices • VIRAM power includes on-chip main memory • 8-lane version of VIRAM nearly doubles MFLOPS

  22. Performance Comparison • IRAM designed for media processing • Low power was a higher priority than high performance • IRAM (at 200MHz) is better for apps with sufficient parallelism

  23. Power Efficiency • Same data on a log plot • Comparison includes low power processors (Mobile PIII) • The same picture holds for operations per cycle

  24. Which Problems are Limited by Bandwidth? • What is the bottleneck in each case? • Transitive and GUPS are limited by bandwidth (near 6.4GB/s peak) • SPMV and Mesh limited by address generation and bank conflicts • For Histogram there is insufficient parallelism

  25. Summary of 1-PIM Results • Programmability advantage • All vectorized by the VIRAM compiler (Cray vectorizer) • With restructuring and hints from programmers • Performance advantage • Large on applications limited only by bandwidth • More address generators/sub-banks would help irregular performance • Performance/Power advantage • Over both low power and high performance processors • Both PIM and data parallelism are key

  26. Alternative VIRAM Designs • “VIRAM-4Lane”: 4 lanes, 8 Mbytes, ~190 mm2, 3.2 Gops at 200MHz • “VIRAM-2Lanes”: 2 lanes, 4 Mbytes, ~120 mm2, 1.6 Gops at 200MHz • “VIRAM-Lite”: 1 lane, 2 Mbytes, ~60 mm2, 0.8 Gops at 200MHz

  27. Compiled Multimedia Performance • Single executable for multiple implementations • Linear scaling with number of lanes • Remember, this is a 200MHz, 2W processor • [Charts: floating-point and integer kernels]

  28. Third Party Comparison (I) • [Charts comparing VIRAM, Imagine, PPC-G4, and Pentium III]

  29. Third Party Comparison (II) • [Charts comparing VIRAM, Imagine, PPC-G4, and Pentium III]

  30. Vectors vs. SIMD or VLIW • SIMD • Short, fixed-length vector extensions • Require wide issue or ISA change to scale • They don’t support vector memory accesses • Difficult to compile for • Performance wasted on pack/unpack, shifts, rotates… • VLIW • Architecture for instruction-level parallelism • Orthogonal to vectors for data parallelism • Inefficient for data parallelism • Large code size (3X for IA-64?) • Extra work for software (scheduling more instructions) • Extra work for hardware (decoding more instructions)

  31. Vector vs. Wide-Word SIMD: Example • Vector instruction sets have strided and scatter/gather load/store operations; SIMD extensions load only contiguous memory • Vector instruction sets have an implementation-independent vector length; SIMD extensions change the ISA whenever the hardware datapath width changes • Simple example: conversion from RGB to YUV (thanks to Christoforos Kozyrakis)
      Y = [( 9798*R + 19235*G + 3736*B) / 32768]
      U = [(-4784*R - 9437*G + 4221*B) / 32768] + 128
      V = [(20218*R – 16941*G – 3277*B) / 32768] + 128
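
Before the vector and MMX listings on the next slides, here is a plain scalar C version of the same conversion using the coefficients above. The clamping to 0..255 and the use of integer division for the [.../32768] step are assumptions about the intended rounding, not part of the slide.

    #include <stdint.h>

    static uint8_t clamp8(int v) { return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v); }

    /* Fixed-point RGB -> YUV, one pixel per iteration (scalar reference). */
    void rgb_to_yuv(const uint8_t *r, const uint8_t *g, const uint8_t *b,
                    uint8_t *y, uint8_t *u, uint8_t *v, int n)
    {
        for (int i = 0; i < n; i++) {
            int R = r[i], G = g[i], B = b[i];
            y[i] = clamp8((  9798 * R + 19235 * G +  3736 * B) / 32768);
            u[i] = clamp8((( -4784 * R -  9437 * G +  4221 * B) / 32768) + 128);
            v[i] = clamp8((( 20218 * R - 16941 * G -  3277 * B) / 32768) + 128);
        }
    }

Each iteration is independent, which is exactly the fine-grained data parallelism the vector code below exploits with strided loads and multiply-adds.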

  32. VIRAM Code
      RGBtoYUV:
        vlds.u.b    r_v, r_addr, stride3, addr_inc    # load R
        vlds.u.b    g_v, g_addr, stride3, addr_inc    # load G
        vlds.u.b    b_v, b_addr, stride3, addr_inc    # load B
        xlmul.u.sv  o1_v, t0_s, r_v                   # calculate Y
        xlmadd.u.sv o1_v, t1_s, g_v
        xlmadd.u.sv o1_v, t2_s, b_v
        vsra.vs     o1_v, o1_v, s_s
        xlmul.u.sv  o2_v, t3_s, r_v                   # calculate U
        xlmadd.u.sv o2_v, t4_s, g_v
        xlmadd.u.sv o2_v, t5_s, b_v
        vsra.vs     o2_v, o2_v, s_s
        vadd.sv     o2_v, a_s, o2_v
        xlmul.u.sv  o3_v, t6_s, r_v                   # calculate V
        xlmadd.u.sv o3_v, t7_s, g_v
        xlmadd.u.sv o3_v, t8_s, b_v
        vsra.vs     o3_v, o3_v, s_s
        vadd.sv     o3_v, a_s, o3_v
        vsts.b      o1_v, y_addr, stride3, addr_inc   # store Y
        vsts.b      o2_v, u_addr, stride3, addr_inc   # store U
        vsts.b      o3_v, v_addr, stride3, addr_inc   # store V
        subu        pix_s, pix_s, len_s
        bnez        pix_s, RGBtoYUV

  33. RGBtoYUV: movq mm1, [eax] pxor mm6, mm6 movq mm0, mm1 psrlq mm1, 16 punpcklbw mm0, ZEROS movq mm7, mm1 punpcklbw mm1, ZEROS movq mm2, mm0 pmaddwd mm0, YR0GR movq mm3, mm1 pmaddwd mm1, YBG0B movq mm4, mm2 pmaddwd mm2, UR0GR movq mm5, mm3 pmaddwd mm3, UBG0B punpckhbw mm7, mm6; pmaddwd mm4, VR0GR paddd mm0, mm1 pmaddwd mm5, VBG0B movq mm1, 8[eax] paddd mm2, mm3 movq mm6, mm1 paddd mm4, mm5 movq mm5, mm1 psllq mm1, 32 paddd mm1, mm7 punpckhbw mm6, ZEROS movq mm3, mm1 pmaddwd mm1, YR0GR movq mm7, mm5 pmaddwd mm5, YBG0B psrad mm0, 15 movq TEMP0, mm6 movq mm6, mm3 pmaddwd mm6, UR0GR psrad mm2, 15 paddd mm1, mm5 movq mm5, mm7 pmaddwd mm7, UBG0B psrad mm1, 15 pmaddwd mm3, VR0GR packssdw mm0, mm1 pmaddwd mm5, VBG0B psrad mm4, 15 movq mm1, 16[eax] MMX Code (1)

  34. paddd mm6, mm7 movq mm7, mm1 psrad mm6, 15 paddd mm3, mm5 psllq mm7, 16 movq mm5, mm7 psrad mm3, 15 movq TEMPY, mm0 packssdw mm2, mm6 movq mm0, TEMP0 punpcklbw mm7, ZEROS movq mm6, mm0 movq TEMPU, mm2 psrlq mm0, 32 paddw mm7, mm0 movq mm2, mm6 pmaddwd mm2, YR0GR movq mm0, mm7 pmaddwd mm7, YBG0B packssdw mm4, mm3 add eax, 24 add edx, 8 movq TEMPV, mm4 movq mm4, mm6 pmaddwd mm6, UR0GR movq mm3, mm0 pmaddwd mm0, UBG0B paddd mm2, mm7 pmaddwd mm4, pxor mm7, mm7 pmaddwd mm3, VBG0B punpckhbw mm1, paddd mm0, mm6 movq mm6, mm1 pmaddwd mm6, YBG0B punpckhbw mm5, movq mm7, mm5 paddd mm3, mm4 pmaddwd mm5, YR0GR movq mm4, mm1 pmaddwd mm4, UBG0B psrad mm0, 15 paddd mm0, OFFSETW psrad mm2, 15 paddd mm6, mm5 movq mm5, mm7 MMX Code (2)

  35. pmaddwd mm7, UR0GR psrad mm3, 15 pmaddwd mm1, VBG0B psrad mm6, 15 paddd mm4, OFFSETD packssdw mm2, mm6 pmaddwd mm5, VR0GR paddd mm7, mm4 psrad mm7, 15 movq mm6, TEMPY packssdw mm0, mm7 movq mm4, TEMPU packuswb mm6, mm2 movq mm7, OFFSETB paddd mm1, mm5 paddw mm4, mm7 psrad mm1, 15 movq [ebx], mm6 packuswb mm4, movq mm5, TEMPV packssdw mm3, mm4 paddw mm5, mm7 paddw mm3, mm7 movq [ecx], mm4 packuswb mm5, mm3 add ebx, 8 add ecx, 8 movq [edx], mm5 dec edi jnz RGBtoYUV MMX Code (3)

  36. Summary • Combination of vectors and PIM • Simple execution model for hardware – pushes complexity to the compiler • Low power/footprint/etc. • PIM provides the bandwidth needed by vectors • Vectors hide latency effectively • Programmability • Programmable from a “high-level” language • More compact instruction stream • Works well for: • Applications with fine-grained data parallelism • Memory intensive problems • Both scientific and multimedia applications

  37. The End

  38. Algorithm Space • [Chart: algorithms placed on axes of Reuse vs. Regularity: search, two-sided dense linear algebra, one-sided dense linear algebra, Grobner basis (“symbolic LU”), FFTs, sorting, sparse iterative solvers, sparse direct solvers, asynchronous discrete event simulation]

  39. VIRAM Overview • Die size: 14.5 mm x 20.0 mm • MIPS core (200 MHz) • Single-issue, 8 Kbyte I&D caches • Vector unit (200 MHz) • 32 64b elements per register • 256b datapaths (16b, 32b, 64b ops) • 4 address generation units • Main memory system • 13 MB of on-chip DRAM in 8 banks • 12.8 GBytes/s peak bandwidth • Typical power consumption: 2.0 W • Peak vector performance • 1.6/3.2/6.4 Gops w/o multiply-add • 1.6 Gflops (single-precision) • Fabrication by IBM • Tape-out in O(1 month)

  40. Benchmarks for Scientific Problems • Dense Matrix-vector multiplication • Compare to hand-tuned codes on conventional machines • Transitive-closure (small & large data set) • On a dense graph representation • NSA Giga-Updates Per Second (GUPS, 16-bit & 64-bit) • Fetch-and-increment a stream of “random” addresses • Sparse matrix-vector product: • Order 10000, #nonzeros 177820 • Computing a histogram • Used for image processing of a 16-bit greyscale image: 1536 x 1536 • 2 algorithms: 64-element sorting kernel; privatization • Also used in sorting • 2D unstructured mesh adaptation • initial grid: 4802 triangles, final grid: 24010

  41. Power and Performance on BLAS-2 • 100x100 matrix vector multiplication (column layout) • VIRAM result compiled, others hand-coded or Atlas optimized • VIRAM performance improves with larger matrices • VIRAM power includes on-chip main memory • 8-lane version of VIRAM nearly doubles MFLOPS

  42. Performance Comparison • IRAM designed for media processing • Low power was a higher priority than high performance • IRAM (at 200MHz) is better for apps with sufficient parallelism

  43. Power Efficiency • Huge power/performance advantage in VIRAM from both • PIM technology • Data parallel execution model (compiler-controlled)

  44. Power Efficiency • Same data on a log plot • Comparison includes low power processors (Mobile PIII) • The same picture holds for operations per cycle

  45. Which Problems are Limited by Bandwidth? • What is the bottleneck in each case? • Transitive and GUPS are limited by bandwidth (near 6.4GB/s peak) • SPMV and Mesh limited by address generation and bank conflicts • For Histogram there is insufficient parallelism

  46. Summary of 1-PIM Results • Programmability advantage • All vectorized by the VIRAM compiler (Cray vectorizer) • With restructuring and hints from programmers • Performance advantage • Large on applications limited only by bandwidth • More address generators/sub-banks would help irregular performance • Performance/Power advantage • Over both low power and high performance processors • Both PIM and data parallelism are key

  47. Analysis of a Multi-PIM System • Machine parameters • Floating point performance • PIM-node dependent • Application dependent, not theoretical peak • Amount of memory per processor • Use 1/10th Algorithm data • Communication overhead • Time the processor is busy sending a message • Cannot be overlapped • Communication latency • Time across the network (can be overlapped) • Communication bandwidth • Single node and bisection • Back-of-the-envelope calculations! (a sketch follows below)
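
A hedged sketch of what a back-of-the-envelope model with these parameters might look like, written as C for concreteness. The structure and the additive cost formula are assumptions in the spirit of LogP-style models; they are not the model actually used in the study.

    /* Estimated time for one phase on a multi-PIM machine (illustrative). */
    typedef struct {
        double flop_rate;   /* achievable flop/s per node (application dependent) */
        double overhead;    /* seconds of CPU time per message, not overlappable  */
        double latency;     /* seconds across the network, can be overlapped      */
        double bandwidth;   /* bytes/s per node                                   */
    } machine_t;

    double phase_time(const machine_t *m, double flops,
                      double messages, double bytes, int latency_overlapped)
    {
        double compute = flops / m->flop_rate;
        double comm = messages * m->overhead + bytes / m->bandwidth;
        if (!latency_overlapped)
            comm += messages * m->latency;
        return compute + comm;   /* assumes computation and communication do not overlap */
    }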

  48. Real Data from an Old Machine (T3E) • UPC uses a global address space • Non-blocking remote put/get model • Does not cache remote data

  49. Running Sparse MVM on a Pflop PIM • 1 GHz * 8 pipes * 8 ALUs/Pipe = 64 GFLOPS/node peak • 8 Address generators limit performance to 16 Gflops • 500ns latency, 1 cycle put/get overhead, 100 cycle MP overhead • Programmability differences too: packing vs. global address space
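
A quick check of the arithmetic above, under the assumption that each indexed (gather) access to x needs one address generation and feeds one multiply-add (2 flops); the reasoning is an illustration, not a statement of the study's model.

    #include <stdio.h>

    int main(void)
    {
        double clock = 1e9;                /* 1 GHz */
        double peak  = clock * 8 * 8;      /* 8 pipes x 8 ALUs/pipe = 64 Gflop/s */
        double agen  = clock * 8 * 2;      /* 8 address generators x 2 flops/madd = 16 Gflop/s */
        printf("peak %.0f Gflop/s, address-generator bound %.0f Gflop/s\n",
               peak / 1e9, agen / 1e9);
        return 0;
    }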

  50. Effect of Memory Size • For small memory nodes or smaller problem sizes • Low overhead is more important • For large memory nodes and large problems packing is better
