
GPU Computing Techniques



Presentation Transcript


  1. GPU Computing Techniques Using CUDA

  2. CUDA & the GPU: [GPU block diagram: Host Interface & GigaThread Engine, Raster Engines, groups of GPU Cores, Memory Controllers, and a Shared L1 Cache]

  3. CUDA & the GPU: [The same block diagram with one group of GPU Cores and one Raster Engine removed]

  4. CUDA & the GPU: [The same block diagram reduced further; the slide marks the remaining components with a question mark]

  5. Optimization Techniques • Areas in which performance gains can be achieved: - Memory Optimization (L1, L2, Global Memory, Shared Memory, etc.) - Increasing Parallelism Between GPU and CPU

  6. Improving GPU Performance: • Global Memory Coalescing • Shared Memory & Bank Conflicts • L1 Cache Performance • GPU-CPU Interaction Optimization

  7. Global Memory Coalescing Using CUDA

  8. What is Global Memory?

  9. Global Memory Bandwidth • Modern DRAMs (dynamic random access memories) use a parallel process to increase their rate of data access. Each time a location is accessed, many consecutive locations that include the requested location are also accessed. Once the access pattern is detected, the data from all of these consecutive locations in the global memory can be transferred to the processor at high speed. [PM01]

  10. Global Memory Bandwidth • When all threads in a warp (32 threads) execute a load instruction, the hardware detects whether they access consecutive global memory locations. • A kernel should therefore arrange its data accesses so that each such request falls on consecutive DRAM locations and can be identified by the hardware. • This lets programmers organize the global memory accesses of threads into a favorable pattern: the same instruction for all threads in a warp accesses consecutive global memory locations.
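As a minimal sketch of this favorable pattern (the kernel name and array arguments are illustrative, not from the slides):

    // Thread i of the grid loads element i, so the 32 threads of a warp read
    // 32 consecutive 32-bit words and the hardware can coalesce the accesses.
    __global__ void coalescedLoad(const float *in, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = 2.0f * in[i];   // consecutive threads, consecutive addresses
    }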

  11. Example - Matrix • Coalesced pattern (B) • Threads in warp 0 read element 0 of columns 0 through 31. • Threads in warp 1 read element 1 of columns 0 through 31. • and so on…

  12. Global Memory Coalescing - Matrix • How these matrix elements are placed into the global memory: • All elements in a row are placed in consecutive locations (row-major order).

  13. Global Memory Coalescing - Matrix • Favorable matrix data access pattern: • The hardware detects that these accesses are to consecutive locations in the global memory.

  14. Global Memory Coalescing - Matrix • Memory layout that is not coalesced:
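To make the two patterns from slides 13 and 14 concrete, here is a hedged sketch that reduces a row-major matrix in both directions; the kernel names and the width/height parameters are illustrative:

    // Row-major matrix, as on slide 12: element (row, col) is m[row * width + col].

    // Coalesced: consecutive threads handle consecutive columns, so on each
    // loop iteration a warp reads one contiguous segment of a row.
    __global__ void sumColumnsCoalesced(const float *m, float *colSum,
                                        int width, int height)
    {
        int col = blockIdx.x * blockDim.x + threadIdx.x;
        if (col < width) {
            float s = 0.0f;
            for (int row = 0; row < height; ++row)
                s += m[row * width + col];     // warp touches consecutive addresses
            colSum[col] = s;
        }
    }

    // Not coalesced: consecutive threads handle consecutive rows, so on each
    // iteration the warp's addresses are 'width' elements apart.
    __global__ void sumRowsStrided(const float *m, float *rowSum,
                                   int width, int height)
    {
        int row = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < height) {
            float s = 0.0f;
            for (int col = 0; col < width; ++col)
                s += m[row * width + col];     // warp touches strided addresses
            rowSum[row] = s;
        }
    }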

  15. Shared Memory & Bank Conflicts Using CUDA

  16. Shared Memory • Because it is on-chip, the shared memory space is much faster than the global memory space. • To achieve high bandwidth, shared memory is divided into equally-sized memory modules, called banks.

  17. Shared Memory and Banks • Maximum amount of shared memory per multiprocessor — 48 KB • In Hydra, 48 KB × 14 (mp) = 672 KB • There are 32 banks, which are organized such that successive 32-bit words are assigned to successive banks • Each bank has a bandwidth of 32 bits per two clock cycles.

  18. Shared Memory Banks • Banks can be accessed simultaneously. • A memory access request made of n addresses that fall in n distinct memory banks can be serviced simultaneously, yielding an overall bandwidth that is n times as high as the bandwidth of a single module. • If n access requests attempt to access the same memory bank, there is a bank conflict (called an n-way bank conflict) → the accesses have to be serialized by the hardware → throughput decreases

  19. Strided Shared Memory Accesses • Left: Linear addressing with a stride of one 32-bit word (no bank conflict) • Middle: Linear addressing with a stride of two 32-bit words (2-way bank conflicts) • Right: Linear addressing with a stride of three 32-bit words (no bank conflict)
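A sketch of the three strides in code, assuming a one-dimensional block of 32 threads (one warp); the array size and kernel name are illustrative. With 32 banks, the 32-bit word at index i maps to bank i % 32:

    __global__ void strideDemo(float *out)
    {
        __shared__ float data[96];                 // 32 threads * max stride of 3
        for (int i = threadIdx.x; i < 96; i += 32)
            data[i] = (float)i;                    // fill the whole array
        __syncthreads();

        float a = data[1 * threadIdx.x]; // stride 1: 32 distinct banks, no conflict
        float b = data[2 * threadIdx.x]; // stride 2: threads t and t+16 hit the
                                         //           same bank -> 2-way conflict
        float c = data[3 * threadIdx.x]; // stride 3: gcd(3, 32) = 1, so again 32
                                         //           distinct banks, no conflict
        out[threadIdx.x] = a + b + c;
    }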

  20. Access Efficiency • Left: Random Access (No Conflict) • Middle: Random Conflict (5-way Bank Conflict) • Right: Bad Approach to Bank Access (32-way Bank Conflict)

  21. Broadcast Access • A 32-bit word can be read and broadcast to several threads simultaneously when servicing one memory read request. • No conflicts result, because the broadcast only involves reading data.

  22. Irregular & Colliding Shared Memory Access • Left: Conflict-free access via random permutation • Middle: Conflict-free access since threads 3, 4, 6, 7, and 9 access the same word within bank 5 • Right: Conflict-free broadcast access (all threads access the same word)

  23. Bank Conflict Example • 8-bit and 16-bit accesses typically generate bank conflicts. For example:

    __shared__ char shared[32];
    char data = shared[BaseIndex + tid];

  • shared[0], shared[1], shared[2], and shared[3], for example, belong to the same bank, because together they occupy a single 32-bit word.

  24. Bank Conflict Example (continued) • No bank conflicts:

    char data = shared[BaseIndex + 4 * tid];

  • Each thread now reads a byte from a different 32-bit word, so the accesses fall into distinct banks.

  25. Effects of L1 Cache Manipulation Using CUDA

  26. What Does L1 Do? • When a memory location is requested, the L1 cache is queried first. If the address is not found (this is called a miss), the L2 cache is queried, and so on, until main memory is accessed. • When a memory request is serviced, the data for that address is populated all the way through the caches, from main memory to L1. • The next access to that address should then be serviced more quickly, as long as the data has not been displaced from the L1 cache.

  27. Why Turn Off L1 Cache? • L1 Caches exist to improve memory request performance by increasing request throughput and minimizing request latency. • Why on earth would you want to disable the L1 Cache? • Can We Disable L2 or L3?

  28. Toggling L1 in GPU • A compiler option is available to disable the L1 cache. • When compiling CUDA code, each mini-program can be compiled with or without the L1 cache enabled. • Benchmark each executable to see whether the code runs faster with or without L1 enabled. • When the main application is compiled, it links with the CUDA mini-executables.
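On Fermi-class GPUs (the generation assumed elsewhere in these slides), one way to toggle this is the ptxas load-caching flag passed through nvcc; the file names below are hypothetical:

    # Default: cache global loads in both L1 and L2 ("cache all")
    nvcc -Xptxas -dlcm=ca kernel.cu -o bench_l1_on

    # Bypass L1 for global loads and cache only in L2 ("cache global")
    nvcc -Xptxas -dlcm=cg kernel.cu -o bench_l1_off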

  29. CPU-GPU Interaction Optimization Using CUDA

  30. CPU-GPU interaction • One of the key optimizations for any GP-GPU application. Cause: • PCI Express bandwidth is much lower than GPU memory bandwidth. • 1.6 to 8 GB/s vs. about 177 GB/s • Problems faced: [Diagram: Host Memory and CPU on the motherboard (about 50 GB/s) connected over PCI Express at 8 GB/s to the GPU and its Global Memory on the graphics card (about 175 GB/s)]

  31. Remedy: CPU-GPU Data Transfers • Minimal Transfer • Keep intermediate data directly on the GPU • Move code to the GPU when doing so reduces data transfers • Group Transfer • One larger transfer rather than multiple smaller transfers (see the sketch below)
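A hedged sketch of the group-transfer idea: two small host arrays are packed into one staging buffer so that a single copy crosses PCI Express. The function and buffer names are made up for illustration:

    #include <cuda_runtime.h>
    #include <stdlib.h>
    #include <string.h>

    // Copy arrays a and b (n floats each) to the device in one transfer
    // instead of two separate cudaMemcpy calls.
    void groupedUpload(const float *a, const float *b, int n, float *d_packed)
    {
        size_t bytes = (size_t)n * sizeof(float);
        float *h_packed = (float *)malloc(2 * bytes);   // host staging buffer
        memcpy(h_packed,     a, bytes);                 // pack a, then b
        memcpy(h_packed + n, b, bytes);
        cudaMemcpy(d_packed, h_packed, 2 * bytes, cudaMemcpyHostToDevice);
        free(h_packed);
    }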

  32. Short-comings In Remedies • Minimal transfer is not applicable to all kinds of GP-GPU applications. • Group transfer does not reduce or hide the CPU-GPU data transfer latency. • Hence there is a need to optimize the data transfers themselves.

  33. Optimizations by CUDA • Pinned (non-pageable) memory optimization • Decreases the time to copy data between CPU and GPU • Optimization through multiple streams • Hides the transfer time by overlapping kernel execution with memory transfers.

  34. Pinned memory • What is pinned or page-locked memory? • Not paged in or out by the OS. • Pinned memory enables • Faster PCI-e copies • Memory copies that are asynchronous with the CPU • Memory copies that are asynchronous with the GPU • Zero-copy • cudaMemcpy(dest, src, size, direction); • Drawback: it reduces the RAM available to the OS (see the allocation sketch below)
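A minimal sketch of allocating and releasing pinned host memory with the CUDA runtime; the buffer size and names are illustrative:

    #include <cuda_runtime.h>

    int main(void)
    {
        const size_t bytes = 1 << 20;                // 1 MB, illustrative
        float *h_pinned = NULL;
        float *d_buf = NULL;

        cudaMallocHost((void **)&h_pinned, bytes);   // page-locked host buffer
        cudaMalloc((void **)&d_buf, bytes);

        // Copies from pinned memory use faster DMA transfers and are the
        // prerequisite for the asynchronous copies shown on slide 39.
        cudaMemcpy(d_buf, h_pinned, bytes, cudaMemcpyHostToDevice);

        cudaFree(d_buf);
        cudaFreeHost(h_pinned);                      // release the pinned buffer
        return 0;
    }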

  35. Concurrent Execution between Host and Device • In order to facilitate concurrent execution between host and device, some function calls are asynchronous • Examples of asynchronous calls • Kernel launches • Device ↔ device memory copies • Host ↔ device memory copies • cudaMemcpyAsync(dest, src, size, direction, stream);

  36. Overlapping executions concern • When is this overlapping useful? • Note that there is an issue with this idea: • The device execution queue is FIFO • This would prevent overlapping execution with data transfer • This issue was addressed by the use of CUDA “streams”

  37. CUDA Streams: Overview • A stream is a sequence of CUDA commands that execute in order • Look at a stream as a queue of GPU operations • One host thread can define multiple CUDA streams • What are the typical operations in a stream? • Invoking a data transfer • Invoking a kernel execution • Handling events

  38. Streams and Asynchronous Calls • Default API • Kernel launches are asynchronous with the CPU. • Memory copies (H2D or D2H) block the CPU thread. • Streams and asynchronous functions provide • Memory copies asynchronous with the CPU • Operations in different streams can be overlapped • A kernel and memory copies in different streams can be overlapped

  39. Overlap of kernel and memory copy using CUDA streams • Requirements • D2H and H2D memcopy from pinned memory • Kernel and memcopy in different, non-zero streams • Code (the copy in stream1 and the kernel in stream2 can potentially overlap):

    cudaStream_t stream1, stream2;
    cudaStreamCreate(&stream1);
    cudaStreamCreate(&stream2);
    cudaMemcpyAsync(dst, src, size, dir, stream1);
    kernel<<<grid, block, 0, stream2>>>(…);
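Expanding the snippet above into a self-contained sketch: the data is split across two streams so that one stream's copies can overlap the other stream's kernel. The kernel, chunk size, and launch configuration are illustrative assumptions:

    #include <cuda_runtime.h>

    __global__ void scale(float *d, int n)             // illustrative kernel
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) d[i] *= 2.0f;
    }

    int main(void)
    {
        const int nStreams = 2, chunk = 1 << 20;
        const size_t bytes = chunk * sizeof(float);

        float *h_data, *d_data;
        cudaMallocHost((void **)&h_data, nStreams * bytes);  // pinned (required)
        cudaMalloc((void **)&d_data, nStreams * bytes);

        cudaStream_t streams[2];
        for (int s = 0; s < nStreams; ++s)
            cudaStreamCreate(&streams[s]);

        // Each stream copies its chunk in, runs the kernel, and copies it back;
        // work queued in one stream may overlap work queued in the other.
        for (int s = 0; s < nStreams; ++s) {
            float *h = h_data + s * chunk;
            float *d = d_data + s * chunk;
            cudaMemcpyAsync(d, h, bytes, cudaMemcpyHostToDevice, streams[s]);
            scale<<<(chunk + 255) / 256, 256, 0, streams[s]>>>(d, chunk);
            cudaMemcpyAsync(h, d, bytes, cudaMemcpyDeviceToHost, streams[s]);
        }
        cudaDeviceSynchronize();

        for (int s = 0; s < nStreams; ++s)
            cudaStreamDestroy(streams[s]);
        cudaFree(d_data);
        cudaFreeHost(h_data);
        return 0;
    }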

  40. In Summary Thus, Therefore, However …

  41. Project Research Goals • Implement and analyze various CUDA/GPU applications that demonstrate the previously discussed techniques and issues, such as: • Shared Memory Coalescing • Bank Access in Shared Memory • Effects of L1 Availability • Explain how and why streams can optimize CPU/GPU data transfers. • Run experiments to determine whether the assumptions match the hypothesized results.

  42. References
  [NV01] NVIDIA CUDA C Programming Guide, Version 4.0, May 2011, Section 5.3.2.3.
  [PM01] David B. Kirk and Wen-mei W. Hwu, Programming Massively Parallel Processors, NVIDIA Corporation, 2010, pp. 103-108.
  [BP01] NVIDIA CUDA C Best Practices Guide, Version 4.0, May 2011, Section 3.2.1, pp. 25-30.
