
An Introduction to Programming with CUDA Paul Richmond


Presentation Transcript


  1. An Introduction to Programming with CUDA Paul Richmond GPUComputing@Sheffield http://gpucomputing.sites.sheffield.ac.uk/

  2. Overview Motivation Introduction to CUDA Kernels CUDA Memory Management

  3. Overview Motivation Introduction to CUDA Kernels CUDA Memory Management

  4. About NVIDIA CUDA • Traditional sequential languages are not suitable for GPUs • GPUs are data NOT task parallel • CUDA allows NVIDIA GPUs to be programmed in C/C++ (also in Fortran) • Language extensions for compute “kernels” • Kernels execute on multiple threads concurrently • API functions provide management (e.g. memory, synchronisation, etc.)

  5. [Diagram: the CPU with its DRAM is connected over the PCIe bus to the NVIDIA GPU with its own GDRAM; the main program code runs on the CPU and launches GPU kernel code on the GPU.]

  6. Stream Computing … • Data set decomposed into a stream of elements • A single computational function (kernel) operates on each element • A thread is the execution of a kernel on one data element • Multiple Streaming Multiprocessor Cores can operate on multiple elements in parallel • Many parallel threads • Suitable for Data Parallel problems

  7. Hardware Model [Diagram: a GPU containing several SMs, each with its own shared memory, all connected to device memory] • NVIDIA GPUs have a 2-level hierarchy • Each Streaming Multiprocessor (SM) has multiple cores • The number of SMs and cores per SM varies

  8. CUDA Software Model [Diagram: a grid of thread blocks, each block containing many threads] • Hardware abstracted as a Grid of Thread Blocks • Blocks map to SMs • Each thread maps onto an SM core • Don’t need to know the hardware characteristics • Oversubscribe and allow the hardware to perform scheduling • More blocks than SMs and more threads than cores • Code is portable across different GPU versions

  9. CUDA Vector Types • CUDA introduces new vector types; the most commonly used is dim3 • dim3 contains a collection of three unsigned integers (x, y, z) dim3 my_xyz(x_value, y_value, z_value); • Values are accessed as members int x = my_xyz.x;
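
A minimal sketch of how dim3 behaves in practice (the variable names are illustrative); components that are not specified default to 1:

    dim3 my_xyz(4, 2, 1);   // x = 4, y = 2, z = 1
    dim3 just_x(128);       // unspecified components default to 1: x = 128, y = 1, z = 1
    int x = my_xyz.x;                             // members read like struct fields
    int total = my_xyz.x * my_xyz.y * my_xyz.z;   // 8 elements described in total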

  10. Special dim3 Vectors • threadIdx • The location of a thread within a block. E.g. (2,1,0) • blockIdx • The location of a block within a grid. E.g. (1,0,0) • blockDim • The dimensions of the blocks. E.g. (3,9,1) • gridDim • The dimensions of the grid. E.g. (3,2,1) Idx values are zero-based indices; Dim values are sizes
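
As a short sketch of how these built-in variables are typically combined (the kernel and array names are illustrative), each thread can compute a unique global index for a 1D launch:

    __global__ void whereAmI(int *out)
    {
        // blockIdx.x selects the block, blockDim.x is the number of threads per block,
        // threadIdx.x is this thread's position within its block
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        out[i] = i;   // every thread writes its own unique index
    }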

  11. Analogy • Students arrive at halls of residence to check in • Rooms allocated in order • Unfortunately admission rates are down! • Only half as many rooms as students • Each student can be moved from room i to room 2i so that no-one has a neighbour

  12. Serial Solution • Receptionist performs the following tasks • Asks each student their assigned room number • Works out their new room number • Informs them of their new room number

  13. Parallel Solution “Everybody check your room number. Multiply it by 2 and go to that room”

  14. Overview Motivation Introduction to CUDA Kernels CUDA Memory Management

  15. A Coded Example • Serial solution for (int i = 0; i < N; i++) { result[i] = 2*i; } • We can parallelise this by assigning each iteration to a CUDA thread!

  16. CUDA C Example: Device __global__ void myKernel(int *result) { int i = threadIdx.x; result[i] = 2*i; } • Replace loop with a “kernel” • Use __global__ specifier to indicate it is GPU code • Use the threadIdx dim3 variable to get a unique index • Assuming for simplicity we have only one block • Equivalent to your door number at CUDA Halls of Residence

  17. CUDA C Example: Host • Call the kernel by using the CUDA kernel launch syntax • kernel<<<GRID OF BLOCKS, BLOCK OF THREADS>>>(arguments); dim3 blocksPerGrid(1,1,1); //use only one block dim3 threadsPerBlock(N,1,1); //use N threads in the block myKernel<<<blocksPerGrid, threadsPerBlock>>>(result);

  18. CUDA C Example: Host • Only one block will give poor performance • a block gets allocated to a single SM! • Solution: Use multiple blocks dim3 blocksPerGrid(N/256,1,1); //assumes 256 divides N exactly dim3 threadsPerBlock(256,1,1); //256 threads in the block myKernel<<<blocksPerGrid, threadsPerBlock>>>(result);
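
If N is not an exact multiple of 256, a common pattern (sketched here; the extra N parameter is an addition for illustration, not part of the original slides) is to round the block count up and guard out-of-range threads inside the kernel:

    __global__ void myKernel(int *result, int N)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < N)                 // surplus threads in the last block do nothing
            result[i] = 2 * i;
    }

    // Host side: ceiling division ensures every element is covered
    dim3 threadsPerBlock(256, 1, 1);
    dim3 blocksPerGrid((N + 255) / 256, 1, 1);
    myKernel<<<blocksPerGrid, threadsPerBlock>>>(result, N);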

  19. Vector Addition Example //Kernel Code __global__ void vectorAdd(float *a, float *b, float *c) { int i = blockIdx.x * blockDim.x + threadIdx.x; c[i] = a[i] + b[i]; } //Host Code ... dim3 blocksPerGrid(N/256,1,1); //assuming 256 divides N exactly dim3 threadsPerBlock(256,1,1); vectorAdd<<<blocksPerGrid, threadsPerBlock>>>(a, b, c);

  20. A 2D Matrix Addition Example //Device Code __global__ void matrixAdd(float a[N][N], float b[N][N], float c[N][N]) { int j = blockIdx.x * blockDim.x + threadIdx.x; int i = blockIdx.y * blockDim.y + threadIdx.y; c[i][j] = a[i][j] + b[i][j]; } //Host Code ... dim3 blocksPerGrid(N/16,N/16,1); // (N/16)x(N/16) blocks/grid (2D) dim3 threadsPerBlock(16,16,1); // 16x16=256 threads/block (2D) matrixAdd<<<blocksPerGrid, threadsPerBlock>>>(a, b, c);
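
The float a[N][N] parameters above require N to be a compile-time constant. A common alternative, shown here as a sketch rather than as part of the original example, is to pass flat 1D pointers and compute the row-major index by hand:

    __global__ void matrixAddFlat(const float *a, const float *b, float *c, int width)
    {
        int j = blockIdx.x * blockDim.x + threadIdx.x;   // column
        int i = blockIdx.y * blockDim.y + threadIdx.y;   // row
        int idx = i * width + j;                         // row-major 1D index
        c[idx] = a[idx] + b[idx];
    }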

  21. Overview Motivation Introduction to CUDA Kernels CUDA Memory Management

  22. Memory Management • GPU has separate dedicated memory from the host CPU • Data accessed in kernels must be in GPU memory • Data must be explicitly copied and transferred • cudaMalloc() is used to allocate memory on the GPU • cudaFree() releases memory float *a; cudaMalloc(&a, N*sizeof(float)); ... cudaFree(a);
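
CUDA API calls return a cudaError_t, so a hedged sketch of the same allocation with basic error checking might look like this (the handling style is illustrative):

    #include <cstdio>

    float *a;
    cudaError_t err = cudaMalloc(&a, N * sizeof(float));
    if (err != cudaSuccess) {
        // cudaGetErrorString converts the error code into readable text
        fprintf(stderr, "cudaMalloc failed: %s\n", cudaGetErrorString(err));
    }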

  23. Memory Copying • Once memory has been allocated we need to copy data to it and from it. • cudaMemcpy() transfers memory from the host to the device or from the device to the host cudaMemcpy(array_device, array_host, N*sizeof(float), cudaMemcpyHostToDevice); cudaMemcpy(array_host, array_device, N*sizeof(float), cudaMemcpyDeviceToHost); • The first argument is always the destination of the transfer • Transfers are relatively slow and should be minimised where possible
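
Putting the allocation and copying together with the earlier vectorAdd launch, a minimal end-to-end host sketch might look like this (h_a, h_b and h_c are assumed to be existing host arrays, N is assumed to be a multiple of 256, and error checking is omitted for brevity):

    size_t bytes = N * sizeof(float);

    float *d_a, *d_b, *d_c;              // device copies of the three vectors
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);

    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);   // destination comes first
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    vectorAdd<<<N / 256, 256>>>(d_a, d_b, d_c);             // launch the kernel

    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);    // copy the result back

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);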

  24. Synchronisation • Kernel calls are non-blocking • Host continues after kernel launch • Overlaps CPU and GPU execution • cudaDeviceSynchronize() (cudaThreadSynchronize() in older code) can be called from the host to block until GPU kernels have completed vectorAdd<<<blocksPerGrid, threadsPerBlock>>>(a, b, c); //do work on host (that doesn’t depend on c) cudaDeviceSynchronize(); //wait for kernel to finish • Standard cudaMemcpy calls are blocking • Non-blocking variants exist
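
As a sketch of the “non-blocking variants” mentioned above (names are illustrative and d_a, d_b, d_c and vectorAdd are assumed from the earlier slides), cudaMemcpyAsync returns immediately; pairing it with a stream and page-locked host memory lets the copy, the kernel and host work overlap:

    float *h_a;
    cudaMallocHost(&h_a, N * sizeof(float));   // page-locked host memory enables async copies

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Both calls return immediately; the work is queued on 'stream'
    cudaMemcpyAsync(d_a, h_a, N * sizeof(float), cudaMemcpyHostToDevice, stream);
    vectorAdd<<<N / 256, 256, 0, stream>>>(d_a, d_b, d_c);

    cudaStreamSynchronize(stream);             // wait for the queued copy and kernel
    cudaStreamDestroy(stream);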

  25. Synchronisation Between Threads • __syncthreads() can be used within a kernel to synchronise between threads in a block • Threads in the same block can therefore communicate using a shared memory space if (threadIdx.x == 0) array[0]=x; __syncthreads(); if (threadIdx.x == 1) x=array[0]; • It is NOT possible to synchronise between threads in different blocks • A kernel exit does however guarantee synchronisation
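
A minimal sketch of the shared-memory communication described above, written out as a complete kernel (the array size and names are illustrative):

    __global__ void shareExample(int *out, int x)
    {
        __shared__ int array[1];      // shared memory is visible to all threads in the block

        if (threadIdx.x == 0)
            array[0] = x;             // thread 0 writes the value

        __syncthreads();              // every thread waits here until the write is visible

        if (threadIdx.x == 1)
            out[0] = array[0];        // thread 1 can now safely read it
    }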

  26. Compiling a CUDA program CUDA C code is compiled using nvcc, e.g. nvcc -o example example.cu This will compile host AND device code to produce an executable
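
nvcc also accepts a target architecture; for example (the compute capability value is illustrative and should match your GPU):

    nvcc -arch=sm_70 -o example example.cu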

  27. Summary • Traditional languages alone are not sufficient for programming GPUs • CUDA allows NVIDIA GPUs to be programmed using C/C++ • defines language extensions and APIs to enable this • We introduced the key CUDA concepts and gave examples • Kernels for the device • Host API for memory management and kernel launching Now let's try it out…
