
Performance Modeling in GPGPU






Presentation Transcript


  1. Performance Modeling in GPGPU. By Arun Bhandari. Course: HPC. Date: 01/28/12

  2. GPU (Graphics Processing Unit) • High-performance, many-core processors • Originally used only to accelerate certain stages of the graphics pipeline • Performance growth pushed by the game industry

  3. Introduction to GPGPU • Stands for General-Purpose computing on Graphics Processing Units • Also called GPU computing • Use of the GPU for general-purpose computation • A heterogeneous co-processing model: CPU host plus GPU device

  4. Why GPU for computing? • GPU is fast • Massively parallel • High memory bandwidth • Programmable: NVIDIA CUDA, OpenCL • Inexpensive desktop supercomputing • NVIDIA Tesla C1060: ~1 TFLOPS @ ~$1000
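The compute and bandwidth figures above can be combined into a simple back-of-envelope performance model: a kernel's attainable throughput is capped by whichever is lower, peak compute or bandwidth times the kernel's flops-per-byte ratio. The sketch below is roofline-style; the C1060 peak numbers are approximate and not taken from the slides.

```python
# Back-of-envelope roofline-style model: attainable throughput is limited
# by either peak compute or peak memory bandwidth.
# Peak numbers below are approximate C1060 figures, for illustration only.
PEAK_GFLOPS = 933.0   # NVIDIA Tesla C1060, single precision (approx.)
PEAK_GBS = 102.0      # peak memory bandwidth in GB/s (approx.)

def attainable_gflops(arithmetic_intensity):
    """Attainable GFLOP/s for a kernel with the given flops-per-byte ratio."""
    return min(PEAK_GFLOPS, PEAK_GBS * arithmetic_intensity)

# SAXPY (y = a*x + y): 2 flops per element, 12 bytes moved
# (read x and y, write y), so arithmetic intensity is 2/12 flops/byte.
saxpy_ai = 2.0 / 12.0
print(attainable_gflops(saxpy_ai))  # memory-bound: far below the ~933 peak
```

This is why "GPU is fast" needs qualification: a streaming kernel like SAXPY is limited by bandwidth, not by the headline TFLOPS number.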

  5. GPU vs CPU (Computation) [chart not reproduced in transcript]

  6. GPU vs CPU (Bandwidth) [chart not reproduced in transcript]

  7. Applications of GPGPU • MATLAB • Statistical physics • Audio signal processing • Speech processing • Digital image processing • Video processing • Geometric computing • Weather forecasting • Climate research • Bioinformatics • Medical imaging • Database operations • Molecular modeling • Control engineering • Electronic design automation • …and many more

  8. Programming Models • Data-parallel processing • High arithmetic intensity • Coherent data access • GPU programming languages • NVIDIA CUDA • OpenCL
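The data-parallel model above can be sketched in plain Python: every "thread" applies the same kernel to one element, indexed the way CUDA decomposes work into grids, blocks, and threads. The names `saxpy_kernel` and `launch` are illustrative, not a real API, and the sequential loops stand in for what the GPU runs in parallel.

```python
# Sketch of the data-parallel model: each "thread" runs the same kernel
# on one element, indexed like CUDA's blockIdx/blockDim/threadIdx.
def saxpy_kernel(a, x, y, out, block_dim, block_idx, thread_idx):
    i = block_idx * block_dim + thread_idx   # global element index
    if i < len(x):                           # bounds check, as in a CUDA kernel
        out[i] = a * x[i] + y[i]

def launch(n, block_dim, a, x, y):
    out = [0.0] * n
    grid_dim = (n + block_dim - 1) // block_dim  # ceil(n / block_dim) blocks
    for b in range(grid_dim):                # a GPU runs these in parallel
        for t in range(block_dim):
            saxpy_kernel(a, x, y, out, block_dim, b, t)
    return out

print(launch(5, 4, 2.0, [1, 2, 3, 4, 5], [10, 10, 10, 10, 10]))
# [12.0, 14.0, 16.0, 18.0, 20.0]
```

The bounds check matters because the grid is rounded up to whole blocks, so the last block may have threads past the end of the data, exactly as in real CUDA code.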

  9. CUDA vs OpenCL • Conceptually similar: OpenCL work-item = CUDA thread, work-group = block • Similar memory model: global, local/shared memory • Both use a kernel plus a host program • CUDA is highly optimized for NVIDIA GPUs • OpenCL is portable across GPUs and CPUs from many vendors

  10. GPU Optimization • Maximize parallel execution • Use large inputs to keep the device busy • Minimize host-to-device memory transfer overhead • Avoid shared-memory bank conflicts • Use cheaper operations: division costs ~32 cycles vs. ~4 cycles for multiplication, so multiply by 0.5 instead of dividing by 2.0
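The bank-conflict bullet above can be made concrete with a small model of how strided shared-memory indices map to banks. This is a sketch under the assumption of 16 banks (the C1060 generation; later GPUs use 32) and one 32-bit word per bank; `max_conflict` is an illustrative helper, not a CUDA API.

```python
# Shared-memory bank conflicts: each bank serves one request per cycle,
# so threads in a half-warp that hit the same bank are serialized.
# Assumes 16 banks (C1060 era); later GPUs use 32.
def max_conflict(stride, num_threads=16, num_banks=16):
    """Worst-case number of threads mapped to one bank when thread t
    accesses word index t * stride (bank = word index mod num_banks)."""
    hits = {}
    for t in range(num_threads):
        bank = (t * stride) % num_banks
        hits[bank] = hits.get(bank, 0) + 1
    return max(hits.values())

print(max_conflict(stride=1))   # 1  -> conflict-free
print(max_conflict(stride=2))   # 2  -> two-way conflict, 2x slower
print(max_conflict(stride=16))  # 16 -> every thread hits bank 0
```

Unit-stride access is conflict-free because consecutive words live in consecutive banks; power-of-two strides are the classic worst case, which is why padding shared-memory arrays by one element is a common fix.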

  11. Conclusions • GPU computing delivers high performance • Many scientific computing problems are parallelizable • A strong candidate for the future of today's technology • Issues: • Not every problem is suitable for GPUs • Future performance growth of GPU hardware is unclear

  12. Questions?
