
Improving GPU Performance via Large Warps and Two-Level Warp Scheduling


Presentation Transcript


  1. Improving GPU Performance via Large Warps and Two-Level Warp Scheduling Veynu Narasiman (The University of Texas at Austin), Michael Shebanow (NVIDIA), Chang Joo Lee (Intel), Rustam Miftakhutdinov (The University of Texas at Austin), Onur Mutlu (Carnegie Mellon University), Yale N. Patt (The University of Texas at Austin). MICRO-44, December 6th, 2011, Porto Alegre, Brazil

  2. Rise of GPU Computing • GPUs have become a popular platform for general purpose applications • New Programming Models • CUDA • ATI Stream Technology • OpenCL • Order of magnitude speedup over single-threaded CPU

  3. How GPUs Exploit Parallelism • Multiple GPU cores (i.e., Streaming Multiprocessors) • Focus on a single GPU core • Exploit parallelism in 2 major ways: • Threads grouped into warps • Single PC per warp • Warps executed in SIMD fashion • Multiple warps concurrently executed • Round-robin scheduling • Helps hide long latencies
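
A minimal sketch of this execution model (the Warp record and round_robin_pick helper are illustrative, not the paper's simulator): each warp shares a single PC and a per-thread active mask, and a round-robin scheduler picks the next ready warp to issue across the SIMD lanes.

```python
# Minimal sketch, not the paper's simulator: a warp shares one PC and an
# active mask across its threads; a round-robin scheduler picks the next
# warp that is not stalled on a long-latency operation.
from dataclasses import dataclass
from typing import List

SIMD_WIDTH = 32

@dataclass
class Warp:
    pc: int                   # single PC shared by all threads in the warp
    active_mask: List[bool]   # one bit per thread
    stalled: bool = False     # e.g., waiting on a memory request

def round_robin_pick(warps: List[Warp], last_issued: int) -> int:
    """Return the index of the next non-stalled warp after `last_issued`,
    or -1 if every warp is stalled (the core idles this cycle)."""
    n = len(warps)
    for offset in range(1, n + 1):
        idx = (last_issued + offset) % n
        if not warps[idx].stalled:
            return idx
    return -1
```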

  4. The Problem • Despite these techniques, computational resources can still be underutilized • Two reasons for this: • Branch divergence • Long latency operations

  5. Branch Divergence [Figure: a four-thread warp with active mask 1111 reaches a divergent branch at A; the taken path B executes with mask 1001, the not-taken path C with mask 0110, and the two paths reconverge at D with the full mask 1111. Each divergence stack entry holds an Execute PC, a Reconverge PC, and an Active Mask.]
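
A minimal sketch of the divergence (reconvergence) stack behind this example, assuming the common SIMT formulation in which the top-of-stack entry is the one currently executing; the entry layout and the diverge / maybe_reconverge helpers are illustrative, not the paper's hardware:

```python
# Minimal sketch of a SIMT reconvergence stack for the slide's example:
# a 4-thread warp with mask 1111 diverges at branch A (taken -> B, mask 1001;
# not taken -> C, mask 0110) and reconverges at D with mask 1111.
# Each entry is (execute PC, reconvergence PC, active mask); the top of the
# stack is the path that executes next.

def diverge(stack, taken_pc, not_taken_pc, reconv_pc, taken_mask, not_taken_mask):
    """Split the executing (top-of-stack) entry at a divergent branch."""
    _, _, full_mask = stack.pop()
    stack.append((reconv_pc, reconv_pc, full_mask))          # reconvergence entry
    stack.append((not_taken_pc, reconv_pc, not_taken_mask))  # executed second
    stack.append((taken_pc, reconv_pc, taken_mask))          # executed first

def maybe_reconverge(stack, current_pc):
    """Pop the top entry once its path reaches its reconvergence PC."""
    if len(stack) > 1 and current_pc == stack[-1][1]:
        stack.pop()

# Walk through the slide's example.
stack = [('A', None, 0b1111)]                  # whole warp executing at A
diverge(stack, 'B', 'C', 'D', 0b1001, 0b0110)  # divergent branch at A
maybe_reconverge(stack, 'D')   # B's path (mask 1001) reaches D; C's path is next
maybe_reconverge(stack, 'D')   # C's path (mask 0110) reaches D
print(stack)                   # [('D', 'D', 15)], i.e., execute D with full mask 1111
```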

  6. Long Latency Operations [Figure: execution timeline for round-robin scheduling with 16 total warps. All warps compute in lockstep, then all of them send requests to the memory system (Req Warp 0 through Req Warp 15) at nearly the same time, so the core sits idle until the requests return and all warps compute again.]

  7. [Figure: computational resource utilization of the baseline GPU core (32 warps, 32 threads per warp, SIMD width = 32, round-robin scheduling), annotated from "Good" (lanes fully used) to "Bad" (lanes mostly idle).]

  8. Large Warp Microarchitecture (LWM) • Alleviates branch divergence • Fewer, but larger warps • Warp size much greater than SIMD width • Total thread count and SIMD-width stay the same • Dynamically breaks down large warp into sub-warps • Can be executed on existing SIMD pipeline • Rearrange active mask as 2D structure • Number of columns = SIMD width • Search each column for an active thread to create new sub-warp
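
A sketch of the sub-warping step described above, under the stated assumptions (the make_subwarps helper and the toy SIMD width of 4 are illustrative; the evaluated core uses a SIMD width of 32):

```python
# Sketch of dynamic sub-warping: view the large warp's active mask as a 2D
# grid whose columns are SIMD lanes, then repeatedly pick one active thread
# per column to form a fully packed sub-warp.

SIMD_WIDTH = 4  # toy width for illustration; the evaluated core uses 32

def make_subwarps(active_mask):
    """active_mask: one flag per thread of the large warp (length is a
    multiple of SIMD_WIDTH). Returns sub-warps as lists of (lane, thread_id)
    pairs, at most one thread per lane in each sub-warp."""
    rows = len(active_mask) // SIMD_WIDTH
    # Per-lane (column) lists of active thread IDs; a thread's home lane is
    # its thread ID modulo the SIMD width.
    columns = [[r * SIMD_WIDTH + c for r in range(rows)
                if active_mask[r * SIMD_WIDTH + c]]
               for c in range(SIMD_WIDTH)]
    subwarps = []
    while any(columns):
        # Take the next active thread from every non-empty lane.
        subwarps.append([(c, col.pop(0)) for c, col in enumerate(columns) if col])
    return subwarps

# An 8-thread large warp (2 rows x 4 lanes) with active mask 1011 0110 packs
# into two sub-warps instead of two half-empty SIMD groups:
print(make_subwarps([1, 0, 1, 1,
                     0, 1, 1, 0]))  # [[(0, 0), (1, 5), (2, 2), (3, 3)], [(2, 6)]]
```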

  9. Large Warp Microarchitecture Example [Figure: in the decode stage, the large warp's active mask is laid out as a 2D grid with one column per SIMD lane. Each cycle, one active bit is selected from every column to build a fully packed sub-warp mask (Sub-warp 0, Sub-warp 1, Sub-warp 2, ...), repeating until no active bits remain.]

  10. More Large Warp Microarchitecture • Divergence stack still used • Handled at the large warp level • How large should we make the warps? • More threads per warp → more potential for sub-warp creation • Too large a warp size can degrade performance • Re-fetch policy for conditional branches • Must wait till last sub-warp finishes • Optimization for unconditional branch instructions • Don't create multiple sub-warps • Sub-warping always completes in a single cycle

  11. Two Level Round Robin Scheduling • Split warps into equal sized fetch groups • Create initial priority among the fetch groups • Round-robin scheduling among warps in same fetch group • When all warps in the highest priority fetch group are stalled • Rotate fetch group priorities • Highest priority fetch group becomes least • Warps arrive at a stalling point at slightly different points in time • Better overlap of computation and memory latency
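
A sketch of this policy (the TwoLevelScheduler class and the exact rotation trigger are simplifying assumptions, not the paper's hardware): issue round-robin within the highest-priority fetch group, and rotate group priorities when every warp in that group is stalled.

```python
# Sketch of two-level round-robin scheduling: warps are split into
# equal-sized fetch groups; issue round-robin within the highest-priority
# group, and rotate group priorities only when that whole group is stalled.

class TwoLevelScheduler:
    def __init__(self, num_warps, fetch_group_size):
        assert num_warps % fetch_group_size == 0
        self.groups = [list(range(g, g + fetch_group_size))
                       for g in range(0, num_warps, fetch_group_size)]
        self.rr_pointer = [0] * len(self.groups)  # per-group round-robin pointer

    def pick(self, stalled):
        """stalled: per-warp stall flags. Returns a warp ID, or None if all stalled."""
        for _ in range(len(self.groups)):
            group = self.groups[0]               # groups[0] has highest priority
            ptr = self.rr_pointer[0]
            for offset in range(len(group)):     # round-robin within the group
                idx = (ptr + offset) % len(group)
                if not stalled[group[idx]]:
                    self.rr_pointer[0] = (idx + 1) % len(group)
                    return group[idx]
            # Every warp in the highest-priority group is stalled:
            # rotate priorities so this group becomes the lowest.
            self.groups.append(self.groups.pop(0))
            self.rr_pointer.append(self.rr_pointer.pop(0))
        return None

sched = TwoLevelScheduler(num_warps=16, fetch_group_size=8)
print(sched.pick(stalled=[False] * 16))  # 0: warp 0 of the highest-priority group
```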

  12. Round Robin vs Two Level Round Robin [Figure: two execution timelines. Top: round-robin scheduling with 16 total warps; the whole core computes, then all 16 memory requests (Req Warp 0 through Req Warp 15) are outstanding at once while the core idles. Bottom: two-level round-robin scheduling with 2 fetch groups of 8 warps each; while Group 0's requests (Req Warp 0 through Req Warp 7) are in the memory system, Group 1 computes, and vice versa (Req Warp 8 through Req Warp 15), overlapping computation with memory latency and saving cycles relative to pure round-robin.]

  13. More on Two Level Scheduling • What should the fetch group size be? • Enough warps to keep pipeline busy in the absence of long latency stalls • Too small • Uneven progression of warps in the same fetch group • Destroys data locality among warps • Too large • Reduces benefits of two-level scheduling • More warps stall at the same time • Not just for hiding memory latency • Complex instructions (e.g., sine, cosine, sqrt, etc.) • Two-level scheduling allows warps to arrive at such instructions at slightly different points in time

  14. Combining LWM and Two Level Scheduling • 4 large warps, 256 threads each • Fetch group size = 1 large warp • Problematic for applications with few long latency stalls • No stalls → no fetch group priority changes • Single large warp starved • Branch re-fetch policy for large warps → bubbles in pipeline • Timeout-invoked fetch group priority change (sketched below) • 32K instruction timeout period • Alleviates starvation
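
A sketch of the timeout mechanism, building on the TwoLevelScheduler sketch above (counting issued instructions for the 32K timeout is an assumption about the bookkeeping, not the paper's exact hardware):

```python
# Sketch: when LWM and two-level scheduling are combined with a fetch group
# size of one large warp, a 32K-instruction timeout forces a fetch-group
# priority rotation so that a single large warp is not starved when no
# long-latency stall ever triggers a rotation.

class TimeoutTwoLevelScheduler(TwoLevelScheduler):
    TIMEOUT = 32 * 1024  # instructions issued since the last rotation

    def __init__(self, num_warps, fetch_group_size):
        super().__init__(num_warps, fetch_group_size)
        self.issued_since_rotation = 0

    def pick(self, stalled):
        if self.issued_since_rotation >= self.TIMEOUT:
            # Forced rotation: the highest-priority group becomes the lowest.
            self.groups.append(self.groups.pop(0))
            self.rr_pointer.append(self.rr_pointer.pop(0))
            self.issued_since_rotation = 0
        warp = super().pick(stalled)
        if warp is not None:
            self.issued_since_rotation += 1
        return warp
```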

  15. Methodology Simulate single GPU core with 1024 thread contexts divided into 32 warps, each with 32 threads

  16. Overall IPC Results LWM+2Lev improves performance by 19.1% over the baseline and by 11.5% over TBC (Thread Block Compaction)

  17. IPC and Computational Resource Utilization

  18. Conclusion • For maximum performance, the computational resources on GPUs must be effectively utilized • Branch divergence and long latency operations cause them to be underutilized or unused • We proposed two mechanisms to alleviate this • Large Warp Microarchitecture for branch divergence • Two-level scheduling for long latency operations • Improves performance by 19.1% over traditional GPU cores • Increases scope of applications that can run efficiently on a GPU • Questions?
