Exploiting the Task-Pipelined Parallelism of Stream Programs on Many-Core GPUs

Shuai MU, Dongdong LI, Yubei CHEN, Yangdong DENG, Zhihua WANG

Publication
IEICE TRANSACTIONS on Information and Systems, Vol.E96-D, No.10, pp.2194-2207
Publication Date: 2013/10/01
Online ISSN: 1745-1361
DOI: 10.1587/transinf.E96.D.2194
Print ISSN: 0916-8532
Type of Manuscript: PAPER
Category: Computer System
Keywords: GPU, task-pipeline, dynamic scheduling, load balance, L2 cache

Summary: 
By exploiting data-level parallelism, Graphics Processing Units (GPUs) have become a high-throughput, general-purpose computing platform. Many real-world applications, however, especially those following a stream processing pattern, feature interleaved task-pipelined and data parallelism. Current GPUs are ill-equipped for such applications due to underutilization of computing resources and/or excessive off-chip memory traffic. In this paper, we focus on microarchitectural enhancements that enable task-pipelined execution of data-parallel kernels on GPUs. We propose an efficient adaptive dynamic scheduling mechanism and a moderately modified L2 cache design. With minor hardware overhead, our techniques orchestrate task-pipelined and data parallelism in a unified manner. Simulation results from a cycle-accurate simulator running real-world applications show that the proposed GPU microarchitecture improves computing throughput by 18% and reduces overall accesses to off-chip GPU memory by 13%.
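
To illustrate the execution pattern the abstract targets, the following is a minimal software sketch (not the paper's hardware mechanism) of task-pipelined stream processing: two "kernel" stages run concurrently, with the first stage's output streamed to the second through a small bounded buffer rather than a full round trip through off-chip memory. The stage functions and buffer size are illustrative assumptions.

```python
import threading
import queue

def run_pipeline(items):
    """Toy software analogue of task-pipelined stream execution:
    two stages run concurrently; stage 1's output is streamed to
    stage 2 through a small bounded queue (standing in for an
    on-chip buffer between pipelined kernels)."""
    q = queue.Queue(maxsize=4)  # small inter-stage buffer (illustrative size)
    results = []

    def stage1():               # producer stage: e.g. scale each element
        for x in items:
            q.put(x * 2)
        q.put(None)             # end-of-stream marker

    def stage2():               # consumer stage: e.g. offset each element
        while (x := q.get()) is not None:
            results.append(x + 1)

    t1 = threading.Thread(target=stage1)
    t2 = threading.Thread(target=stage2)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results

print(run_pipeline(range(5)))   # [1, 3, 5, 7, 9]
```

Within each stage, the per-element work is data-parallel; across stages, the overlap is pipeline-parallel. The paper's contribution is enabling this overlap directly in GPU microarchitecture, rather than in host-side software as above.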