

A GPU Implementation of Dynamic Programming for the Optimal Polygon Triangulation
Yasuaki ITO, Koji NAKANO
Publication
IEICE TRANSACTIONS on Information and Systems
Vol.E96-D No.12 pp.2596-2603
Publication Date: 2013/12/01
Online ISSN: 1745-1361
Print ISSN: 0916-8532
DOI: 10.1587/transinf.E96.D.2596
Type of Manuscript: Special Section PAPER (Special Section on Parallel and Distributed Computing and Networking)
Keyword: dynamic programming, parallel algorithms, coalesced memory access, GPGPU, CUDA
Summary:
This paper presents a GPU (Graphics Processing Unit) implementation of dynamic programming for the optimal polygon triangulation. GPUs can now be used for general-purpose parallel computation: users can develop parallel programs running on GPUs using the CUDA (Compute Unified Device Architecture) programming architecture provided by NVIDIA. The optimal polygon triangulation problem for a convex polygon is an optimization problem that asks for a triangulation with minimum total weight. It is known that this problem for a convex n-gon can be solved by dynamic programming in O(n^{3}) time using a work space of size O(n^{2}). In this paper, we propose an efficient parallel implementation of this O(n^{3})-time algorithm on the GPU. Our implementation uses two new ideas to accelerate the dynamic programming. The first idea (adaptive granularity) is to partition the dynamic programming algorithm into many sequential CUDA kernel calls, and to select the best parameters for the size and the number of blocks for each kernel call. The second idea (sliding and mirroring arrangements) is to arrange the working data for coalesced access to the GPU's global memory, minimizing the memory access overhead. Our implementation using these two ideas solves the optimal polygon triangulation problem for a convex 8192-gon in 5.57 seconds on the NVIDIA GeForce GTX 680, while a conventional CPU implementation runs in 1939.02 seconds. Thus, our GPU implementation attains a speedup factor of 348.02.
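For readers unfamiliar with the underlying recurrence, the following is a minimal sequential Python sketch of the O(n^{3})-time, O(n^{2})-space dynamic programming that the paper parallelizes on the GPU. The triangle weight function here (the triangle's perimeter) is one common textbook choice and is an assumption for illustration; the paper's weight function may differ.

```python
# Sequential O(n^3) dynamic programming for optimal polygon triangulation.
# This is a sketch of the baseline algorithm, not the paper's GPU code.
# Triangle weight w() is taken to be the perimeter (an illustrative choice).
from math import dist, inf

def optimal_triangulation(pts):
    """pts: vertices of a convex polygon, in order around the boundary.
    Returns the minimum total weight over all triangulations."""
    n = len(pts)
    # m[i][j]: minimum cost of triangulating the sub-polygon v_i, ..., v_j;
    # sub-polygons with fewer than 3 vertices cost 0.
    m = [[0.0] * n for _ in range(n)]
    for gap in range(2, n):                 # process sub-polygons by size
        for i in range(n - gap):
            j = i + gap
            m[i][j] = inf
            for k in range(i + 1, j):       # apex of triangle (v_i, v_k, v_j)
                w = (dist(pts[i], pts[k]) +
                     dist(pts[k], pts[j]) +
                     dist(pts[j], pts[i]))
                m[i][j] = min(m[i][j], m[i][k] + m[k][j] + w)
    return m[0][n - 1]
```

Each diagonal of the table m depends only on the diagonals below it, which is what makes the GPU parallelization natural: all entries m[i][i+gap] for a fixed gap can be computed concurrently, with one kernel call per diagonal (or per group of diagonals, under the paper's adaptive-granularity scheme).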

