Skeletons and Asynchronous RPC for Embedded Data and Task Parallel Image Processing


IEICE TRANSACTIONS on Information and Systems   Vol.E89-D   No.7   pp.2036-2043
Publication Date: 2006/07/01
Online ISSN: 1745-1361
DOI: 10.1093/ietisy/e89-d.7.2036
Print ISSN: 0916-8532
Type of Manuscript: Special Section PAPER (Special Section on Machine Vision Applications)
Category: Parallel and Distributed Computing
Keywords: design space exploration, heterogeneous architectures, constrained architectures, algorithmic skeletons, remote procedure call, futures, run-time scheduling


Developing embedded parallel image processing applications is usually a very hardware-dependent process, often using the single instruction multiple data (SIMD) paradigm, and requiring deep knowledge of the processors used. Furthermore, the application is tailored to a specific hardware platform, and if the chosen hardware does not meet the requirements, it must be rewritten for a new platform. We have proposed the use of design space exploration [9] to find the most suitable hardware platform for a given application. This requires a hardware-independent program, and we use algorithmic skeletons [5] to achieve this, while exploiting the data parallelism inherent in low-level image processing. However, since different operations run best on different kinds of processors, we need to exploit task parallelism as well. This paper describes how we exploit task parallelism using an asynchronous remote procedure call (RPC) system, optimized for low-memory and sparsely connected systems such as smart cameras. It uses a futures-like model [16] to present a normal imperative C-interface to the user, in which the skeleton calls are implicitly parallelized and pipelined. Simulation provides the task dependency graph and performance numbers for the mapping, which can be done at run time to facilitate data-dependent branching. The result is an easy-to-program, platform-independent framework that shields users from the parallel implementation and mapping of their application, while efficiently utilizing on-chip memory and interconnect bandwidth.