Design and Implementation of a Real-Time Video-Based Rendering System Using a Network Camera Array

Yuichi TAGUCHI, Keita TAKAHASHI, Takeshi NAEMURA

Publication
IEICE TRANSACTIONS on Information and Systems   Vol.E92-D   No.7   pp.1442-1452
Publication Date: 2009/07/01
Online ISSN: 1745-1361
DOI: 10.1587/transinf.E92.D.1442
Print ISSN: 0916-8532
Type of Manuscript: PAPER
Category: Image Processing and Video Processing
Keywords: real-time video-based rendering, light field, camera array, depth estimation, GPGPU

Summary:
We present a real-time video-based rendering system built on a network camera array. The system consists of 64 commodity network cameras connected to a single PC over gigabit Ethernet. To render a high-quality novel view, it estimates a view-dependent per-pixel depth map in real time using a layered representation. The rendering algorithm is implemented entirely on the GPU, which lets the system pipeline the capturing and rendering processes by running the CPU and GPU independently. With QVGA input video, the system renders free-viewpoint video at up to 30 frames per second, depending on the output resolution and the number of depth layers. Experimental results show high-quality images synthesized from various scenes.
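The layered, view-dependent depth estimation described above is in the spirit of a plane sweep: each candidate depth layer induces a disparity for every camera, the camera images are warped accordingly, and the layer with the best photo-consistency (lowest color variance across cameras) is chosen per pixel. The following is a minimal NumPy sketch of that idea, not the authors' GPU implementation; the synthetic 1D-baseline geometry, function name, and wrap-around warping via `np.roll` are illustrative assumptions.

```python
import numpy as np

def plane_sweep_depth(images, baselines, num_layers):
    """Illustrative per-pixel depth selection over discrete depth layers.

    For each candidate layer, each camera image is warped (shifted) by the
    disparity that layer induces for that camera's baseline, and the layer
    is scored by the across-camera color variance. The estimated depth map
    is the layer index with the smallest variance at each pixel.
    """
    h, w = images[0].shape
    cost = np.empty((num_layers, h, w))
    for layer in range(num_layers):
        warped = []
        for img, baseline in zip(images, baselines):
            d = baseline * layer          # horizontal disparity for this camera/layer
            warped.append(np.roll(img, -d, axis=1))  # warp toward the virtual view
        cost[layer] = np.stack(warped).var(axis=0)   # photo-consistency cost
    return cost.argmin(axis=0)            # per-pixel layer index (depth map)

# Synthetic check: three cameras observe a scene lying on one depth layer,
# so each camera sees the same texture shifted by baseline * true_layer.
rng = np.random.default_rng(0)
base = rng.random((16, 32))
true_layer = 2
baselines = [0, 1, 2]
images = [np.roll(base, b * true_layer, axis=1) for b in baselines]
depth = plane_sweep_depth(images, baselines, num_layers=5)
```

On this toy fronto-parallel scene the warped images coincide exactly at the true layer, so the variance cost is zero there and the recovered depth map is constant at `true_layer`; the real system evaluates this cost in fragment shaders so the sweep and the final view synthesis stay on the GPU.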