An Efficient Block Assignment Policy in Hadoop Distributed File System for Multimedia Data Processing

Cheolgi KIM, Daechul LEE, Jaehyun LEE, Jaehwan LEE

Publication
IEICE TRANSACTIONS on Information and Systems, Vol.E102-D, No.8, pp.1569-1571
Publication Date: 2019/08/01
Online ISSN: 1745-1361
DOI: 10.1587/transinf.2019EDL8016
Type of Manuscript: LETTER
Category: Computer System
Keyword: Hadoop, Hadoop distributed file system, video processing, group of pictures

Summary: 
Hadoop, a distributed processing framework for big data, is now widely used for multimedia processing. However, when video data are processed from the Hadoop distributed file system (HDFS), unnecessary network traffic is generated because the HDFS block slicing policy is inefficient for the picture frames in video files. We propose a new block replication policy to solve this problem and compare the proposed HDFS with the original HDFS through extensive experiments. The proposed HDFS reduces network traffic and increases locality between processing cores and file locations.
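
The abstract does not spell out the policy itself, but the general idea of keeping a group of pictures (GOP) local to a single HDFS block can be sketched. The Java sketch below is an illustration only, not the authors' implementation: it assumes a constant GOP size in bytes (GOP_BYTES) and uses illustrative names (GopAlignedHdfsWriter, TARGET_BLOCK); real video would need the container index to find actual GOP boundaries. It writes a video into HDFS with a per-file block size rounded down to a whole number of GOPs, so no GOP straddles a block boundary, using only the standard FileSystem.create(Path, boolean, int, short, long) overload.

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: write a video file into HDFS with a per-file block size that is a
// whole multiple of an assumed fixed GOP byte size, so that no GOP is split
// across HDFS blocks and a map task never fetches the tail of a GOP remotely.
public class GopAlignedHdfsWriter {

    // Assumption: constant GOP size in bytes; purely illustrative values.
    static final long GOP_BYTES = 2L * 1024 * 1024;
    static final long TARGET_BLOCK = 128L * 1024 * 1024; // typical HDFS block size
    static final short REPLICATION = 3;
    static final int BUFFER = 4096;

    public static void main(String[] args) throws IOException {
        String localVideo = args[0];        // e.g. input.ts
        Path hdfsPath = new Path(args[1]);  // e.g. /videos/input.ts

        // Round the block size down to a multiple of the GOP size so each
        // block holds an integral number of GOPs.
        long blockSize = (TARGET_BLOCK / GOP_BYTES) * GOP_BYTES;

        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        try (InputStream in = new FileInputStream(localVideo);
             FSDataOutputStream out =
                 fs.create(hdfsPath, true, BUFFER, REPLICATION, blockSize)) {
            byte[] buf = new byte[BUFFER];
            int n;
            while ((n = in.read(buf)) > 0) {
                out.write(buf, 0, n);
            }
        }
    }
}

Setting the block size per file at write time, rather than changing the cluster-wide default, keeps the alignment decision local to video files and leaves other workloads untouched; it is one plausible way to approximate the locality goal described in the abstract.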