Multi-Feature Fusion Network for Salient Region Detection

Zheng FANG  Tieyong CAO  Jibin YANG  Meng SUN  

IEICE TRANSACTIONS on Fundamentals of Electronics, Communications and Computer Sciences   Vol.E102-A    No.6    pp.834-841
Publication Date: 2019/06/01
Online ISSN: 1745-1337
DOI: 10.1587/transfun.E102.A.834
Type of Manuscript: PAPER
Category: Image
Keywords: salient region detection, multi-feature fusion network, feature extraction, dense block, end-to-end

Salient region detection is a fundamental problem in computer vision and image processing. Deep learning models outperform traditional approaches but suffer from large parameter counts and slow inference speeds. To address these problems, in this paper we propose the multi-feature fusion network (MFFN), an efficient salient region detection architecture based on a Convolutional Neural Network (CNN). A novel feature extraction structure is designed to obtain feature maps from the CNN. A fusion dense block then fuses all low-level and high-level feature maps to derive the salient region result. MFFN is an end-to-end architecture that requires no post-processing. Experiments on benchmark datasets demonstrate that MFFN achieves state-of-the-art performance on salient region detection while requiring far fewer parameters and much less computation time. Ablation experiments demonstrate the effectiveness of each module in MFFN.
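The abstract does not give implementation details of the fusion dense block. As a rough illustration only, the general idea it names — upsampling multi-level CNN feature maps to a common resolution, then fusing them with densely connected 1x1 convolutions into a single-channel saliency map — might be sketched in NumPy as follows. All shapes, layer sizes, and the nearest-neighbor upsampling choice are hypothetical assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def upsample_nn(fmap, factor):
    # Nearest-neighbor upsampling of a (C, H, W) feature map.
    return fmap.repeat(factor, axis=1).repeat(factor, axis=2)

def conv1x1(fmap, weights):
    # 1x1 convolution = per-pixel linear map over channels.
    c, h, w = fmap.shape
    return (weights @ fmap.reshape(c, -1)).reshape(-1, h, w)

# Hypothetical multi-level features from a CNN backbone (C, H, W).
low = rng.standard_normal((16, 32, 32))   # low-level, high resolution
mid = rng.standard_normal((32, 16, 16))   # mid-level
high = rng.standard_normal((64, 8, 8))    # high-level, low resolution

# Bring all levels to one spatial size, concatenate channel-wise.
fused_in = np.concatenate(
    [low, upsample_nn(mid, 2), upsample_nn(high, 4)], axis=0
)  # shape (112, 32, 32)

# "Dense" fusion: each layer also sees the outputs of earlier layers.
w1 = rng.standard_normal((8, 112)) * 0.1
d1 = np.maximum(conv1x1(fused_in, w1), 0)  # ReLU
w2 = rng.standard_normal((8, 120)) * 0.1
d2 = np.maximum(conv1x1(np.concatenate([fused_in, d1], axis=0), w2), 0)

# Final 1x1 conv + sigmoid yields a single-channel saliency map.
w_out = rng.standard_normal((1, 128)) * 0.1
logits = conv1x1(np.concatenate([fused_in, d1, d2], axis=0), w_out)
saliency = 1.0 / (1.0 + np.exp(-logits[0]))
print(saliency.shape)  # (32, 32)
```

Because the network ends in per-pixel sigmoid outputs rather than a hand-crafted refinement step, the whole pipeline stays end-to-end trainable, which is consistent with the abstract's claim that no post-processing is needed.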