Multi Information Fusion Network for Saliency Quality Assessment

Kai TAN, Qingbo WU, Fanman MENG, Linfeng XU

IEICE TRANSACTIONS on Information and Systems   Vol.E102-D   No.5   pp.1111-1114
Publication Date: 2019/05/01
Publicized: 2019/02/26
Online ISSN: 1745-1361
DOI: 10.1587/transinf.2019EDL8002
Type of Manuscript: LETTER
Category: Image Recognition, Computer Vision
Keywords: saliency quality assessment, multi information, deep convolutional neural network, image content


Saliency quality assessment aims to estimate the objective quality of a saliency map without access to the ground truth. Existing works typically evaluate saliency quality using information from the saliency map alone, assessing its compactness and closedness, while ignoring the image content, which can be used to assess the consistency and completeness of the foreground. In this letter, we propose a novel multi-information fusion network that captures information from both the saliency map and the image content. The key idea is to introduce a siamese module that collects information from the foreground and the background, in order to assess the consistency and completeness of the foreground and the difference between foreground and background. Experiments demonstrate that incorporating image content information significantly boosts the performance of the proposed method. Furthermore, we validate our method on two applications: saliency detection and segmentation. Our method is used to select the optimal saliency map from a set of candidates, and the selected map is fed into a segmentation algorithm to generate a segmentation map. Experimental results verify the effectiveness of our method.
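The letter does not include code, but the fusion idea above can be illustrated with a minimal NumPy sketch: the saliency map splits the image content into foreground and background, two shared-weight ("siamese") branches encode them, a third branch encodes the map itself, and the fused features produce a quality score. The linear-plus-ReLU extractor, random weights, and all function names here are illustrative assumptions standing in for the deep CNN described in the letter.

```python
import numpy as np

def extract_features(x, w):
    # Shared-weight feature extractor (linear map + ReLU);
    # a stand-in for the deep CNN branch in the letter.
    return np.maximum(0.0, x @ w)

def saliency_quality(image, saliency, w_feat, w_score):
    # Split image content via the saliency map (hypothetical formulation).
    fg = (image * saliency).ravel()[None, :]          # foreground content
    bg = (image * (1.0 - saliency)).ravel()[None, :]  # background content
    sal = saliency.ravel()[None, :]                   # the saliency map itself
    # Foreground and background branches share w_feat: weight sharing
    # is what makes the module "siamese".
    fused = np.concatenate([extract_features(fg, w_feat),
                            extract_features(bg, w_feat),
                            extract_features(sal, w_feat)], axis=1)
    logit = fused @ w_score
    return float(1.0 / (1.0 + np.exp(-logit[0, 0])))  # quality score in (0, 1)

# Toy example on a random 8x8 grayscale image and saliency map.
rng = np.random.default_rng(0)
H, W, d = 8, 8, 16
image = rng.random((H, W))
saliency = rng.random((H, W))
w_feat = rng.standard_normal((H * W, d)) * 0.1
w_score = rng.standard_normal((3 * d, 1)) * 0.1
score = saliency_quality(image, saliency, w_feat, w_score)
```

In practice such a scorer would be trained on (saliency map, quality label) pairs; the sketch only shows how foreground, background, and map information are fused into one prediction.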