Multimodal Learning of Geometry-Preserving Binary Codes for Semantic Image Retrieval

Go IRIE, Hiroyuki ARAI, Yukinobu TANIGUCHI

Publication
IEICE TRANSACTIONS on Information and Systems, Vol.E100-D, No.4, pp.600-609
Publication Date: 2017/04/01
Online ISSN: 1745-1361
Type of Manuscript: INVITED PAPER (Special Section on Award-winning Papers)
Keyword: image retrieval, multimodal learning, binary coding

Full Text: Free PDF (572.9 KB)


Summary: 
This paper presents an unsupervised approach to binary feature coding for efficient semantic image retrieval. Although most existing methods aim to preserve the neighborhood structure of the feature space, semantically similar images do not always lie in such neighborhoods; rather, they are distributed along non-linear, low-dimensional manifolds. Moreover, images rarely appear alone on the Internet: they are often surrounded by text such as tags, attributes, and captions, which tends to carry rich semantic information about the images. On the basis of these observations, the approach presented in this paper learns binary codes for semantic image retrieval from multimodal information sources while preserving the essential low-dimensional structure of the data distribution in the Hamming space. Specifically, after uncovering the low-dimensional structure of the data with an unsupervised sparse coding technique, our approach learns a set of linear projections for binary coding by solving an optimization problem designed to jointly preserve, as far as possible, both the extracted data structure and the multimodal correlations between images and texts in the Hamming space. We show that this joint optimization problem can readily be transformed into a generalized eigenproblem that can be solved efficiently. Extensive experiments demonstrate that our method yields significant performance gains over several existing methods.
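
The abstract outlines a three-step recipe: discover low-dimensional structure via sparse coding, learn linear projections by a joint structure-plus-correlation objective reduced to a generalized eigenproblem, and binarize by thresholding the projections. The Python sketch below illustrates that general recipe under stated assumptions; it is not the paper's exact formulation. The sparse-coding affinity graph, the trade-off weight mu, and the CCA-style cross-modal term are illustrative choices, and the function names (sparse_affinity, learn_projections, binarize) are hypothetical.

```python
# A minimal sketch (not the authors' exact method): learn linear projections P
# by solving a generalized eigenproblem that trades off (a) preserving a
# sparse-coding-based neighborhood graph and (b) image-text correlation,
# then binarize features with a sign threshold.
import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import Lasso

def sparse_affinity(X, alpha=0.1):
    """Reconstruct each sample from the others via sparse coding (LASSO);
    the absolute coefficients serve as graph affinities."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.arange(n) != i
        lasso = Lasso(alpha=alpha, max_iter=2000)
        lasso.fit(X[idx].T, X[i])        # express x_i as a sparse combination of the others
        W[i, idx] = np.abs(lasso.coef_)
    return (W + W.T) / 2                 # symmetrize

def learn_projections(X, T, n_bits=32, mu=0.5, eps=1e-6):
    """X: image features (n x d), T: text features (n x d_t).
    Returns a d x n_bits projection matrix for the image modality."""
    X = X - X.mean(0)
    T = T - T.mean(0)
    W = sparse_affinity(X)
    L = np.diag(W.sum(1)) - W            # graph Laplacian of the sparse graph
    # Cross-modal term folded into the image side, CCA-style: C_xt C_tt^{-1} C_tx.
    Cxt = X.T @ T
    Ctt = T.T @ T + eps * np.eye(T.shape[1])
    cross = Cxt @ np.linalg.solve(Ctt, Cxt.T)
    # Minimize structure loss minus mu * cross-modal correlation, under a
    # variance constraint -> generalized eigenproblem A p = lambda B p.
    A = X.T @ L @ X - mu * cross
    B = X.T @ X + eps * np.eye(X.shape[1])
    vals, vecs = eigh(A, B)              # eigenvalues ascending; take the smallest
    return vecs[:, :n_bits]

def binarize(X, P, mean):
    """Map features to binary codes via the learned linear projections;
    'mean' should be the training-set mean reused at query time."""
    return (X - mean) @ P > 0            # boolean codes; pack into bits as needed
```

Because A and B are symmetric and B is positive definite after regularization, scipy's eigh solves the generalized eigenproblem directly, which matches the abstract's point that the joint objective admits an efficient eigen-decomposition rather than iterative optimization.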