Domain Adaptive Cross-Modal Image Retrieval via Modality and Domain Translations

Rintaro YANAGI  Ren TOGO  Takahiro OGAWA  Miki HASEYAMA  

IEICE TRANSACTIONS on Fundamentals of Electronics, Communications and Computer Sciences   Vol.E104-A   No.6   pp.866-875
Publication Date: 2021/06/01
Publicized: 2020/11/30
Online ISSN: 1745-1337
DOI: 10.1587/transfun.2020IMP0011
Type of Manuscript: Special Section PAPER (Special Section on Image Media Quality)
Keyword: cross-modal retrieval, text-to-image generative adversarial network, style transfer, domain adaptation


Various cross-modal retrieval methods that can retrieve images related to a query sentence without text annotations have been proposed. Although these methods achieve a high level of retrieval performance, they have been developed for a single-domain retrieval setting. When the retrieval candidate images come from various domains, their retrieval performance may degrade. To address this problem, we propose a new domain adaptive cross-modal retrieval method. By translating the modality and domain of the query and the candidate images, our method can accurately retrieve desired images in a different-domain retrieval setting. Experimental results on clipart and painting datasets showed that the proposed method achieves better retrieval performance than conventional and state-of-the-art methods.
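The abstract describes ranking candidate images against a query sentence once both have been mapped into a comparable space. The paper's actual pipeline (a text-to-image generative adversarial network for modality translation plus style transfer for domain translation) is not reproduced here; the sketch below only illustrates the final retrieval step, assuming embeddings are already available. The function name `retrieve` and the toy vectors are hypothetical, not from the paper.

```python
import numpy as np

def retrieve(query_vec, candidate_vecs, top_k=3):
    """Rank candidate image embeddings by cosine similarity to a query embedding.

    query_vec: (d,) embedding of the (translated) query.
    candidate_vecs: (n, d) embeddings of the (domain-translated) candidates.
    Returns indices of the top_k most similar candidates.
    """
    q = query_vec / np.linalg.norm(query_vec)
    c = candidate_vecs / np.linalg.norm(candidate_vecs, axis=1, keepdims=True)
    scores = c @ q                      # cosine similarity per candidate
    return np.argsort(-scores)[:top_k]  # highest similarity first

# Toy example: the query embedding is a slightly perturbed copy of
# candidate 2, so candidate 2 should rank first.
rng = np.random.default_rng(0)
candidates = rng.normal(size=(4, 8))
query = candidates[2] + 0.01 * rng.normal(size=8)
print(retrieve(query, candidates, top_k=1))  # → [2]
```

In the different-domain setting the paper targets, the point is that both sides are translated (text to image, source style to target style) before this similarity ranking, rather than comparing raw text features against raw image features.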