Cross-Domain Deep Feature Combination for Bird Species Classification with Audio-Visual Data
Naranchimeg BOLD, Chao ZHANG, Takuya AKASHI
Publication: IEICE TRANSACTIONS on Information and Systems
Vol. E102-D, No. 10, pp. 2033-2042
Publication Date: 2019/10/01
Publicized: 2019/06/27
Online ISSN: 1745-1361
DOI: 10.1587/transinf.2018EDP7383
Type of Manuscript: PAPER
Category: Multimedia Pattern Processing
Keywords: bird species classification, multimodal learning, feature combination, spectrogram feature, convolutional neural networks
Summary:
In the recent decade, many state-of-the-art algorithms for image classification as well as audio classification have achieved notable success with the development of deep convolutional neural networks (CNNs). However, most of these works exploit only a single type of training data. In this paper, we present a study on classifying bird species by exploiting the combination of both visual (image) and audio (sound) data using CNNs, which has been sparsely treated so far. Specifically, we propose CNN-based multimodal learning models with three types of fusion strategies (early, middle, late) to address the issues of combining training data across domains. The advantage of our proposed method lies in the fact that we can utilize CNNs not only to extract features from image and audio data (spectrograms) but also to combine the features across modalities. In the experiment, we train and evaluate the network structures on the comprehensive CUB-200-2011 standard data set combined with our originally collected audio data set with respect to the bird species. We observe that a model which utilizes the combination of both types of data outperforms models trained with only either type of data. We also show that transfer learning can significantly improve classification performance.
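As an illustration of the late-fusion strategy mentioned in the summary, the sketch below combines an image CNN and a spectrogram CNN by concatenating their feature vectors before a shared classifier. It is a minimal PyTorch example under assumed settings: the ResNet-18 image backbone, the small audio CNN, the feature sizes, and the class name LateFusionBirdClassifier are illustrative assumptions, not the exact architecture reported in the paper.

# Hypothetical late-fusion audio-visual classifier (not the paper's exact model).
import torch
import torch.nn as nn
from torchvision import models


class LateFusionBirdClassifier(nn.Module):
    def __init__(self, num_classes: int = 200):
        super().__init__()
        # Image branch: ImageNet-pretrained ResNet-18 as a feature extractor
        # (transfer learning, which the paper reports improves performance).
        image_backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.image_features = nn.Sequential(*list(image_backbone.children())[:-1])

        # Audio branch: a small CNN over log-mel spectrograms (1 x H x W input).
        self.audio_features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )

        # Late fusion: concatenate the two feature vectors, then classify.
        self.classifier = nn.Linear(512 + 64, num_classes)

    def forward(self, image: torch.Tensor, spectrogram: torch.Tensor) -> torch.Tensor:
        img_feat = self.image_features(image).flatten(1)        # (B, 512)
        aud_feat = self.audio_features(spectrogram).flatten(1)  # (B, 64)
        fused = torch.cat([img_feat, aud_feat], dim=1)          # cross-domain feature combination
        return self.classifier(fused)


# Example forward pass with dummy batches.
model = LateFusionBirdClassifier(num_classes=200)
images = torch.randn(4, 3, 224, 224)        # RGB bird images
spectrograms = torch.randn(4, 1, 128, 128)  # log-mel spectrograms of bird sounds
logits = model(images, spectrograms)        # shape (4, 200)

Early and middle fusion would instead merge the modalities at the input or at an intermediate convolutional layer; only the point of concatenation changes in this kind of sketch.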