A Novel Video Retrieval Method Based on Web Community Extraction Using Features of Video Materials

Yasutaka HATAKEYAMA  Takahiro OGAWA  Satoshi ASAMIZU  Miki HASEYAMA  

Publication
IEICE TRANSACTIONS on Fundamentals of Electronics, Communications and Computer Sciences, Vol. E92-A, No. 8, pp. 1961-1969
Publication Date: 2009/08/01
Online ISSN: 1745-1337
DOI: 10.1587/transfun.E92.A.1961
Print ISSN: 0916-8508
Type of Manuscript: Special Section PAPER (Special Section on Signal Processing)
Category: Image
Keyword: video retrieval, canonical correlation analysis, link analysis, Web community extraction

Summary: 
This paper proposes a novel video retrieval method based on Web community extraction that uses the audio, visual, and textual features of video materials. In the proposed method, canonical correlation analysis is applied to these three features, calculated from the video materials and their Web pages, so that each feature can be transformed into a common variate space. The transformed variates reflect the relationships between the visual, audio, and textual features of the video materials, and the similarity between video materials can be calculated in this common space for each feature. Next, the proposed method introduces the obtained similarities of video materials into the link relationships between their Web pages. Furthermore, by performing link analysis on the resulting weighted link relationships, the method extracts Web communities containing similar topics and provides, for each feature, the degree of attribution of each video material to each Web community. By calculating similarities of the degrees of attribution across the Web communities extracted from the three kinds of features, the desired Web communities are automatically selected. Consequently, by monitoring the degrees of attribution to the obtained Web communities, the proposed method achieves effective video retrieval. Experimental results obtained by applying the proposed method to video materials collected from actual Web pages verify its effectiveness.
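
The sketch below illustrates the flavor of the pipeline summarized above; it is not the authors' implementation. Pairwise CCA from scikit-learn stands in for the paper's canonical correlation analysis over three features (only the visual and textual views are shown), cosine similarity of the shared variates stands in for its similarity measure, and a HITS-style power iteration over similarity-weighted links stands in for its link-analysis-based Web community extraction. All data, dimensions, and parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Illustrative feature matrices for N video materials (all values are assumptions):
# rows = videos, columns = feature dimensions extracted from the videos / Web pages.
rng = np.random.default_rng(0)
N = 100
visual = rng.normal(size=(N, 16))   # e.g. color / motion descriptors
textual = rng.normal(size=(N, 24))  # e.g. term vectors from the surrounding Web pages

# Step 1: project two views into a shared variate space with CCA
# (the paper couples visual, audio and textual features; two views shown here).
cca = CCA(n_components=8)
vis_var, txt_var = cca.fit_transform(visual, textual)

# Step 2: similarity between video materials in the shared variate space
# (cosine similarity is an assumption; the paper defines its own measure).
def cosine_sim(X):
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T

sim = 0.5 * (cosine_sim(vis_var) + cosine_sim(txt_var))

# Step 3: weight the hyperlink relationship between the videos' Web pages
# by the feature similarity (the link matrix here is synthetic).
links = (rng.random((N, N)) < 0.1).astype(float)
weighted_links = links * np.clip(sim, 0.0, None)

# Step 4: a HITS-style power iteration on the weighted links, standing in for
# the paper's Web community extraction; the resulting authority scores play
# the role of degrees of attribution of each video to the dominant community.
authority = np.ones(N)
for _ in range(50):
    hub = weighted_links @ authority
    authority = weighted_links.T @ hub
    authority /= np.linalg.norm(authority)

top = np.argsort(authority)[::-1][:10]
print("Videos most strongly attributed to the dominant community:", top)
```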