Top-Down Visual Attention Estimation Using Spatially Localized Activation Based on Linear Separability of Visual Features
Takatsugu HIRAYAMA Toshiya OHIRA Kenji MASE
IEICE TRANSACTIONS on Information and Systems
Publication Date: 2015/12/01
Online ISSN: 1745-1361
Type of Manuscript: PAPER
Category: Image Recognition, Computer Vision
Keywords: human visual attention, visual search, saliency map, activation map, linear separability
Intelligent information systems captivate people's attention. Examples of such systems include driving support vehicles capable of sensing driver state and communication robots capable of interacting with humans. Modeling how people search for visual information is indispensable for designing these kinds of systems. In this paper, we focus on human visual attention, which is closely related to visual search behavior. We propose a computational model that estimates human visual attention during a visual target search task. Existing models estimate visual attention using the ratio between a representative value of a visual feature of the target stimulus and that of the distractors or background. However, these models often perform poorly on difficult search tasks that require a sequential spotlighting process. For such tasks, the linear separability effect of the visual feature distribution should be considered. We therefore introduce this effect into spatially localized activation. Concretely, our top-down model estimates target-specific visual attention using Fisher's variance ratio between the visual feature distribution of a local region in the field of view and that of the target stimulus. We confirm the effectiveness of our computational model through a visual search experiment.
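The core quantity the abstract describes, Fisher's variance ratio between the feature distribution of a local region and that of the target, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the window size, the scalar feature map, and the mapping from the ratio to an activation map are all assumptions made for the example.

```python
import numpy as np

def fisher_variance_ratio(local_feats, target_feats):
    """Fisher's variance ratio (between-class variance over within-class
    variance) for two 1-D feature samples. A small epsilon avoids
    division by zero for constant regions."""
    mu_l, mu_t = local_feats.mean(), target_feats.mean()
    var_l, var_t = local_feats.var(), target_feats.var()
    return (mu_l - mu_t) ** 2 / (var_l + var_t + 1e-8)

def activation_map(feature_map, target_feats, win=5):
    """Slide a window over a scalar feature map and score each location
    by its separability from the target's feature distribution.
    How this score is turned into target-specific attention is an
    assumption here; the paper defines its own activation."""
    h, w = feature_map.shape
    r = win // 2
    act = np.zeros((h, w))
    for y in range(r, h - r):
        for x in range(r, w - r):
            local = feature_map[y - r:y + r + 1, x - r:x + r + 1].ravel()
            act[y, x] = fisher_variance_ratio(local, target_feats)
    return act
```

A region whose features match the target's distribution yields a ratio near zero, while a well-separated (easily distinguishable) region yields a large ratio, which is the sense in which the ratio measures linear separability.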