Improved Edge Boxes with Object Saliency and Location Awards
Peijiang KUANG Zhiheng ZHOU Dongcheng WU
IEICE TRANSACTIONS on Information and Systems
Publication Date: 2016/02/01
Online ISSN: 1745-1361
Type of Manuscript: PAPER
Category: Image Recognition, Computer Vision
Keywords: detection proposals, saliency, object location, Edge Boxes
Recently, object-proposal methods have attracted increasing attention from researchers for their utility in avoiding exhaustive sliding-window search over an image. Object-proposal methods are inspired by the idea that objects share common features. Existing object-proposal methods fall into either segmentation-based or engineered categories that rely on low-level features. Among them, Edge Boxes, which scores a bounding box by the number of contours it wholly encloses, achieves state-of-the-art performance. Since Edge Boxes sometimes fails to propose obvious objects in some images, we propose an improved version of it, which we call Improved Edge Boxes, based on two observations. The first observation is that objects possess a property, object saliency, that helps distinguish them from the background; an appropriate way of computing object saliency can help retrieve some of these missed objects. The second observation is that objects tend to appear near the center of an image, so a bounding box located at the center part of the image is more likely to contain an object. Together, these two observations allow us to retrieve more objects and improve recall. Our results show that, given just 5000 proposals, we achieve over 89% object recall, compared with 87% for Edge Boxes, at the challenging overlap threshold of 0.7. Further, we compare our approach with several state-of-the-art approaches and show that it is both more accurate and faster. Finally, comparative pictures are shown to illustrate that our approach finds more objects, and more accurate ones, than Edge Boxes.
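The second observation above suggests re-weighting a proposal's score by how close its bounding box is to the image center. The abstract does not give the paper's formula, so the following is a minimal sketch of such a "location award", assuming a Gaussian falloff with an illustrative bandwidth `sigma`:

```python
import math

def location_award(box, img_w, img_h, sigma=0.35):
    """Return a weight in (0, 1] that is largest when the box centre
    coincides with the image centre.

    The Gaussian form and sigma value are illustrative assumptions,
    not the formula from the paper.
    """
    x1, y1, x2, y2 = box
    # Offset of the box centre from the image centre, normalised
    # by the image dimensions so the weight is scale-invariant.
    dx = ((x1 + x2) / 2.0 - img_w / 2.0) / img_w
    dy = ((y1 + y2) / 2.0 - img_h / 2.0) / img_h
    return math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))
```

A centered box would receive the full weight of 1.0, while a box in an image corner is penalised, so centered proposals rank higher when this weight multiplies the base Edge Boxes score.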