Hierarchical Progressive Trust Model for Mismatch Removal under Both Rigid and Non-Rigid Transformations

Songlin DU, Takeshi IKENAGA

Publication
IEICE TRANSACTIONS on Fundamentals of Electronics, Communications and Computer Sciences   Vol.E101-A   No.11   pp.1786-1794
Publication Date: 2018/11/01
Online ISSN: 1745-1337
DOI: 10.1587/transfun.E101.A.1786
Type of Manuscript: Special Section PAPER (Special Section on Smart Multimedia & Communication Systems)
Category: Image, Vision
Keyword: visual correspondence, image matching, mismatch removal, hierarchical progressive trust



Summary: 
Accurate visual correspondence is the foundation of many computer-vision applications. Since existing image matching algorithms inevitably generate mismatches, a reliable mismatch-removal algorithm is highly desirable to remove mismatches while preserving true matches. This paper proposes a hierarchical progressive trust (HPT) model to solve this problem. The HPT model first adopts a “trust the most trustworthy ones” strategy to select anchor inliers in its bottom layer, and then progressively propagates trust from the bottom layer to the other layers in a bottom-up way: 1) the bottom layer verifies the anchor inliers under the guidance of local features; 2) the middle layers progressively estimate local transformations and perform local verifications; 3) the top layer estimates a global transformation with an anchor-inlier-guided expectation-maximization (EM) algorithm and performs global verifications. Experimental results show that the proposed HPT model outperforms state-of-the-art mismatch-removal methods under both rigid transformations and non-rigid deformations.
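
To give a concrete picture of the progressive-trust idea, the following minimal Python sketch (not the authors' implementation; the known anchor set, the global affine model, and the pixel tolerance are assumptions made purely for illustration) fits a global transformation to a small set of trusted anchor matches and then verifies all remaining matches against it:

import numpy as np

def fit_affine(src, dst):
    # Least-squares 2-D affine transform mapping src -> dst (both N x 2).
    A = np.hstack([src, np.ones((len(src), 1))])       # N x 3
    T, _, _, _ = np.linalg.lstsq(A, dst, rcond=None)   # 3 x 2
    return T

def verify(src, dst, T, tol=3.0):
    # Keep matches whose reprojection error under T is below tol pixels.
    pred = np.hstack([src, np.ones((len(src), 1))]) @ T
    return np.linalg.norm(pred - dst, axis=1) < tol

# Toy data: an affine scene with 10 injected mismatches.
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, (50, 2))
dst = src @ np.array([[1.0, 0.1], [-0.1, 1.0]]) + np.array([5.0, -3.0])
dst[40:] += rng.uniform(20, 40, (10, 2))

# "Trust the most trustworthy ones": here the anchor set is simply assumed
# known (e.g. obtained from a strict descriptor-ratio test in a real pipeline).
anchors = np.zeros(50, dtype=bool)
anchors[:15] = True

T = fit_affine(src[anchors], dst[anchors])   # global model from anchors only
inliers = verify(src, dst, T)                # verify all matches against it
print(f"{inliers.sum()} of {len(src)} matches kept")

The full HPT model goes further than this sketch: it verifies the anchor inliers with local features in the bottom layer and estimates local transformations in the middle layers before the anchor-inlier-guided EM estimation of the global transformation in the top layer.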