Corrected Stochastic Dual Coordinate Ascent for Top-k SVM

Yoshihiro HIROHASHI, Tsuyoshi KATO

Publication
IEICE TRANSACTIONS on Information and Systems   Vol.E103-D   No.11   pp.2323-2331
Publication Date: 2020/11/01
Publicized: 2020/08/06
Online ISSN: 1745-1361
DOI: 10.1587/transinf.2019EDP7261
Type of Manuscript: PAPER
Category: Pattern Recognition
Keywords:
top-k SVM, empirical risk minimization, convex optimization, stochastic dual coordinate ascent

Summary: 
The top-k error ratio is currently one of the primary metrics for measuring the accuracy of multi-category classification. The top-k multiclass SVM was designed to minimize the empirical risk based on the top-k error ratio. Two SDCA-based algorithms exist for learning the top-k SVM, and both possess several properties desirable for optimization. However, both suffer from a serious drawback: owing to theoretical imperfections, they fail to attain optimal convergence in most cases. In this study, we correct the SDCA procedure to remove this flaw. As demonstrated through numerical simulations, the corrected SDCA algorithm always achieves optimal convergence, in contrast to the failure of the two existing SDCA-based algorithms. Finally, we present analytical results that clarify the significance of the existing algorithms.
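
For context, one standard formulation of the top-k multiclass SVM (in the style of the top-k hinge loss from the literature; the exact objective studied in the paper may differ in its details) is the regularized empirical risk minimization problem

\[
\min_{W \in \mathbb{R}^{d \times m}} \;
\frac{\lambda}{2}\sum_{c=1}^{m}\lVert w_{c}\rVert^{2}
\;+\;
\frac{1}{n}\sum_{i=1}^{n} \ell_{k}\!\left(W^{\top}x_{i},\, y_{i}\right),
\]

where, for scores $a = W^{\top}x_{i}$ and true label $y$, the top-$k$ hinge loss averages the $k$ largest margin violations,

\[
\ell_{k}(a, y) = \max\!\Bigl\{0,\; \frac{1}{k}\sum_{j=1}^{k} c_{[j]}\Bigr\},
\qquad
c_{j} =
\begin{cases}
1 + a_{j} - a_{y}, & j \neq y,\\
0, & j = y,
\end{cases}
\]

with $c_{[1]} \ge \dots \ge c_{[m]}$ the components of $c$ sorted in decreasing order; this loss serves as a convex surrogate for the top-$k$ error.

The algorithms discussed above instantiate the generic SDCA framework: repeatedly pick a training example at random and maximize the dual objective with respect to that example's dual variables. The following sketch shows this framework for the plain binary hinge-loss SVM, whose per-coordinate subproblem has a closed-form solution (Shalev-Shwartz and Zhang, 2013); it is illustrative only and is not the corrected top-k update proposed in the paper. The function name sdca_hinge and the toy data are our own.

import numpy as np

def sdca_hinge(X, y, lam=0.1, epochs=50, seed=0):
    """Plain SDCA for the binary hinge-loss SVM (illustrative sketch only).

    The paper's subject, a corrected SDCA for the top-k multiclass SVM, solves
    a more involved per-example dual subproblem; here the subproblem is scalar.
    X: (n, d) feature matrix, y: labels in {-1, +1}.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)                 # dual variables, alpha_i * y_i in [0, 1]
    w = np.zeros(d)                     # primal iterate, w = X^T alpha / (lam * n)
    sq_norms = np.einsum("ij,ij->i", X, X)
    for _ in range(epochs):
        for i in rng.permutation(n):
            # Closed-form maximizer of the dual objective in the i-th coordinate.
            margin = 1.0 - y[i] * (X[i] @ w)
            delta = y[i] * np.clip(margin / (sq_norms[i] / (lam * n)) + alpha[i] * y[i],
                                   0.0, 1.0) - alpha[i]
            alpha[i] += delta
            w += (delta / (lam * n)) * X[i]   # keep the primal iterate in sync
    return w, alpha

# Toy usage on synthetic linearly separable data.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    w_true = rng.normal(size=5)
    y = np.sign(X @ w_true)
    w, _ = sdca_hinge(X, y)
    print("training accuracy:", np.mean(np.sign(X @ w) == y))

Each coordinate step of such a method never decreases the dual objective, which is the property that makes the convergence analysis of SDCA-type algorithms tractable.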