From Easy to Difficult: A Self-Paced Multi-Task Joint Sparse Representation Method

Lihua GUO  

Publication
IEICE TRANSACTIONS on Information and Systems   Vol.E101-D   No.8   pp.2115-2122
Publication Date: 2018/08/01
Online ISSN: 1745-1361
DOI: 10.1587/transinf.2017EDP7289
Type of Manuscript: PAPER
Category: Image Recognition, Computer Vision
Keywords: multi-task learning, sparse representation, self-paced learning, curriculum learning



Summary: 
Multi-task joint sparse representation (MTJSR) is an efficient multi-task learning (MTL) method that solves several related problems together through a shared sparse representation. Inspired by the human learning mechanism of self-paced learning, in which tasks are gradually learned from easy to difficult, I apply this mechanism to MTJSR and propose a multi-task joint sparse representation with self-paced learning (MTJSR-SP) algorithm. In MTJSR-SP, the self-paced learning mechanism is formulated as a regularizer of the objective function, which is solved by iterative optimization. Compared with traditional MTL methods, MTJSR-SP is more robust to noise and outliers. Experimental results on two synthesized datasets, four datasets from the UCI machine learning repository, the Oxford Flowers dataset, and the Caltech-256 image categorization dataset validate the efficiency of MTJSR-SP.
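The alternating scheme described in the summary — treating self-paced sample selection as a regularizer and solving by iterative optimization — can be illustrated in a simplified single-task setting. The sketch below is not the paper's exact formulation: the least-squares loss, the hard 0/1 weighting rule (select a sample when its loss is below 1/λ), the growth schedule for λ, and all function names are illustrative assumptions.

```python
import numpy as np

def self_paced_weights(losses, lam):
    # Hard self-paced weighting (an assumed common variant):
    # a sample counts as "easy" (weight 1) when its loss is below 1/lam.
    return (losses < 1.0 / lam).astype(float)

def sp_least_squares(X, y, lam0=0.5, growth=1.3, n_outer=10):
    """Self-paced linear regression sketch: alternate between
    (1) selecting easy samples under the current model and
    (2) refitting the model on the selected subset only."""
    n, d = X.shape
    w = np.zeros(d)
    lam = lam0
    for _ in range(n_outer):
        losses = (X @ w - y) ** 2              # per-sample loss
        v = self_paced_weights(losses, lam)    # 0/1 easy-sample mask
        if v.sum() == 0:                       # guard: keep at least half early on
            v = (losses <= np.median(losses)).astype(float)
        # Weighted least squares on the selected samples:
        # solve (X^T V X + eps I) w = X^T V y, with V = diag(v).
        Xv = X * v[:, None]
        w = np.linalg.solve(Xv.T @ X + 1e-6 * np.eye(d), Xv.T @ y)
        lam *= growth                          # admit harder samples next round
    return w
```

Because heavily corrupted samples keep a large loss throughout, they stay outside the selected set, which is the mechanism behind the robustness to noise and outliers claimed in the summary.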