Dynamic Sample Selection: Theory

Peter GECZY
Shiro USUI

Publication
IEICE TRANSACTIONS on Fundamentals of Electronics, Communications and Computer Sciences, Vol.E81-A, pp.1931-1939
Publication Date: 1998/09/25
Print ISSN: 0916-8508
Category: Neural Networks
Keywords: dynamic sample selection, first order optimization techniques, search direction, convergence speed




Summary: 
Conventional approaches to neural network training do not consider the possibility of selecting training samples dynamically during the learning phase. The network is simply presented with the complete training set at each iteration, which can make learning very costly for large data sets. Moreover, high redundancy among data samples may lead to an ill-conditioned training problem. Ill-conditioning during training causes rank deficiencies of the error and Jacobian matrices, which result in slower convergence or, in the worst case, failure of the algorithm to progress. Rank deficiencies of these essential matrices can be avoided by an appropriate selection of training exemplars at each iteration. This article presents the underlying theoretical grounds for dynamic sample selection (DSS), that is, a mechanism enabling the selection of a subset of the training set at each iteration. The theoretical material is first presented for general objective functions, and then for objective functions satisfying the Lipschitz continuity condition. Furthermore, implementation specifics of DSS for first-order line search techniques are described theoretically.
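As a rough illustration of the idea only (not the paper's construction), the following Python sketch applies a dynamic-sample-selection step inside a plain first-order training loop: at each iteration the gradient is computed from a subset of samples rather than the full, possibly redundant, set. The function names and the largest-residual selection rule are assumptions made for this example.

```python
# Hypothetical sketch of dynamic sample selection (DSS) in a first-order
# training loop. The selection criterion (largest current error) and all
# names here are illustrative assumptions, not the paper's algorithm.

import numpy as np

def select_subset(errors, fraction=0.25):
    """Return indices of the samples with the largest current error
    (an assumed selection rule; the paper treats selection generally)."""
    k = max(1, int(fraction * errors.size))
    return np.argsort(errors)[-k:]

def train_linear_model(X, y, lr=0.1, iters=200, fraction=0.25):
    """Gradient descent on squared error for a linear model y ~ X @ w,
    updating from a dynamically selected sample subset each iteration."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        residuals = X @ w - y                 # per-sample signed error
        idx = select_subset(np.abs(residuals), fraction)
        Xs, rs = X[idx], residuals[idx]       # reduced Jacobian and error
        grad = Xs.T @ rs / idx.size           # gradient on the subset only
        w -= lr * grad                        # first-order update step
    return w

# Usage: heavily redundant data (duplicated rows), the setting where
# training on the full set wastes effort on near-identical samples.
rng = np.random.default_rng(0)
X = np.repeat(rng.normal(size=(50, 3)), 4, axis=0)   # 4x row redundancy
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=X.shape[0])
print("recovered weights:", np.round(train_linear_model(X, y), 2))
```

In this toy setup the subset update touches only a quarter of the data per iteration, which is the cost argument the summary makes; the theoretical conditions under which such subset-based search directions preserve convergence are what the article develops.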