Dynamic Sample Selection: Implementation

Peter GECZY
Shiro USUI

Publication
IEICE TRANSACTIONS on Fundamentals of Electronics, Communications and Computer Sciences, Vol. E81-A, pp. 1940-1947
Publication Date: 1998/09/25
Print ISSN: 0916-8508
Category: Neural Networks
Keywords: dynamic sample selection, first order optimization techniques, search direction, convergence speed




Summary:
The computational expense of training techniques, caused by the size of the data set, is among the most important factors in machine learning and neural networks. An oversized data set may cause rank deficiencies of the Jacobian matrix, which plays an essential role in training techniques; the training then becomes not only computationally expensive but also ineffective. In [1] the authors introduced the theoretical grounds for dynamic sample selection, which has the potential to eliminate such rank deficiencies. This study addresses the implementation of dynamic sample selection based on the theoretical material presented in [1]. The authors propose a sample selection algorithm that can be incorporated into an arbitrary optimization technique. Several experiments indicate that the algorithm's ability to select a proper set of samples at each iteration of the training is very beneficial. Recently proposed approaches to sample selection work reasonably well when the pattern-weight ratio (the number of training patterns divided by the number of adjustable weights) is close to 1; small improvements can still be detected at pattern-weight ratios of 2 or 3. The dynamic sample selection approach presented in this article can increase the convergence speed of first order optimization techniques used for training MLP networks even at pattern-weight ratios as high as 15, and possibly higher.
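The abstract does not reproduce the authors' selection criterion, so the following is only a minimal sketch of the general idea: at each iteration of a first order training method, re-rank all samples by their current error and take a gradient step on a subset only. The toy data, the one-hidden-layer MLP, the squared-error ranking rule, and the `frac` parameter are all hypothetical choices for illustration, not the algorithm of [1].

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy regression data set (illustration only).
X = rng.standard_normal((200, 4))
y = np.sin(X.sum(axis=1, keepdims=True))

# One-hidden-layer MLP with tanh units.
W1 = rng.standard_normal((4, 8)) * 0.5; b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.5; b2 = np.zeros(1)

def forward(Xb):
    h = np.tanh(Xb @ W1 + b1)
    return h, h @ W2 + b2

lr, frac = 0.05, 0.25   # step size; fraction of samples selected per iteration
for it in range(500):
    # Dynamic sample selection: rank all samples by current squared error
    # and train this iteration only on the worst-fitting subset.
    # (The ranking rule is an assumption, not the paper's criterion.)
    _, pred = forward(X)
    err = ((pred - y) ** 2).ravel()
    k = max(1, int(frac * len(X)))
    idx = np.argsort(err)[-k:]
    Xb, yb = X[idx], y[idx]

    # Plain first order (gradient descent) step on the selected subset.
    h, out = forward(Xb)
    d_out = 2 * (out - yb) / k
    gW2 = h.T @ d_out
    gb2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    gW1 = Xb.T @ d_h
    gb1 = d_h.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("final MSE:", float(((forward(X)[1] - y) ** 2).mean()))
```

Because each step touches only a fraction of the patterns, the per-iteration cost drops, and a well-chosen subset can also keep the effective Jacobian better conditioned than training on the full, possibly redundant, data set.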