Incremental Estimation of Natural Policy Gradient with Relative Importance Weighting
Ryo IWAKI Hiroki YOKOYAMA Minoru ASADA
IEICE TRANSACTIONS on Information and Systems
Publication Date: 2018/09/01
Online ISSN: 1745-1361
Type of Manuscript: PAPER
Category: Artificial Intelligence, Data Mining
Keywords: reinforcement learning, natural policy gradient, adaptive step size
The step size is a parameter of fundamental importance in learning algorithms, particularly for natural policy gradient (NPG) methods. We derive an upper bound on the step size for incremental NPG estimation, and propose an adaptive step size that implements the derived bound. The proposed adaptive step size guarantees that an updated parameter does not overshoot the target, which is achieved by weighting the learning samples according to their relative importance. We also provide tighter upper and lower bounds on the step size, though they are not suitable for incremental learning. We confirm the usefulness of the proposed step size on classical benchmarks. To the best of our knowledge, this is the first adaptive step size method for NPG estimation.
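To make the setting concrete, the sketch below shows a generic incremental natural policy gradient loop on a toy two-armed bandit with a softmax policy, where the step size is adaptively capped so the parameter update cannot grow large in the Fisher norm. This is only an illustrative stand-in: the specific cap used here and the running Fisher estimate are simplified placeholders, not the bound or the relative-importance weighting derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-armed bandit: arm 1 has a higher mean reward than arm 0.
true_means = np.array([0.0, 1.0])
theta = np.zeros(2)           # softmax policy parameters
fisher = np.eye(2) * 1e-2     # running estimate of the Fisher information

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for t in range(2000):
    pi = softmax(theta)
    a = rng.choice(2, p=pi)
    r = true_means[a] + rng.normal(scale=0.1)

    # Score function for a softmax policy: grad log pi(a) = e_a - pi.
    g = -pi.copy()
    g[a] += 1.0

    # Incremental Fisher estimate (running average of score outer products).
    fisher += 0.05 * (np.outer(g, g) - fisher)

    # Sample of the vanilla policy gradient, then the natural gradient
    # via a regularized solve against the Fisher estimate.
    grad = g * r
    ngrad = np.linalg.solve(fisher + 1e-3 * np.eye(2), grad)

    # Adaptive step size: cap the update's length in the Fisher norm.
    # (A simplified placeholder for the paper's derived upper bound.)
    fisher_norm = np.sqrt(ngrad @ fisher @ ngrad) + 1e-8
    alpha = min(0.1, 0.5 / fisher_norm)
    theta += alpha * ngrad

pi = softmax(theta)  # final policy should strongly prefer arm 1
```

Capping the step in the Fisher norm rather than the Euclidean norm is what makes the cap geometry-aware: it bounds how far the policy distribution itself moves per update, which is the quantity an overshoot guarantee has to control.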