On the Effects of Domain Size and Complexity in Empirical Distribution of Reinforcement Learning
Kazunori IWATA, Kazushi IKEDA, Hideaki SAKAI
IEICE TRANSACTIONS on Information and Systems
Publication Date: 2005/01/01
Print ISSN: 0916-8532
Type of Manuscript: PAPER
Category: Artificial Intelligence and Cognitive Science
Keywords: reinforcement learning, Markov decision process, Lempel-Ziv coding, domain size, stochastic complexity
We regard the events of a Markov decision process as the outputs of a Markov information source, so that the randomness of an empirical sequence can be analyzed via the codeword length of that sequence. Randomness is an important viewpoint in reinforcement learning, since learning amounts to eliminating randomness in order to find an optimal policy; the occurrence of an optimal empirical sequence also depends on this randomness. We therefore introduce Lempel-Ziv coding to measure the randomness, which decomposes into two factors: the domain size and the stochastic complexity. The experimental results confirm that both learning and the occurrence of optimal empirical sequences depend on the randomness, and show that in early stages the randomness is mainly characterized by the domain size, whereas as the number of time steps increases it depends increasingly on the complexity of the Markov decision process.
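The codeword-length measure can be illustrated with an LZ78-style incremental parsing, the standard form of Lempel-Ziv coding. In this sketch, each parsed phrase costs a pointer to an earlier phrase (a term that grows with the sequence's complexity) plus one fresh symbol (a term set by the log of the domain size), loosely mirroring the two factors discussed above. The function names and the exact length formula here are illustrative assumptions, not the paper's definitions:

```python
import math

def lz78_parse(sequence):
    """LZ78 incremental parsing: repeatedly take the shortest prefix of the
    remaining sequence that has not yet appeared as a phrase."""
    seen = set()
    phrases = []
    current = ()
    for symbol in sequence:
        current = current + (symbol,)
        if current not in seen:
            seen.add(current)
            phrases.append(current)
            current = ()
    if current:
        phrases.append(current)  # possibly repeated final phrase
    return phrases

def lz_codeword_length(sequence, alphabet_size):
    """Approximate LZ78 codeword length in bits (illustrative formula):
    each of the c phrases is encoded as a pointer to a previous phrase
    (about log2 c bits) plus one new symbol (log2 alphabet_size bits)."""
    c = len(lz78_parse(sequence))
    return c * (math.log2(max(c, 2)) + math.log2(alphabet_size))
```

For example, the binary sequence "aaabba" parses into the phrases a, aa, b, ba; a more random sequence over the same domain yields more phrases and hence a longer codeword, which is the sense in which codeword length measures randomness here.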