Maintaining System State Information in a Multiagent Environment for Effective Learning

Gang CHEN  Zhonghua YANG  Hao HE  Kiah-Mok GOH  

IEICE TRANSACTIONS on Information and Systems   Vol.E88-D   No.1   pp.127-134
Publication Date: 2005/01/01
DOI: 10.1093/ietisy/e88-d.1.127
Print ISSN: 0916-8532
Type of Manuscript: PAPER
Category: Distributed Cooperation and Agents
Keywords: multiagent system, system state, distributed protocol, token ring


A fundamental issue in multiagent reinforcement learning is how to cope with an agent's limited local knowledge in order to achieve effective learning. In this paper, we argue that this issue can be solved more effectively if agents are equipped with a consistent global view. We achieve this by requiring agents to follow an interaction protocol. The required properties of such a protocol are derived and theoretically analyzed, and a distributed protocol satisfying them is presented. Experimental evaluations are conducted on a well-known test case (the pursuit game) in the context of two learning algorithms. The results show that the protocol is effective and that the reinforcement learning algorithms using it perform substantially better.
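The abstract does not spell out the protocol itself, but its keywords (distributed protocol, token ring) suggest the general shape: agents circulate a token around a logical ring, each appending its latest local state, so that every agent ends up with the same snapshot of the system state. The sketch below is only an illustration of that idea under stated assumptions; the class and function names (`Agent`, `on_token`, `circulate`) are hypothetical and not taken from the paper.

```python
class Agent:
    """One agent in a logical token ring (illustrative sketch, not the
    paper's actual protocol)."""

    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.local_state = None   # this agent's latest local observation
        self.global_view = {}     # agent's current view of the system state

    def observe(self, state):
        self.local_state = state

    def on_token(self, token):
        """Receive the token, record own state in it, snapshot what has
        been seen so far, and return the token for the next agent."""
        token[self.agent_id] = self.local_state
        self.global_view = dict(token)
        return token


def circulate(agents, rounds=2):
    """Pass the token around the ring. After one full round the token
    holds every agent's state; after a second round every agent has seen
    the complete token, so all global views agree."""
    token = {}
    for _ in range(rounds):
        for agent in agents:
            token = agent.on_token(token)
    return token
```

With four agents, two rounds suffice for all `global_view` snapshots to be identical, which is the "consistent global view" the abstract argues is needed for effective learning.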