
Please use this identifier to cite or link to this item: http://acervodigital.unesp.br/handle/11449/8273
Title: 
Experience generalization for multi-agent reinforcement learning
Author(s): 
Institution: 
Universidade Estadual Paulista (UNESP)
Abstract: 
On-line learning methods have been applied successfully in multi-agent systems to achieve coordination among agents. Learning in multi-agent systems implies a non-stationary scenario as perceived by the agents, since the behavior of other agents may change as they simultaneously learn to improve their own actions. Non-stationary scenarios can be modeled as Markov Games, which can be solved using the Minimax-Q algorithm, a combination of Q-learning (a Reinforcement Learning (RL) algorithm that directly learns an optimal control policy) and the Minimax algorithm. However, finding optimal control policies with any RL algorithm (Q-learning and Minimax-Q included) can be very time consuming. To improve the learning time of Q-learning, we considered the QS-algorithm, in which a single experience can update more than one action value by means of a spreading function. In this paper, we contribute the Minimax-QS algorithm, which combines the Minimax-Q algorithm and the QS-algorithm. We conduct a series of empirical evaluations of the algorithm in a simplified simulator of the soccer domain. We show that even with a very simple domain-dependent spreading function, the performance of the learning algorithm can be improved.
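
The abstract's update rule can be illustrated with a short sketch. The Python below is not the paper's implementation: the tabular state encoding, the pure-strategy approximation of the minimax value (the full Minimax-Q algorithm solves a small linear program per state for a mixed policy), and the spreading function sigma are all assumptions made for this example.

    import numpy as np

    def minimax_value(Q_s):
        # Q_s is an (n_actions x n_opp_actions) payoff matrix for one state.
        # Sketch-level approximation of the minimax state value using pure
        # strategies: max over own actions of the worst-case opponent reply.
        return Q_s.min(axis=1).max()

    def minimax_qs_update(Q, s, a, o, r, s_next, sigma, alpha=0.1, gamma=0.9):
        # One Minimax-QS-style step: compute the usual Minimax-Q target from
        # the next state's minimax value, then spread the update from the
        # visited pair (s, a) to similar pairs (s2, a2) weighted by sigma.
        target = r + gamma * minimax_value(Q[s_next])
        n_states, n_actions, _ = Q.shape
        for s2 in range(n_states):
            for a2 in range(n_actions):
                w = sigma(s, a, s2, a2)  # weight in [0, 1]; sigma(s, a, s, a) == 1
                if w > 0.0:
                    Q[s2, a2, o] += alpha * w * (target - Q[s2, a2, o])

    def sigma(s, a, s2, a2):
        # Hypothetical spreading function: full update at the visited pair,
        # a small spill-over to the same action in neighbouring states.
        if s == s2 and a == a2:
            return 1.0
        return 0.2 if abs(s - s2) == 1 and a == a2 else 0.0

    # Example: 5 states, 4 agent actions, 4 opponent actions.
    Q = np.zeros((5, 4, 4))
    minimax_qs_update(Q, s=2, a=1, o=3, r=1.0, s_next=3, sigma=sigma)

The point of the spreading step is that a single interaction updates a whole neighbourhood of similar state-action pairs rather than one table entry, which is what shortens learning time even when, as the abstract notes, the domain-dependent spreading function is very simple.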
Issue Date: 
1-Jan-2001
Citation: 
SCCC 2001: XXI International Conference of the Chilean Computer Science Society, Proceedings. Los Alamitos: IEEE Computer Society, p. 233-239, 2001.
Pages: 
233-239
Publisher: 
Institute of Electrical and Electronics Engineers (IEEE) Computer Society
Source: 
http://dx.doi.org/10.1109/SCCC.2001.972652
URI: 
http://hdl.handle.net/11449/8273
Access Rights: 
Restricted access
Type: 
Other
Appears in Collections: Artigos, TCCs, Teses e Dissertações da Unesp

There are no files associated with this item.
 

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.