Reinforcement learning: Exploration-exploitation dilemma in multi-agent foraging task
M. Yogeswaran
Published in 2012
Volume: 49
Issue: 3
Pages: 223 - 236
Abstract
The exploration-exploitation dilemma has been an unresolved issue within the framework of multi-agent reinforcement learning. An agent must explore in order to discover states that potentially yield higher rewards in the future, or exploit the state that yields the highest reward based on its existing knowledge. Pure exploration degrades the agent's learning but increases its flexibility to adapt in a dynamic environment. On the other hand, pure exploitation drives the agent's learning process towards locally optimal solutions. Various learning policies have been studied to address this issue. This paper presents critical experimental results on a number of learning policies reported in the open literature. The learning policies, namely greedy, ε-greedy, Boltzmann Distribution (BD), Simulated Annealing (SA), Probability Matching (PM) and Optimistic Initial Values (OIV), are implemented to study their performance on a modelled multi-agent foraging task. Based on the numerical results obtained, the performances of the learning policies are discussed. © Operational Research Society of India 2012.
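To illustrate the kind of policies the abstract compares, below is a minimal sketch of two of them, ε-greedy and Boltzmann action selection, over a table of Q-values. This is an illustrative reconstruction of the standard formulations, not the paper's actual implementation; function names and the Q-value table are hypothetical.

```python
import math
import random

def epsilon_greedy(q_values, epsilon):
    """With probability epsilon pick a random action (explore),
    otherwise pick the action with the highest Q-value (exploit)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def boltzmann(q_values, temperature):
    """Sample an action with probability proportional to exp(Q/T).
    High T gives near-uniform exploration; low T approaches greedy
    exploitation, so annealing T trades one off against the other."""
    prefs = [math.exp(q / temperature) for q in q_values]
    total = sum(prefs)
    r = random.random() * total
    cumulative = 0.0
    for action, p in enumerate(prefs):
        cumulative += p
        if r < cumulative:
            return action
    return len(q_values) - 1  # guard against floating-point round-off

q = [0.1, 0.9, 0.3]          # hypothetical Q-values for three actions
print(epsilon_greedy(q, 0.0))  # pure exploitation: always the argmax
print(boltzmann(q, 0.05))      # near-greedy at low temperature
```

Simulated Annealing, in this context, is essentially the Boltzmann rule with a temperature that is decayed over learning episodes, shifting the agent from exploration towards exploitation.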
About the journal
Journal: OPSEARCH
ISSN: 0030-3887