Towards Cooperation and Fairness in Multi-Agent Reinforcement Learning

Published in CoCoMARL Workshop @RLC, 2024

Authors: Jasmine Jerry Aloor, Siddharth Nayak, Sydney Dolan, Victor L Qin, Hamsa Balakrishnan

We consider the problem of fair multi-agent navigation for a large group of decentralized agents using multi-agent reinforcement learning (MARL). Previous MARL research has optimized for efficiency, safety, or both. We introduce a methodology that considers fairness over the entire simulation period using a specialized reward function and a decentralized goal-assignment approach. Specifically, our work (1) incorporates fairness into the reward function, which (2) leads agents to learn a fair assignment of goals, and (3) achieves near-perfect goal coverage in the navigation scenario using each agent's local observations, in a decentralized manner that scales to any number of agents. We show that our approach achieves higher fairness metrics compared to centralized fairness-based goal-assignment methods. [PDF], [Poster], [Website], [Code]
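As a rough illustration of what "incorporating fairness into the reward function" can look like, the sketch below combines a goal-reaching reward with a penalty on workload imbalance across agents. The function name, the coefficient-of-variation fairness term, and all weights are placeholders for illustration only, not the formulation used in the paper.

```python
import numpy as np

def fairness_shaped_reward(goal_reached: bool, dist_traveled: np.ndarray) -> float:
    """Illustrative fairness-shaped reward (not the paper's exact formulation).

    goal_reached   -- whether this agent reached its assigned goal this step
    dist_traveled  -- cumulative distance traveled by every agent so far
    """
    goal_reward = 10.0 if goal_reached else 0.0

    # Penalize uneven effort across agents via the coefficient of variation
    # of cumulative distances; lower spread means a fairer goal assignment.
    mean_d = np.mean(dist_traveled)
    cv = np.std(dist_traveled) / (mean_d + 1e-8)

    fairness_weight = 1.0  # placeholder trade-off weight
    return goal_reward - fairness_weight * cv
```

In a setup like this, each agent's reward depends not only on reaching its own goal but also on how evenly effort is distributed across the team, which nudges the learned policies toward fairer goal assignments.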
