NASH EQUILIBRIUM WITH REINFORCEMENT LEARNING FOR SCALABLE MULTI-ROBOT COORDINATION

Authors

  • Tahar BRAHIMI, Amar Telidji University
  • Atallah BENALIA, Amar Telidji University
  • Iyad AMEUR, Amar Telidji University

Keywords:

Multi-Robot Systems; Nash Equilibrium; Reinforcement Learning; Formation Control; Obstacle Avoidance; Decentralized Coordination

Abstract

This study proposes a decentralized game-theoretic framework for real-time formation navigation in multi-robot teams by integrating Nash equilibrium strategies with reinforcement learning to handle dynamic, constrained, and uncertain environments. Each robot independently minimizes a multi-objective cost, encompassing collision avoidance, formation maintenance, and target tracking, while a learned policy adapts to changing conditions. Rigorous analysis demonstrates that the hybrid control algorithm converges asymptotically to stable Nash equilibria, ensuring conflict-free coordination, and achieves a price of stability equal to one for scalar cost functions, indicating that decentralized decisions match centralized optimal performance. Moreover, both computational and communication complexities scale linearly with team size, facilitating large-scale deployment. By embedding real-time obstacle avoidance and dynamic formation reconfiguration, the method exhibits resilient behavior across aerial and marine scenarios. Extensive simulations and real-world-inspired experiments validate superior collision avoidance, rapid formation recovery after disturbances, and efficient decision-making without excessive communication overhead. This work advances autonomous multi-robot systems by delivering a scalable, adaptive strategy suitable for mission-critical operations in complex and uncertain settings.
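
As a rough illustration of the coordination scheme the abstract describes, the sketch below lets each robot repeatedly lower its own multi-objective cost (collision avoidance, formation maintenance, target tracking) while the other robots' positions are held fixed, so the team settles toward an approximate Nash equilibrium. The quadratic cost terms, the weights, the formation offsets, and the gradient-play update are all assumptions made for this sketch; the paper's actual cost functions and learned policy are not reproduced here.

```python
# Minimal sketch (illustrative assumptions, not the authors' implementation)
# of decentralized coordination via iterated self-interested updates: each
# robot i minimizes J_i = w_col*J_collision + w_form*J_formation + w_tgt*J_target.
import numpy as np

N = 4                                  # number of robots (assumed)
SAFE_DIST = 1.0                        # assumed minimum separation
W_COL, W_FORM, W_TGT = 1.0, 0.5, 0.5   # assumed cost weights

def cost(i, pos, offsets, target):
    """Robot i's multi-objective cost; other robots' positions held fixed."""
    p = pos[i]
    others = np.delete(pos, i, axis=0)
    d = np.linalg.norm(others - p, axis=1)
    j_col = np.sum(np.maximum(0.0, SAFE_DIST - d) ** 2)  # collision penalty
    j_form = np.sum((p - (target + offsets[i])) ** 2)    # formation-slot error
    j_tgt = np.sum((p - target) ** 2)                    # target-tracking error
    return W_COL * j_col + W_FORM * j_form + W_TGT * j_tgt

def gradient_play_step(i, pos, offsets, target, step=0.05, eps=1e-4):
    """One small step of robot i down its own cost (a gradient-play
    approximation of its best response to the other robots)."""
    g = np.zeros(2)
    base = cost(i, pos, offsets, target)
    for k in range(2):
        bumped = pos.copy()
        bumped[i, k] += eps
        g[k] = (cost(i, bumped, offsets, target) - base) / eps
    new = pos.copy()
    new[i] = pos[i] - step * g
    return new

# Iterated updates: when no robot can further lower its own cost by moving
# unilaterally, the configuration is an approximate Nash equilibrium.
rng = np.random.default_rng(0)
pos = rng.uniform(-5.0, 5.0, size=(N, 2))
offsets = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
target = np.array([10.0, 0.0])
for _ in range(500):
    for i in range(N):
        pos = gradient_play_step(i, pos, offsets, target)
```

Each robot's update uses only its own cost and the current positions of its neighbors, which is consistent with the per-robot computation and communication scaling the abstract claims; the reinforcement-learning component of the paper (adapting the policy to changing conditions) is beyond the scope of this sketch.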

Published

2025-06-11
