Abstract
Multi-agent path planning (MAPP) is crucial for large-scale mobile robot systems to operate safely and reliably in complex environments. Existing learning-based decentralized MAPP approaches allow each agent to gather information from nearby agents, enabling more efficient coordination. However, these approaches often struggle to handle each agent's local information inputs effectively, and their inter-agent communication mechanisms require further refinement to deal with congested traffic scenarios. To address these issues, we propose a decentralized MAPP approach based on imitation learning and selective communication. Our approach adopts an imitation learning architecture that enables agents to rapidly learn complex behaviors from expert planning experience. The information extraction layer integrates a convolutional neural network (CNN) and a gated recurrent unit (GRU) to capture features from local field-of-view observations. A two-stage selective communication process based on a graph attention network (GAT) is developed to reduce the number of neighbor agents involved in inter-agent communication. In addition, an adaptive strategy switching mechanism utilizing local expert-planned paths is designed to help agents escape from local traps. The effectiveness of our proposed approach is evaluated in simulated grid environments with varying map sizes, obstacle densities, and numbers of agents. Experimental results show that our approach outperforms other decentralized path planning methods in success rate while maintaining the lowest flowtime variation and communication frequency. Furthermore, our approach is computationally efficient and scalable, making it suitable for real-world applications.