Message-aware graph attention networks for large-scale multi-robot path planning
The domains of transport and logistics are increasingly relying on autonomous mobile robots for the handling and distribution of passengers or resources. At large system scales, finding decentralized path planning and coordination solutions is key to efficient system performance. Recently, Graph Neural Networks (GNNs) have become popular due to their ability to learn communication policies in decentralized multi-agent systems. Yet, vanilla GNNs rely on simplistic message aggregation mechanisms that prevent agents from prioritizing important information. To tackle this challenge, in this letter, we extend our previous work that utilizes GNNs in multi-agent path planning by incorporating a novel mechanism that allows for message-dependent attention. Our Message-Aware Graph Attention neTwork (MAGAT) is based on a key-query-like mechanism that determines the relative importance of features in the messages received from various neighboring robots. We show that MAGAT is able to achieve performance close to that of a coupled centralized expert algorithm. Further, ablation studies and comparisons to several benchmark models show that our attention mechanism is highly effective across different robot densities and performs stably under different communication bandwidth constraints. Experiments demonstrate that our model generalizes well to previously unseen problem instances, and that it achieves a 47% improvement over the benchmark success rate, even in very large-scale instances that are 100× larger than the training instances.
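For intuition, the sketch below illustrates how a key-query score over received messages could weight neighbors before aggregation, as described in the abstract. It is a minimal PyTorch example under assumed interfaces, not the published MAGAT implementation; the class name, tensor shapes, and dimensions are illustrative assumptions.

```python
# Illustrative sketch (not the authors' code): key-query attention over
# messages received from neighboring robots, followed by weighted aggregation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MessageAwareAttention(nn.Module):
    """Weights each neighbor's message by a key-query compatibility score."""

    def __init__(self, feat_dim: int, key_dim: int):
        super().__init__()
        self.query = nn.Linear(feat_dim, key_dim, bias=False)  # query from a robot's own features
        self.key = nn.Linear(feat_dim, key_dim, bias=False)    # keys from received messages
        self.scale = key_dim ** 0.5

    def forward(self, own_feat: torch.Tensor, neighbor_msgs: torch.Tensor,
                mask: torch.Tensor) -> torch.Tensor:
        # own_feat:      (N, F) features of each of N robots
        # neighbor_msgs: (N, K, F) messages from up to K neighbors per robot
        # mask:          (N, K) 1 where a neighbor is within communication range, else 0
        q = self.query(own_feat).unsqueeze(1)           # (N, 1, D)
        k = self.key(neighbor_msgs)                     # (N, K, D)
        scores = (q * k).sum(-1) / self.scale           # (N, K) message-dependent scores
        scores = scores.masked_fill(mask == 0, float("-inf"))
        alpha = F.softmax(scores, dim=-1)               # per-neighbor attention weights
        alpha = torch.nan_to_num(alpha)                 # robots with no neighbors get zero weights
        return (alpha.unsqueeze(-1) * neighbor_msgs).sum(dim=1)  # (N, F) aggregated message
```

In this sketch, the softmax over key-query scores is what lets a robot prioritize informative neighbors instead of averaging all incoming messages uniformly, which is the limitation of vanilla GNN aggregation noted above.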