KDLGT: A Linear Graph Transformer Framework via Kernel Decomposition Approach
Yi Wu, Yanyang Xu, Wenhao Zhu, Guojie Song, Zhouchen Lin, Liang Wang, Shaoguo Liu
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 2370-2378.
https://doi.org/10.24963/ijcai.2023/263
In recent years, graph Transformers (GTs) have proven to be a robust architecture for a wide range of graph learning tasks. However, the quadratic complexity of self-attention limits the scalability of GTs on large-scale data compared with Graph Neural Networks (GNNs). In this work, we propose the Kernel Decomposition Linear Graph Transformer (KDLGT), an accelerating framework for building scalable and powerful GTs. KDLGT employs a kernel decomposition approach to rearrange the order of matrix multiplication, reducing the complexity from quadratic to linear. Additionally, it categorizes GTs into three distinct types and provides a tailored accelerating method for each, so that the framework covers all types of GTs. Furthermore, we provide a theoretical analysis of the performance gap between KDLGT and self-attention to ensure its effectiveness. Under this framework, we select two representative GTs to design our models. Experiments on both real-world and synthetic datasets show that KDLGT not only achieves state-of-the-art performance on various datasets but also reaches an acceleration ratio of approximately 10 on graphs of certain sizes.
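To make the quadratic-to-linear step concrete, the following is a minimal sketch of the kernel trick behind linear attention: replacing softmax(QK^T) with a feature-map product phi(Q)phi(K)^T and reassociating the multiplication so the n x n attention matrix is never formed. The feature map (ELU + 1, as in linear transformers) and the NumPy implementation are illustrative assumptions, not the specific decomposition used in KDLGT.

import numpy as np

def softmax_attention(Q, K, V):
    # Standard self-attention: materializes the n x n score matrix QK^T,
    # so time and memory are quadratic in the number of nodes n.
    scores = Q @ K.T / np.sqrt(Q.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V, phi=lambda x: np.where(x > 0, x + 1.0, np.exp(x))):
    # Kernelized attention: approximate exp(q . k) by phi(q) . phi(k) and
    # reassociate phi(Q) (phi(K)^T V). Only d x d and length-d intermediates
    # are formed, so the cost is O(n * d^2) -- linear in n.
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V                       # (d, d_v), shared across all queries
    normalizer = Qp @ Kp.sum(axis=0)    # (n,), the kernelized softmax denominator
    return (Qp @ KV) / normalizer[:, None]

# Both functions return (n, d_v) outputs; the linear variant approximates the
# softmax version without ever building the n x n attention matrix.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4096, 64)) for _ in range(3))
out = linear_attention(Q, K, V)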
Keywords:
Data Mining: DM: Big data and scalability
Data Mining: DM: Mining graphs