How Does Knowledge Graph Embedding Extrapolate to Unseen Data: A Semantic Evidence View
DOI:
https://doi.org/10.1609/aaai.v36i5.20521
Keywords:
Knowledge Representation And Reasoning (KRR), Data Mining & Knowledge Management (DMKM)
Abstract
Knowledge Graph Embedding (KGE) aims to learn representations for entities and relations. Most KGE models have achieved great success, especially in extrapolation scenarios. Specifically, given an unseen triple (h, r, t), a trained model can still correctly predict t from (h, r, ?) or h from (?, r, t); such extrapolation ability is impressive. However, most existing KGE works focus on designing delicate triple modeling functions, which mainly tell us how to measure the plausibility of observed triples, but offer limited explanation of why the methods can extrapolate to unseen data and what the important factors are that help KGE extrapolate. Therefore, in this work we attempt to study KGE extrapolation through two problems: 1. How does KGE extrapolate to unseen data? 2. How can we design a KGE model with better extrapolation ability? For problem 1, we first discuss the impact factors for extrapolation and, from the relation, entity and triple levels respectively, propose three kinds of Semantic Evidence (SE), which can be observed from the training set and provide important semantic information for extrapolation. We then verify the effectiveness of the SEs through extensive experiments on several typical KGE methods. For problem 2, to make better use of the three levels of SE, we propose a novel GNN-based KGE model, called Semantic Evidence aware Graph Neural Network (SE-GNN). In SE-GNN, each level of SE is modeled explicitly by the corresponding neighbor pattern and merged sufficiently through multi-layer aggregation, which contributes to obtaining more extrapolative knowledge representations. Finally, through extensive experiments on the FB15k-237 and WN18RR datasets, we show that SE-GNN achieves state-of-the-art performance on the Knowledge Graph Completion task and exhibits better extrapolation ability. Our code is available at https://github.com/renli1024/SE-GNN.
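To make the extrapolation setting concrete, the following minimal sketch illustrates what a generic "triple modeling function" looks like and how a trained embedding model answers an unseen query (h, r, ?) by ranking candidate tails. It uses a DistMult-style bilinear score purely for illustration; the names ToyKGE, score and predict_tail are hypothetical, and this is not the paper's SE-GNN implementation (see the linked repository for that).

# Hypothetical sketch of a KGE triple scoring function and tail prediction
# (illustrative only; not the paper's SE-GNN code).
import torch
import torch.nn as nn

class ToyKGE(nn.Module):
    def __init__(self, num_entities: int, num_relations: int, dim: int = 200):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)   # entity representations
        self.rel = nn.Embedding(num_relations, dim)  # relation representations

    def score(self, h: torch.Tensor, r: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Plausibility of triples (h, r, t); higher means more plausible.
        return (self.ent(h) * self.rel(r) * self.ent(t)).sum(dim=-1)

    def predict_tail(self, h: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
        # Answer the query (h, r, ?) by scoring every entity as a candidate tail.
        cand = self.ent.weight                         # (num_entities, dim)
        return (self.ent(h) * self.rel(r)) @ cand.t()  # (batch, num_entities)

# Usage: rank candidate tails for a query (h=3, r=7) over a toy graph.
model = ToyKGE(num_entities=100, num_relations=20)
scores = model.predict_tail(torch.tensor([3]), torch.tensor([7]))
top10 = scores.topk(10, dim=-1).indices  # highest-scoring candidate tails

The point of this sketch is only the task formulation: the scoring function measures plausibility of observed triples, while the paper's question is what information in the training graph (the three levels of Semantic Evidence) lets such a model rank correct answers for triples it has never seen.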
Published
2022-06-28
How to Cite
Li, R., Cao, Y., Zhu, Q., Bi, G., Fang, F., Liu, Y., & Li, Q. (2022). How Does Knowledge Graph Embedding Extrapolate to Unseen Data: A Semantic Evidence View. Proceedings of the AAAI Conference on Artificial Intelligence, 36(5), 5781-5791. https://doi.org/10.1609/aaai.v36i5.20521
Issue
Section
AAAI Technical Track on Knowledge Representation and Reasoning