Document-level relation extraction with structure enhanced transformer encoder
2022 International Joint Conference on Neural Networks (IJCNN), 2022•ieeexplore.ieee.org
Document-level relation extraction aims to discover relational facts among entity pairs in a document and has attracted increasing attention in recent years. Most existing methods can be grouped into graph-based and transformer-based approaches. However, previous transformer-based methods neglect the structural information between entities, while graph-based methods cannot extract structural information effectively because they isolate the encoding stage from the structure-reasoning stage. In this paper, we propose an effective structure enhanced transformer encoder model (SETE) that integrates entity structural information into the transformer encoder. We first define a mention-level graph based on mention dependencies and convert it to a token-level graph. We then design a dual self-attention mechanism that enriches the structural and contextual information between entities, strengthening the inferential capability of the vanilla transformer encoder. Experiments on three public datasets show that the proposed SETE outperforms previous state-of-the-art methods, and further analyses illustrate the interpretability of our model.
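The abstract does not spell out the dual self-attention mechanism. A minimal NumPy sketch of one plausible reading is shown below: a contextual branch attends over all tokens, a structural branch is masked to the edges of the token-level graph, and the two views are fused by simple addition. The additive fusion and the single-head, unbatched form are assumptions for illustration, not the paper's actual design.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dual_self_attention(X, adj, Wq, Wk, Wv):
    """X: (n_tokens, d) token embeddings; adj: (n, n) token-level graph
    adjacency; Wq/Wk/Wv: (d, d) projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # contextual branch: standard self-attention over all tokens
    contextual = softmax(scores) @ V
    # structural branch: attention restricted to graph edges
    mask = np.where(adj > 0, 0.0, -1e9)
    structural = softmax(scores + mask) @ V
    # fuse both views (additive fusion is an assumption here)
    return contextual + structural
```

With an identity adjacency each token's structural branch attends only to itself, so the structural term reduces to the token's own value vector.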