The premise of our second approach, NEGATER-∇, is that the candidates that most "surprise" the LM when labeled as true are the most likely to be negative, because they most directly contradict what the LM has observed during fine-tuning: such a candidate contradicts or negates the LM's positive beliefs.
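The ranking idea above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the toy `surprise` function stands in for the fine-tuned LM's loss (or, in NEGATER-∇, the gradient magnitude of that loss) on a candidate labeled as true, and the hard-coded probabilities stand in for a real contextual LM's outputs.

```python
import math

def surprise(lm_prob_true):
    """Negative log-likelihood the LM assigns to a candidate being true.

    A candidate the LM considers very unlikely to be true (low probability)
    yields high surprise, so it ranks high as a probable negative.
    """
    return -math.log(lm_prob_true)

def rank_candidates(candidates):
    """Rank (triple, prob) pairs by descending surprise, i.e. most
    probable negatives first."""
    return sorted(candidates, key=lambda c: surprise(c[1]), reverse=True)

# Toy probabilities standing in for a fine-tuned LM's beliefs.
cands = [
    (("bird", "CapableOf", "fly"), 0.95),   # agrees with the LM: low surprise
    (("fish", "CapableOf", "read"), 0.02),  # contradicts the LM: high surprise
    (("dog", "CapableOf", "bark"), 0.90),
]
ranked = rank_candidates(cands)
print(ranked[0][0])  # → ('fish', 'CapableOf', 'read')
```

The candidate the LM finds least plausible surfaces first, matching the intuition that the most "surprising" candidates are the best negatives.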
This paper proposes NegatER, a framework that ranks potential negatives in commonsense KBs using a contextual language model (LM).
Experiments demonstrate that, compared to multiple contrastive data augmentation approaches, NegatER yields negatives that are more grammatical, coherent, ...
This repository contains the data and PyTorch implementation of the EMNLP 2021 paper NegatER: Generating Negatives in Commonsense Knowledge Bases by Mining ...
Tara Safavi, Jing Zhu, Danai Koutra: NegatER: Unsupervised Discovery of Negatives in Commonsense Knowledge Bases. EMNLP (1) 2021: 5633-5646.