Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision
DOI:
https://doi.org/10.1609/aaai.v35i14.17492
Keywords:
Interpretability & Analysis of NLP Models, Nonmonotonic Reasoning
Abstract
The black-box nature of neural models has motivated a line of research that aims to generate natural language rationales to explain why a model made certain predictions. Such rationale generation models, to date, have been trained on dataset-specific crowdsourced rationales, but this approach is costly and does not generalize to new tasks and domains. In this paper, we investigate the extent to which neural models can reason about natural language rationales that explain model predictions, relying only on distant supervision with no additional annotation cost for human-written rationales. We investigate multiple ways to automatically generate rationales using pre-trained language models, neural knowledge models, and distant supervision from related tasks, and train generative models capable of composing explanatory rationales for unseen instances. We demonstrate our approach on the defeasible inference task, a nonmonotonic reasoning task in which an inference may be strengthened or weakened when new information (an update) is introduced. Our model shows promise at generating post-hoc rationales explaining why an inference is more or less likely given the additional information; however, it mostly generates trivial rationales, reflecting the fundamental limitations of neural language models. Conversely, the more realistic setup of jointly predicting the update or its type and generating a rationale is more challenging, suggesting an important future direction.
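To make the task format concrete, the following is a minimal, hypothetical sketch of a defeasible inference instance (premise, hypothesis, and an update that strengthens or weakens the hypothesis) and of prompting an off-the-shelf seq2seq language model for a post-hoc rationale. The model name ("t5-base"), the prompt template, and the example sentences are illustrative assumptions, not the paper's actual data or configuration.

```python
# Minimal sketch (not the authors' code) of the defeasible inference setup:
# a premise-hypothesis pair plus an "update" sentence that strengthens or
# weakens the hypothesis, with a pre-trained seq2seq model prompted to
# generate a post-hoc rationale.
from dataclasses import dataclass

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer


@dataclass
class DefeasibleInstance:
    premise: str      # background context
    hypothesis: str   # a plausible inference from the premise
    update: str       # new information that shifts the inference
    update_type: str  # "strengthener" or "weakener"


def rationale_prompt(x: DefeasibleInstance) -> str:
    """Linearize an instance into a text-to-text prompt (hypothetical format)."""
    return (
        f"premise: {x.premise} hypothesis: {x.hypothesis} "
        f"update: {x.update} ({x.update_type}) "
        f"explain why the update makes the hypothesis more or less likely:"
    )


if __name__ == "__main__":
    # Illustrative example of a weakening update.
    example = DefeasibleInstance(
        premise="Two men are competing in a bicycle race.",
        hypothesis="They are racing in the Tour de France.",
        update="One of the riders is wearing a school uniform.",
        update_type="weakener",
    )

    tokenizer = AutoTokenizer.from_pretrained("t5-base")
    model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

    inputs = tokenizer(rationale_prompt(example), return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=48)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

In this post-hoc setting the update and its type are given and only the rationale is generated; the joint setting described in the abstract would instead require the model to produce the update (or its type) together with the rationale.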
Published
2021-05-18
How to Cite
Brahman, F., Shwartz, V., Rudinger, R., & Choi, Y. (2021). Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14), 12592-12601. https://doi.org/10.1609/aaai.v35i14.17492
Section
AAAI Technical Track on Speech and Natural Language Processing I