Teaching Broad Reasoning Skills via Decomposition-Guided Contexts
H Trivedi, N Balasubramanian, T Khot… - Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022 - par.nsf.gov
Question-answering datasets require a broad set of reasoning skills. We show how to use question decompositions to teach language models these broad reasoning skills in a robust fashion. Specifically, we use widely available QDMR representations to programmatically create hard-to-cheat synthetic contexts for real questions in six multi-step reasoning datasets. These contexts are carefully designed to avoid common reasoning shortcuts prevalent in real contexts that prevent models from learning the right skills. This results in a pretraining dataset, named TeaBReaC, containing 525K multi-step questions (with associated formal programs) covering about 900 reasoning patterns. We show that pretraining standard language models (LMs) on TeaBReaC before fine-tuning them on target datasets improves their performance by up to 13 F1 points across 4 multi-step QA datasets, with up to 21 point gain on more complex questions. The resulting models also demonstrate higher robustness, with a 5-8 F1 point improvement on two contrast sets. Furthermore, TeaBReaC pretraining substantially improves model performance and robustness even when starting with numerate LMs pretrained using recent methods (e.g., PReasM, POET). Our work thus shows how to effectively use decomposition-guided contexts to robustly teach multi-step reasoning.
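To make the core idea concrete: a minimal sketch (not the authors' code; all function names, step formats, and example values are hypothetical) of how a QDMR-style question decomposition could be programmatically turned into a synthetic supporting context, with each reasoning step realized as its own fact sentence so the answer cannot be read off from surface shortcuts.

```python
# Illustrative sketch only, assuming a simplified decomposition format of
# (subject, relation, value) triples; the actual TeaBReaC construction
# described in the abstract is more involved.

def synthetic_context(decomposition):
    """Turn each decomposition step into one supporting sentence.

    Every step contributes a separate fact, so a model must chain the
    facts (a 'hard-to-cheat' context) rather than match surface cues.
    """
    sentences = []
    for step, (subject, relation, value) in enumerate(decomposition, 1):
        sentences.append(f"Fact {step}: the {relation} of {subject} is {value}.")
    return " ".join(sentences)

# Hypothetical two-step decomposition for a comparison question such as
# "Which team scored more points, team A or team B?"
decomp = [
    ("team A", "score", "17"),
    ("team B", "score", "24"),
]
print(synthetic_context(decomp))
```

Under this toy scheme, answering the comparison question requires retrieving both generated facts and comparing them, which is the kind of multi-step behavior the pretraining data is designed to elicit.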