Tom McCoy 0001
Person information
- affiliation: Princeton University, NJ, USA
- affiliation (former): Johns Hopkins University, Baltimore, MD, USA
- affiliation (former): Yale University, New Haven, CT, USA
2020 – today
- 2024
- [c19] Akshara Prabhakar, Thomas L. Griffiths, R. Thomas McCoy: Deciphering the Factors Influencing the Efficacy of Chain-of-Thought: Probability, Memorization, and Noisy Reasoning. EMNLP (Findings) 2024: 3710-3724
- [c18] Nathan A. Chi, Teodor Malchev, Riley Kong, Ryan Chi, Lucas Huang, Ethan A. Chi, R. Thomas McCoy, Dragomir Radev: ModeLing: A Novel Dataset for Testing Linguistic Reasoning in Language Models. SIGTYP 2024: 113-119
- [i27] Ioana Marinescu, R. Thomas McCoy, Thomas L. Griffiths: Distilling Symbolic Priors for Concept Learning into Neural Networks. CoRR abs/2402.07035 (2024)
- [i26] Nathan A. Chi, Teodor Malchev, Riley Kong, Ryan A. Chi, Lucas Huang, Ethan A. Chi, R. Thomas McCoy, Dragomir Radev: modeLing: A Novel Dataset for Testing Linguistic Reasoning in Language Models. CoRR abs/2406.17038 (2024)
- [i25] Zhenghao Zhou, Robert Frank, R. Thomas McCoy: Is In-Context Learning a Type of Gradient-Based Learning? Evidence from the Inverse Frequency Effect in Structural Priming. CoRR abs/2406.18501 (2024)
- [i24] Akshara Prabhakar, Thomas L. Griffiths, R. Thomas McCoy: Deciphering the Factors Influencing the Efficacy of Chain-of-Thought: Probability, Memorization, and Noisy Reasoning. CoRR abs/2407.01687 (2024)
- [i23] R. Thomas McCoy, Shunyu Yao, Dan Friedman, Mathew D. Hardy, Thomas L. Griffiths: When a language model is optimized for reasoning, does it still show embers of autoregression? An analysis of OpenAI o1. CoRR abs/2410.01792 (2024)
- 2023
- [j3] R. Thomas McCoy, Paul Smolensky, Tal Linzen, Jianfeng Gao, Asli Celikyilmaz: How Much Do Language Models Copy From Their Training Data? Evaluating Linguistic Novelty in Text Generation Using RAVEN. Trans. Assoc. Comput. Linguistics 11: 652-670 (2023)
- [c17] Aditya Yedetore, Tal Linzen, Robert Frank, R. Thomas McCoy: How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech. ACL (1) 2023: 9370-9393
- [i22] Aditya Yedetore, Tal Linzen, Robert Frank, R. Thomas McCoy: How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech. CoRR abs/2301.11462 (2023)
- [i21] R. Thomas McCoy, Thomas L. Griffiths: Modeling rapid language learning by distilling Bayesian priors into artificial neural networks. CoRR abs/2305.14701 (2023)
- [i20] R. Thomas McCoy, Shunyu Yao, Dan Friedman, Matthew Hardy, Thomas L. Griffiths: Embers of Autoregression: Understanding Large Language Models Through the Problem They are Trained to Solve. CoRR abs/2309.13638 (2023)
- [i19] Thomas L. Griffiths, Jian-Qiao Zhu, Erin Grant, R. Thomas McCoy: Bayes in the age of intelligent machines. CoRR abs/2311.10206 (2023)
- [i18] Liyi Zhang, R. Thomas McCoy, Theodore R. Sumers, Jian-Qiao Zhu, Thomas L. Griffiths: Deep de Finetti: Recovering Topic Distributions from Large Language Models. CoRR abs/2312.14226 (2023)
- 2022
- [j2] Paul Smolensky, Richard Thomas McCoy, Roland Fernandez, Matthew Goldrick, Jianfeng Gao: Neurocompositional Computing: From the Central Paradox of Cognition to a New Generation of AI Systems. AI Mag. 43(3): 308-322 (2022)
- [i17] Paul Smolensky, R. Thomas McCoy, Roland Fernandez, Matthew Goldrick, Jianfeng Gao: Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems. CoRR abs/2205.01128 (2022)
- [i16] Paul Soulos, Sudha Rao, Caitlin Smith, Eric Rosen, Asli Celikyilmaz, R. Thomas McCoy, Yichen Jiang, Coleman Haley, Roland Fernandez, Hamid Palangi, Jianfeng Gao, Paul Smolensky: Structural Biases for Improving Transformers on Translation into Morphologically Rich Languages. CoRR abs/2208.06061 (2022)
- 2021
- [c16] Richard Thomas McCoy, Jennifer Culbertson, Paul Smolensky, Géraldine Legendre: Infinite use of finite means? Evaluating the generalization of center embedding learned from an artificial grammar. CogSci 2021
- [i15] R. Thomas McCoy, Paul Smolensky, Tal Linzen, Jianfeng Gao, Asli Celikyilmaz: How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN. CoRR abs/2111.09509 (2021)
- 2020
- [j1] R. Thomas McCoy, Robert Frank, Tal Linzen: Does Syntax Need to Grow on Trees? Sources of Hierarchical Inductive Bias in Sequence-to-Sequence Networks. Trans. Assoc. Comput. Linguistics 8: 125-140 (2020)
- [c15] Junghyun Min, R. Thomas McCoy, Dipanjan Das, Emily Pitler, Tal Linzen: Syntactic Data Augmentation Increases Robustness to Inference Heuristics. ACL 2020: 2339-2352
- [c14] Michael A. Lepori, Tal Linzen, R. Thomas McCoy: Representations of Syntax [MASK] Useful: Effects of Constituency and Dependency Structure in Recursive LSTMs. ACL 2020: 3306-3316
- [c13] R. Thomas McCoy, Junghyun Min, Tal Linzen: BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance. BlackboxNLP@EMNLP 2020: 217-227
- [c12] Paul Soulos, R. Thomas McCoy, Tal Linzen, Paul Smolensky: Discovering the Compositional Structure of Vector Representations with Role Learning Networks. BlackboxNLP@EMNLP 2020: 238-254
- [c11] Richard Thomas McCoy, Erin Grant, Paul Smolensky, Tom Griffiths, Tal Linzen: Universal linguistic inductive biases via meta-learning. CogSci 2020
- [c10] Michael A. Lepori, R. Thomas McCoy: Picking BERT's Brain: Probing for Linguistic Dependencies in Contextualized Embeddings Using Representational Similarity Analysis. COLING 2020: 3637-3651
- [i14] R. Thomas McCoy, Robert Frank, Tal Linzen: Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequence-to-sequence networks. CoRR abs/2001.03632 (2020)
- [i13] Junghyun Min, R. Thomas McCoy, Dipanjan Das, Emily Pitler, Tal Linzen: Syntactic Data Augmentation Increases Robustness to Inference Heuristics. CoRR abs/2004.11999 (2020)
- [i12] Michael A. Lepori, Tal Linzen, R. Thomas McCoy: Representations of Syntax [MASK] Useful: Effects of Constituency and Dependency Structure in Recursive LSTMs. CoRR abs/2005.00019 (2020)
- [i11] R. Thomas McCoy, Erin Grant, Paul Smolensky, Thomas L. Griffiths, Tal Linzen: Universal linguistic inductive biases via meta-learning. CoRR abs/2006.16324 (2020)
- [i10] Michael A. Lepori, R. Thomas McCoy: Picking BERT's Brain: Probing for Linguistic Dependencies in Contextualized Embeddings Using Representational Similarity Analysis. CoRR abs/2011.12073 (2020)
2010 – 2019
- 2019
- [c9] Tom McCoy, Ellie Pavlick, Tal Linzen: Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference. ACL (1) 2019: 3428-3448
- [c8] Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen, Benjamin Van Durme, Edouard Grave, Ellie Pavlick, Samuel R. Bowman: Can You Tell Me How to Get Past Sesame Street? Sentence-Level Pretraining Beyond Language Modeling. ACL (1) 2019: 4465-4476
- [c7] R. Thomas McCoy, Tal Linzen, Ewan Dunbar, Paul Smolensky: RNNs implicitly implement tensor-product representations. ICLR (Poster) 2019
- [c6] Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, Ellie Pavlick: What do you learn from context? Probing for sentence structure in contextualized word representations. ICLR (Poster) 2019
- [c5] Najoung Kim, Roma Patel, Adam Poliak, Patrick Xia, Alex Wang, Tom McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, Ellie Pavlick: Probing What Different NLP Tasks Teach Machines about Function Word Comprehension. *SEM@NAACL-HLT 2019: 235-249
- [i9] R. Thomas McCoy, Ellie Pavlick, Tal Linzen: Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference. CoRR abs/1902.01007 (2019)
- [i8] Najoung Kim, Roma Patel, Adam Poliak, Alex Wang, Patrick Xia, R. Thomas McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, Ellie Pavlick: Probing What Different NLP Tasks Teach Machines about Function Word Comprehension. CoRR abs/1904.11544 (2019)
- [i7] Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, Ellie Pavlick: What do you learn from context? Probing for sentence structure in contextualized word representations. CoRR abs/1905.06316 (2019)
- [i6] Paul Soulos, Tom McCoy, Tal Linzen, Paul Smolensky: Discovering the Compositional Structure of Vector Representations with Role Learning Networks. CoRR abs/1910.09113 (2019)
- [i5] R. Thomas McCoy, Junghyun Min, Tal Linzen: BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance. CoRR abs/1911.02969 (2019)
- 2018
- [c4] Richard Thomas McCoy, Robert Frank, Tal Linzen: Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks. CogSci 2018
- [c3] Patrick Littell, Tom McCoy, Na-Rae Han, Shruti Rijhwani, Zaid Sheikh, David R. Mortensen, Teruko Mitamura, Lori S. Levin: Parser combinators for Tigrinya and Oromo morphology. LREC 2018
- [i4] R. Thomas McCoy, Robert Frank, Tal Linzen: Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks. CoRR abs/1802.09091 (2018)
- [i3] R. Thomas McCoy, Tal Linzen: Non-entailed subsequences as a challenge for natural language inference. CoRR abs/1811.12112 (2018)
- [i2] R. Thomas McCoy, Tal Linzen, Ewan Dunbar, Paul Smolensky: RNNs Implicitly Implement Tensor Product Representations. CoRR abs/1812.08718 (2018)
- [i1] Samuel R. Bowman, Ellie Pavlick, Edouard Grave, Benjamin Van Durme, Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen: Looking for ELMo's friends: Sentence-Level Pretraining Beyond Language Modeling. CoRR abs/1812.10860 (2018)
- 2017
- [c2] Jungo Kasai, Bob Frank, R. Thomas McCoy, Owen Rambow, Alexis Nasr: TAG Parsing with Neural Networks and Vector Representations of Supertags. EMNLP 2017: 1712-1722
- [c1] Dan Friedman, Jungo Kasai, R. Thomas McCoy, Robert Frank, Forrest Davis, Owen Rambow: Linguistically Rich Vector Representations of Supertags for TAG Parsing. TAG 2017: 122-131