Mark Yatskar
2020 – today
- 2024
- [c30] Yue Yang, Fan-Yun Sun, Luca Weihs, Eli VanderBilt, Alvaro Herrasti, Winson Han, Jiajun Wu, Nick Haber, Ranjay Krishna, Lingjie Liu, Chris Callison-Burch, Mark Yatskar, Aniruddha Kembhavi, Christopher Clark: Holodeck: Language Guided Generation of 3D Embodied AI Environments. CVPR 2024: 16277-16287
- [c29] Yiming Huang, Weilin Wan, Yue Yang, Chris Callison-Burch, Mark Yatskar, Lingjie Liu: CoMo: Controllable Motion Generation Through Language Guided Pose Code Editing. ECCV (29) 2024: 180-196
- [c28] Chaitanya Malaviya, Subin Lee, Sihao Chen, Elizabeth Sieber, Mark Yatskar, Dan Roth: ExpertQA: Expert-Curated Questions and Attributed Answers. NAACL-HLT 2024: 3025-3045
- [c27] Chaitanya Malaviya, Subin Lee, Dan Roth, Mark Yatskar: What if you said that differently?: How Explanation Formats Affect Human Feedback Efficacy and User Perception. NAACL-HLT 2024: 3046-3065
- [i33] Yiming Huang, Weilin Wan, Yue Yang, Chris Callison-Burch, Mark Yatskar, Lingjie Liu: CoMo: Controllable Motion Generation through Language Guided Pose Code Editing. CoRR abs/2403.13900 (2024)
- [i32] Chaitanya Malaviya, Priyanka Agrawal, Kuzman Ganchev, Pranesh Srinivasan, Fantine Huot, Jonathan Berant, Mark Yatskar, Dipanjan Das, Mirella Lapata, Chris Alberti: DOLOMITES: Domain-Specific Long-Form Methodical Tasks. CoRR abs/2405.05938 (2024)
- [i31] Yue Yang, Mona Gandhi, Yufei Wang, Yifan Wu, Michael S. Yao, Chris Callison-Burch, James C. Gee, Mark Yatskar: A Textbook Remedy for Domain Shifts: Knowledge Priors for Medical Image Analysis. CoRR abs/2405.14839 (2024)
- [i30] Renyi Qu, Mark Yatskar: LLM-based Hierarchical Concept Decomposition for Interpretable Fine-Grained Image Classification. CoRR abs/2405.18672 (2024)
- [i29] Matt Deitke, Christopher Clark, Sangho Lee, Rohun Tripathi, Yue Yang, Jae Sung Park, Mohammadreza Salehi, Niklas Muennighoff, Kyle Lo, Luca Soldaini, Jiasen Lu, Taira Anderson, Erin Bransom, Kiana Ehsani, Huong Ngo, Yen-Sung Chen, Ajay Patel, Mark Yatskar, Chris Callison-Burch, Andrew Head, Rose Hendrix, Favyen Bastani, Eli VanderBilt, Nathan Lambert, Yvonne Chou, Arnavi Chheda, Jenna Sparks, Sam Skjonsberg, Michael Schmitz, Aaron Sarnat, Byron Bischoff, Pete Walsh, Chris Newell, Piper Wolters, Tanmay Gupta, Kuo-Hao Zeng, Jon Borchardt, Dirk Groeneveld, Jen Dumas, Crystal Nam, Sophie Lebrecht, Caitlin Wittlif, Carissa Schoenick, Oscar Michel, Ranjay Krishna, Luca Weihs, Noah A. Smith, Hannaneh Hajishirzi, Ross B. Girshick, Ali Farhadi, Aniruddha Kembhavi: Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Multimodal Models. CoRR abs/2409.17146 (2024)
- 2023
- [c26] Yue Yang, Artemis Panagopoulou, Shenghao Zhou, Daniel Jin, Chris Callison-Burch, Mark Yatskar: Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification. CVPR 2023: 19187-19197
- [c25] Yuewei Yuan, Chaitanya Malaviya, Mark Yatskar: AmbiCoref: Evaluating Human and Model Sensitivity to Ambiguous Coreference. EACL (Findings) 2023: 993-1000
- [i28] Yuewei Yuan, Chaitanya Malaviya, Mark Yatskar: AmbiCoref: Evaluating Human and Model Sensitivity to Ambiguous Coreference. CoRR abs/2302.00762 (2023)
- [i27] Xingyu Fu, Ben Zhou, Sihao Chen, Mark Yatskar, Dan Roth: Interpretable by Design Visual Question Answering. CoRR abs/2305.14882 (2023)
- [i26] Chaitanya Malaviya, Subin Lee, Sihao Chen, Elizabeth Sieber, Mark Yatskar, Dan Roth: ExpertQA: Expert-Curated Questions and Attributed Answers. CoRR abs/2309.07852 (2023)
- [i25] Josh Magnus Ludan, Qing Lyu, Yue Yang, Liam Dugan, Mark Yatskar, Chris Callison-Burch: Interpretable-by-Design Text Classification with Iteratively Generated Concept Bottleneck. CoRR abs/2310.19660 (2023)
- [i24] Chaitanya Malaviya, Subin Lee, Dan Roth, Mark Yatskar: Pachinko: Patching Interpretable QA Models through Natural Language Feedback. CoRR abs/2311.09558 (2023)
- [i23] Yue Yang, Fan-Yun Sun, Luca Weihs, Eli VanderBilt, Alvaro Herrasti, Winson Han, Jiajun Wu, Nick Haber, Ranjay Krishna, Lingjie Liu, Chris Callison-Burch, Mark Yatskar, Aniruddha Kembhavi, Christopher Clark: Holodeck: Language Guided Generation of 3D Embodied AI Environments. CoRR abs/2312.09067 (2023)
- 2022
- [c24] Yue Yang, Artemis Panagopoulou, Marianna Apidianaki, Mark Yatskar, Chris Callison-Burch: Visualizing the Obvious: A Concreteness-based Ensemble Model for Noun Property Prediction. EMNLP (Findings) 2022: 638-655
- [c23] Chaitanya Malaviya, Sudeep Bhatia, Mark Yatskar: Cascading Biases: Investigating the Effect of Heuristic Annotation Strategies on Data and Models. EMNLP 2022: 6525-6540
- [i22] Yue Yang, Artemis Panagopoulou, Marianna Apidianaki, Mark Yatskar, Chris Callison-Burch: Visualizing the Obvious: A Concreteness-based Ensemble Model for Noun Property Prediction. CoRR abs/2210.12905 (2022)
- [i21] Chaitanya Malaviya, Sudeep Bhatia, Mark Yatskar: Cascading Biases: Investigating the Effect of Heuristic Annotation Strategies on Data and Models. CoRR abs/2210.13439 (2022)
- [i20] Yue Yang, Artemis Panagopoulou, Shenghao Zhou, Daniel Jin, Chris Callison-Burch, Mark Yatskar: Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification. CoRR abs/2211.11158 (2022)
- 2021
- [c22] Arka Sadhu, Tanmay Gupta, Mark Yatskar, Ram Nevatia, Aniruddha Kembhavi: Visual Semantic Role Labeling for Video Understanding. CVPR 2021: 5589-5600
- [c21] Christopher Clark, Jordi Salvador, Dustin Schwenk, Derrick Bonafilia, Mark Yatskar, Eric Kolve, Alvaro Herrasti, Jonghyun Choi, Sachin Mehta, Sam Skjonsberg, Carissa Schoenick, Aaron Sarnat, Hannaneh Hajishirzi, Aniruddha Kembhavi, Oren Etzioni, Ali Farhadi: Iconary: A Pictionary-Based Game for Testing Multimodal Communication with Drawings and Text. EMNLP (1) 2021: 1864-1886
- [c20] Yue Yang, Artemis Panagopoulou, Qing Lyu, Li Zhang, Mark Yatskar, Chris Callison-Burch: Visual Goal-Step Inference using wikiHow. EMNLP (1) 2021: 2167-2179
- [i19] Arka Sadhu, Tanmay Gupta, Mark Yatskar, Ram Nevatia, Aniruddha Kembhavi: Visual Semantic Role Labeling for Video Understanding. CoRR abs/2104.00990 (2021)
- [i18] Yue Yang, Artemis Panagopoulou, Qing Lyu, Li Zhang, Mark Yatskar, Chris Callison-Burch: Visual Goal-Step Inference using wikiHow. CoRR abs/2104.05845 (2021)
- [i17] Yue Yang, Joongwon Kim, Artemis Panagopoulou, Mark Yatskar, Chris Callison-Burch: Induce, Edit, Retrieve: Language Grounded Multimodal Schema for Instructional Video Retrieval. CoRR abs/2111.09276 (2021)
- [i16] Christopher Clark, Jordi Salvador, Dustin Schwenk, Derrick Bonafilia, Mark Yatskar, Eric Kolve, Alvaro Herrasti, Jonghyun Choi, Sachin Mehta, Sam Skjonsberg, Carissa Schoenick, Aaron Sarnat, Hannaneh Hajishirzi, Aniruddha Kembhavi, Oren Etzioni, Ali Farhadi: Iconary: A Pictionary-Based Game for Testing Multimodal Communication with Drawings and Text. CoRR abs/2112.00800 (2021)
- 2020
- [c19] Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang: What Does BERT with Vision Look At? ACL 2020: 5265-5275
- [c18] Matt Deitke, Winson Han, Alvaro Herrasti, Aniruddha Kembhavi, Eric Kolve, Roozbeh Mottaghi, Jordi Salvador, Dustin Schwenk, Eli VanderBilt, Matthew Wallingford, Luca Weihs, Mark Yatskar, Ali Farhadi: RoboTHOR: An Open Simulation-to-Real Embodied AI Platform. CVPR 2020: 3161-3171
- [c17] Sarah M. Pratt, Mark Yatskar, Luca Weihs, Ali Farhadi, Aniruddha Kembhavi: Grounded Situation Recognition. ECCV (4) 2020: 314-332
- [c16] Christopher Clark, Mark Yatskar, Luke Zettlemoyer: Learning to Model and Ignore Dataset Bias with Mixed Capacity Ensembles. EMNLP (Findings) 2020: 3031-3045
- [i15] Sarah M. Pratt, Mark Yatskar, Luca Weihs, Ali Farhadi, Aniruddha Kembhavi: Grounded Situation Recognition. CoRR abs/2003.12058 (2020)
- [i14] Matt Deitke, Winson Han, Alvaro Herrasti, Aniruddha Kembhavi, Eric Kolve, Roozbeh Mottaghi, Jordi Salvador, Dustin Schwenk, Eli VanderBilt, Matthew Wallingford, Luca Weihs, Mark Yatskar, Ali Farhadi: RoboTHOR: An Open Simulation-to-Real Embodied AI Platform. CoRR abs/2004.06799 (2020)
- [i13] Christopher Clark, Mark Yatskar, Luke Zettlemoyer: Learning to Model and Ignore Dataset Bias with Mixed Capacity Ensembles. CoRR abs/2011.03856 (2020)
2010 – 2019
- 2019
- [c15] Christopher Clark, Mark Yatskar, Luke Zettlemoyer: Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases. EMNLP/IJCNLP (1) 2019: 4067-4080
- [c14] Tianlu Wang, Jieyu Zhao, Mark Yatskar, Kai-Wei Chang, Vicente Ordonez: Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations. ICCV 2019: 5309-5318
- [c13] Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, Kai-Wei Chang: Gender Bias in Contextualized Word Embeddings. NAACL-HLT (1) 2019: 629-634
- [c12] Mark Yatskar: A Qualitative Comparison of CoQA, SQuAD 2.0 and QuAC. NAACL-HLT (1) 2019: 2318-2323
- [i12] Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, Kai-Wei Chang: Gender Bias in Contextualized Word Embeddings. CoRR abs/1904.03310 (2019)
- [i11] Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang: VisualBERT: A Simple and Performant Baseline for Vision and Language. CoRR abs/1908.03557 (2019)
- [i10] Christopher Clark, Mark Yatskar, Luke Zettlemoyer: Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases. CoRR abs/1909.03683 (2019)
- 2018
- [c11] Rowan Zellers, Mark Yatskar, Sam Thomson, Yejin Choi: Neural Motifs: Scene Graph Parsing With Global Context. CVPR 2018: 5831-5840
- [c10] Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, Luke Zettlemoyer: QuAC: Question Answering in Context. EMNLP 2018: 2174-2184
- [c9] Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, Kai-Wei Chang: Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods. NAACL-HLT (2) 2018: 15-20
- [i9] Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, Kai-Wei Chang: Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods. CoRR abs/1804.06876 (2018)
- [i8] Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, Luke Zettlemoyer: QuAC: Question Answering in Context. CoRR abs/1808.07036 (2018)
- [i7] Mark Yatskar: A Qualitative Comparison of CoQA, SQuAD 2.0 and QuAC. CoRR abs/1809.10735 (2018)
- [i6] Tianlu Wang, Jieyu Zhao, Mark Yatskar, Kai-Wei Chang, Vicente Ordonez: Adversarial Removal of Gender from Deep Image Representations. CoRR abs/1811.08489 (2018)
- 2017
- [c8] Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, Luke Zettlemoyer: Neural AMR: Sequence-to-Sequence Models for Parsing and Generation. ACL (1) 2017: 146-157
- [c7] Mark Yatskar, Vicente Ordonez, Luke Zettlemoyer, Ali Farhadi: Commonly Uncommon: Semantic Sparsity in Situation Recognition. CVPR 2017: 6335-6344
- [c6] Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, Kai-Wei Chang: Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints. EMNLP 2017: 2979-2989
- [i5] Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, Luke Zettlemoyer: Neural AMR: Sequence-to-Sequence Models for Parsing and Generation. CoRR abs/1704.08381 (2017)
- [i4] Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, Kai-Wei Chang: Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints. CoRR abs/1707.09457 (2017)
- [i3] Rowan Zellers, Mark Yatskar, Sam Thomson, Yejin Choi: Neural Motifs: Scene Graph Parsing with Global Context. CoRR abs/1711.06640 (2017)
- 2016
- [c5] Mark Yatskar, Luke Zettlemoyer, Ali Farhadi: Situation Recognition: Visual Semantic Role Labeling for Image Understanding. CVPR 2016: 5534-5542
- [c4] Mark Yatskar, Vicente Ordonez, Ali Farhadi: Stating the Obvious: Extracting Visual Common Sense Knowledge. HLT-NAACL 2016: 193-198
- [i2] Mark Yatskar, Vicente Ordonez, Luke Zettlemoyer, Ali Farhadi: Commonly Uncommon: Semantic Sparsity in Situation Recognition. CoRR abs/1612.00901 (2016)
- 2014
- [c3] Mark Yatskar, Michel Galley, Lucy Vanderwende, Luke Zettlemoyer: See No Evil, Say No Evil: Description Generation from Densely Labeled Images. *SEM@COLING 2014: 110-120
- 2013
- [c2] Mark Yatskar, Svitlana Volkova, Asli Celikyilmaz, Bill Dolan, Luke Zettlemoyer: Learning to Relate Literal and Sentimental Descriptions of Visual Properties. HLT-NAACL 2013: 416-425
- 2010
- [c1] Mark Yatskar, Bo Pang, Cristian Danescu-Niculescu-Mizil, Lillian Lee: For the sake of simplicity: Unsupervised extraction of lexical simplifications from Wikipedia. HLT-NAACL 2010: 365-368
- [i1] Mark Yatskar, Bo Pang, Cristian Danescu-Niculescu-Mizil, Lillian Lee: For the sake of simplicity: Unsupervised extraction of lexical simplifications from Wikipedia. CoRR abs/1008.1986 (2010)
last updated on 2024-11-18 20:45 CET by the dblp team
all metadata released as open data under CC0 1.0 license