Soravit Changpinyo
2020 – today
- 2024
- [c24]Xi Chen, Josip Djolonga, Piotr Padlewski, Basil Mustafa, Soravit Changpinyo, Jialin Wu, Carlos Riquelme Ruiz, Sebastian Goodman, Xiao Wang, Yi Tay, Siamak Shakeri, Mostafa Dehghani, Daniel Salz, Mario Lucic, Michael Tschannen, Arsha Nagrani, Hexiang Hu, Mandar Joshi, Bo Pang, Ceslee Montgomery, Paulina Pietrzyk, Marvin Ritter, A. J. Piergiovanni, Matthias Minderer, Filip Pavetic, Austin Waters, Gang Li, Ibrahim Alabdulmohsin, Lucas Beyer, Julien Amelot, Kenton Lee, Andreas Peter Steiner, Yang Li, Daniel Keysers, Anurag Arnab, Yuanzhong Xu, Keran Rong, Alexander Kolesnikov, Mojtaba Seyedhosseini, Anelia Angelova, Xiaohua Zhai, Neil Houlsby, Radu Soricut:
On Scaling Up a Multilingual Vision and Language Model. CVPR 2024: 14432-14444
- 2023
- [c23]Paul Voigtlaender, Soravit Changpinyo, Jordi Pont-Tuset, Radu Soricut, Vittorio Ferrari:
Connecting Vision and Language with Video Localized Narratives. CVPR 2023: 2461-2471
- [c22]Arjun R. Akula, Brendan Driscoll, Pradyumna Narayana, Soravit Changpinyo, Zhiwei Jia, Suyash Damle, Garima Pruthi, Sugato Basu, Leonidas J. Guibas, William T. Freeman, Yuanzhen Li, Varun Jampani:
MetaCLUE: Towards Comprehensive Visual Metaphors Research. CVPR 2023: 23201-23211
- [c21]Soravit Changpinyo, Linting Xue, Michal Yarom, Ashish V. Thapliyal, Idan Szpektor, Julien Amelot, Xi Chen, Radu Soricut:
MaXM: Towards Multilingual Visual Question Answering. EMNLP (Findings) 2023: 2667-2682
- [c20]Yang Chen, Hexiang Hu, Yi Luan, Haitian Sun, Soravit Changpinyo, Alan Ritter, Ming-Wei Chang:
Can Pre-trained Vision and Language Models Answer Visual Information-Seeking Questions? EMNLP 2023: 14948-14968
- [c19]Jihyung Kil, Soravit Changpinyo, Xi Chen, Hexiang Hu, Sebastian Goodman, Wei-Lun Chao, Radu Soricut:
PreSTU: Pre-Training for Scene-Text Understanding. ICCV 2023: 15224-15234
- [c18]Xi Chen, Xiao Wang, Soravit Changpinyo, A. J. Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, Alexander Kolesnikov, Joan Puigcerver, Nan Ding, Keran Rong, Hassan Akbari, Gaurav Mishra, Linting Xue, Ashish V. Thapliyal, James Bradbury, Weicheng Kuo:
PaLI: A Jointly-Scaled Multilingual Language-Image Model. ICLR 2023
- [c17]Michal Yarom, Yonatan Bitton, Soravit Changpinyo, Roee Aharoni, Jonathan Herzig, Oran Lang, Eran Ofek, Idan Szpektor:
What You See is What You Read? Improving Text-Image Alignment Evaluation. NeurIPS 2023
- [i23]Paul Voigtlaender, Soravit Changpinyo, Jordi Pont-Tuset, Radu Soricut, Vittorio Ferrari:
Connecting Vision and Language with Video Localized Narratives. CoRR abs/2302.11217 (2023)
- [i22]Yang Chen, Hexiang Hu, Yi Luan, Haitian Sun, Soravit Changpinyo, Alan Ritter, Ming-Wei Chang:
Can Pre-trained Vision and Language Models Answer Visual Information-Seeking Questions? CoRR abs/2302.11713 (2023)
- [i21]Michal Yarom, Yonatan Bitton, Soravit Changpinyo, Roee Aharoni, Jonathan Herzig, Oran Lang, Eran Ofek, Idan Szpektor:
What You See is What You Read? Improving Text-Image Alignment Evaluation. CoRR abs/2305.10400 (2023)
- [i20]Xi Chen, Josip Djolonga, Piotr Padlewski, Basil Mustafa, Soravit Changpinyo, Jialin Wu, Carlos Riquelme Ruiz, Sebastian Goodman, Xiao Wang, Yi Tay, Siamak Shakeri, Mostafa Dehghani, Daniel Salz, Mario Lucic, Michael Tschannen, Arsha Nagrani, Hexiang Hu, Mandar Joshi, Bo Pang, Ceslee Montgomery, Paulina Pietrzyk, Marvin Ritter, A. J. Piergiovanni, Matthias Minderer, Filip Pavetic, Austin Waters, Gang Li, Ibrahim Alabdulmohsin, Lucas Beyer, Julien Amelot, Kenton Lee, Andreas Peter Steiner, Yang Li, Daniel Keysers, Anurag Arnab, Yuanzhong Xu, Keran Rong, Alexander Kolesnikov, Mojtaba Seyedhosseini, Anelia Angelova, Xiaohua Zhai, Neil Houlsby, Radu Soricut:
PaLI-X: On Scaling up a Multilingual Vision and Language Model. CoRR abs/2305.18565 (2023)
- 2022
- [j3]Yu-Chuan Su, Soravit Changpinyo, Xiangning Chen, Sathish Thoppay, Cho-Jui Hsieh, Lior Shapira, Radu Soricut, Hartwig Adam, Matthew Brown, Ming-Hsuan Yang, Boqing Gong:
2.5D visual relationship detection. Comput. Vis. Image Underst. 224: 103557 (2022)
- [c16]Khyathi Raghavi Chandu, Piyush Sharma, Soravit Changpinyo, Ashish V. Thapliyal, Radu Soricut:
Denoising Large-Scale Image Captioning from Alt-text Data Using Content Selection Models. COLING 2022: 6089-6104
- [c15]Nan Ding, Xi Chen, Tomer Levinboim, Soravit Changpinyo, Radu Soricut:
PACTran: PAC-Bayesian Metrics for Estimating the Transferability of Pretrained Models to Classification Tasks. ECCV (34) 2022: 252-268
- [c14]Soravit Changpinyo, Doron Kukliansky, Idan Szpektor, Xi Chen, Nan Ding, Radu Soricut:
All You May Need for VQA are Image Captions. NAACL-HLT 2022: 1947-1963
- [i19]Soravit Changpinyo, Doron Kukliansky, Idan Szpektor, Xi Chen, Nan Ding, Radu Soricut:
All You May Need for VQA are Image Captions. CoRR abs/2205.01883 (2022)
- [i18]Soravit Changpinyo, Linting Xue, Idan Szpektor, Ashish V. Thapliyal, Julien Amelot, Xi Chen, Radu Soricut:
Towards Multi-Lingual Visual Question Answering. CoRR abs/2209.05401 (2022)
- [i17]Jihyung Kil, Soravit Changpinyo, Xi Chen, Hexiang Hu, Sebastian Goodman, Wei-Lun Chao, Radu Soricut:
PreSTU: Pre-Training for Scene-Text Understanding. CoRR abs/2209.05534 (2022)
- [i16]Xi Chen, Xiao Wang, Soravit Changpinyo, A. J. Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, Alexander Kolesnikov, Joan Puigcerver, Nan Ding, Keran Rong, Hassan Akbari, Gaurav Mishra, Linting Xue, Ashish V. Thapliyal, James Bradbury, Weicheng Kuo, Mojtaba Seyedhosseini, Chao Jia, Burcu Karagol Ayan, Carlos Riquelme, Andreas Steiner, Anelia Angelova, Xiaohua Zhai, Neil Houlsby, Radu Soricut:
PaLI: A Jointly-Scaled Multilingual Language-Image Model. CoRR abs/2209.06794 (2022)
- [i15]Arjun R. Akula, Brendan Driscoll, Pradyumna Narayana, Soravit Changpinyo, Zhiwei Jia, Suyash Damle, Garima Pruthi, Sugato Basu, Leonidas J. Guibas, William T. Freeman, Yuanzhen Li, Varun Jampani:
MetaCLUE: Towards Comprehensive Visual Metaphors Research. CoRR abs/2212.09898 (2022)
- 2021
- [c13]Soravit Changpinyo, Piyush Sharma, Nan Ding, Radu Soricut:
Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts. CVPR 2021: 3558-3568
- [c12]Arjun R. Akula, Soravit Changpinyo, Boqing Gong, Piyush Sharma, Song-Chun Zhu, Radu Soricut:
CrossVQA: Scalably Generating Benchmarks for Systematically Testing VQA Generalization. EMNLP (1) 2021: 2148-2166
- [c11]Cheng Zhang, Tai-Yu Pan, Yandong Li, Hexiang Hu, Dong Xuan, Soravit Changpinyo, Boqing Gong, Wei-Lun Chao:
MosaicOS: A Simple and Effective Use of Object-Centric Images for Long-Tailed Object Detection. ICCV 2021: 407-417
- [c10]Soravit Changpinyo, Jordi Pont-Tuset, Vittorio Ferrari, Radu Soricut:
Telling the What while Pointing to the Where: Multimodal Queries for Image Retrieval. ICCV 2021: 12116-12126
- [c9]Tai-Yu Pan, Cheng Zhang, Yandong Li, Hexiang Hu, Dong Xuan, Soravit Changpinyo, Boqing Gong, Wei-Lun Chao:
On Model Calibration for Long-Tailed Object Detection and Instance Segmentation. NeurIPS 2021: 2529-2542
- [c8]Arjun R. Akula, Varun Jampani, Soravit Changpinyo, Song-Chun Zhu:
Robust Visual Reasoning via Language Guided Neural Module Networks. NeurIPS 2021: 11041-11053
- [i14]Soravit Changpinyo, Jordi Pont-Tuset, Vittorio Ferrari, Radu Soricut:
Telling the What while Pointing the Where: Fine-grained Mouse Trace and Language Supervision for Improved Image Retrieval. CoRR abs/2102.04980 (2021)
- [i13]Cheng Zhang, Tai-Yu Pan, Yandong Li, Hexiang Hu, Dong Xuan, Soravit Changpinyo, Boqing Gong, Wei-Lun Chao:
A Simple and Effective Use of Object-Centric Images for Long-Tailed Object Detection. CoRR abs/2102.08884 (2021)
- [i12]Soravit Changpinyo, Piyush Sharma, Nan Ding, Radu Soricut:
Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts. CoRR abs/2102.08981 (2021)
- [i11]Yu-Chuan Su, Soravit Changpinyo, Xiangning Chen, Sathish Thoppay, Cho-Jui Hsieh, Lior Shapira, Radu Soricut, Hartwig Adam, Matthew Brown, Ming-Hsuan Yang, Boqing Gong:
2.5D Visual Relationship Detection. CoRR abs/2104.12727 (2021)
- [i10]Tai-Yu Pan, Cheng Zhang, Yandong Li, Hexiang Hu, Dong Xuan, Soravit Changpinyo, Boqing Gong, Wei-Lun Chao:
On Model Calibration for Long-Tailed Object Detection and Instance Segmentation. CoRR abs/2107.02170 (2021)
- 2020
- [j2]Soravit Changpinyo, Wei-Lun Chao, Boqing Gong, Fei Sha:
Classifier and Exemplar Synthesis for Zero-Shot Learning. Int. J. Comput. Vis. 128(1): 166-201 (2020)
- [j1]Ben McCamish, Vahid Ghadakchi, Arash Termehchy, Behrouz Touri, Eduardo Cotilla Sanchez, Liang Huang, Soravit Changpinyo:
A Game-theoretic Approach to Data Interaction. ACM Trans. Database Syst. 45(1): 1:1-1:44 (2020)
- [c7]Jordi Pont-Tuset, Jasper R. R. Uijlings, Soravit Changpinyo, Radu Soricut, Vittorio Ferrari:
Connecting Vision and Language with Localized Narratives. ECCV (5) 2020: 647-664
- [i9]Khyathi Raghavi Chandu, Piyush Sharma, Soravit Changpinyo, Ashish V. Thapliyal, Radu Soricut:
Weakly Supervised Content Selection for Improved Image Captioning. CoRR abs/2009.05175 (2020)
2010 – 2019
- 2019
- [c6]Soravit Changpinyo, Bo Pang, Piyush Sharma, Radu Soricut:
Decoupled Box Proposal and Featurization with Ultrafine-Grained Semantic Labels Improve Image Captioning and Visual Question Answering. EMNLP/IJCNLP (1) 2019: 1468-1474
- [i8]Soravit Changpinyo, Bo Pang, Piyush Sharma, Radu Soricut:
Decoupled Box Proposal and Featurization with Ultrafine-Grained Semantic Labels Improve Image Captioning and Visual Question Answering. CoRR abs/1909.02097 (2019)
- [i7]Jordi Pont-Tuset, Jasper R. R. Uijlings, Soravit Changpinyo, Radu Soricut, Vittorio Ferrari:
Connecting Vision and Language with Localized Narratives. CoRR abs/1912.03098 (2019)
- 2018
- [c5]Soravit Changpinyo, Hexiang Hu, Fei Sha:
Multi-Task Learning for Sequence Tagging: An Empirical Study. COLING 2018: 2965-2977
- [i6]Soravit Changpinyo, Hexiang Hu, Fei Sha:
Multi-Task Learning for Sequence Tagging: An Empirical Study. CoRR abs/1808.04151 (2018)
- [i5]Soravit Changpinyo, Wei-Lun Chao, Boqing Gong, Fei Sha:
Classifier and Exemplar Synthesis for Zero-Shot Learning. CoRR abs/1812.06423 (2018)
- 2017
- [c4]Soravit Changpinyo, Wei-Lun Chao, Fei Sha:
Predicting Visual Exemplars of Unseen Classes for Zero-Shot Learning. ICCV 2017: 3496-3505
- [i4]Soravit Changpinyo, Mark Sandler, Andrey Zhmoginov:
The Power of Sparsity in Convolutional Neural Networks. CoRR abs/1702.06257 (2017)
- 2016
- [c3]Soravit Changpinyo, Wei-Lun Chao, Boqing Gong, Fei Sha:
Synthesized Classifiers for Zero-Shot Learning. CVPR 2016: 5327-5336
- [c2]Wei-Lun Chao, Soravit Changpinyo, Boqing Gong, Fei Sha:
An Empirical Study and Analysis of Generalized Zero-Shot Learning for Object Recognition in the Wild. ECCV (2) 2016: 52-68
- [i3]Soravit Changpinyo, Wei-Lun Chao, Boqing Gong, Fei Sha:
Synthesized Classifiers for Zero-Shot Learning. CoRR abs/1603.00550 (2016)
- [i2]Wei-Lun Chao, Soravit Changpinyo, Boqing Gong, Fei Sha:
An Empirical Study and Analysis of Generalized Zero-Shot Learning for Object Recognition in the Wild. CoRR abs/1605.04253 (2016)
- [i1]Soravit Changpinyo, Wei-Lun Chao, Fei Sha:
Predicting Visual Exemplars of Unseen Classes for Zero-Shot Learning. CoRR abs/1605.08151 (2016)
- 2013
- [c1]Soravit Changpinyo, Kuan Liu, Fei Sha:
Similarity Component Analysis. NIPS 2013: 1511-1519
last updated on 2024-10-08 21:33 CEST by the dblp team
all metadata released as open data under CC0 1.0 license