Yonglong Tian
2020 – today
- 2024
  - [c31] Lijie Fan, Kaifeng Chen, Dilip Krishnan, Dina Katabi, Phillip Isola, Yonglong Tian: Scaling Laws of Synthetic Images for Model Training ... for Now. CVPR 2024: 7382-7392
  - [c30] Yonglong Tian, Lijie Fan, Kaifeng Chen, Dina Katabi, Dilip Krishnan, Phillip Isola: Learning Vision from Models Rivals Learning Vision from Data. CVPR 2024: 15887-15898
  - [c29] Tianhong Li, Sangnie Bhardwaj, Yonglong Tian, Han Zhang, Jarred Barber, Dina Katabi, Guillaume Lajoie, Huiwen Chang, Dilip Krishnan: Leveraging Unpaired Data for Vision-Language Generative Models via Cycle Consistency. ICLR 2024
  - [c28] Nate Gillman, Michael Freeman, Daksh Aggarwal, Chia-Hong Hsu, Calvin Luo, Yonglong Tian, Chen Sun: Self-Correcting Self-Consuming Loops for Generative Model Training. ICML 2024
  - [i29] Jiawei Yang, Katie Z. Luo, Jiefeng Li, Kilian Q. Weinberger, Yonglong Tian, Yue Wang: Denoising Vision Transformers. CoRR abs/2401.02957 (2024)
  - [i28] Nate Gillman, Michael Freeman, Daksh Aggarwal, Chia-Hong Hsu, Calvin Luo, Yonglong Tian, Chen Sun: Self-Correcting Self-Consuming Loops for Generative Model Training. CoRR abs/2402.07087 (2024)
  - [i27] Tianhong Li, Yonglong Tian, He Li, Mingyang Deng, Kaiming He: Autoregressive Image Generation without Vector Quantization. CoRR abs/2406.11838 (2024)
- 2023
  - [b1] Yonglong Tian: Towards General-purpose Vision via Multiview Contrastive Learning. MIT, USA, 2023
  - [c27] Tianyu Hua, Yonglong Tian, Sucheng Ren, Michalis Raptis, Hang Zhao, Leonid Sigal: Self-supervision through Random Segments with Autoregressive Coding (RandSAC). ICLR 2023
  - [c26] Lirui Wang, Kaiqing Zhang, Yunzhu Li, Yonglong Tian, Russ Tedrake: Does Learning from Decentralized Non-IID Unlabeled Data Benefit from Self Supervision? ICLR 2023
  - [c25] Yilun Xu, Ziming Liu, Yonglong Tian, Shangyuan Tong, Max Tegmark, Tommi S. Jaakkola: PFGM++: Unlocking the Potential of Physics-Inspired Generative Models. ICML 2023: 38566-38591
  - [c24] Lijie Fan, Dilip Krishnan, Phillip Isola, Dina Katabi, Yonglong Tian: Improving CLIP Training with Language Rewrites. NeurIPS 2023
  - [c23] Yonglong Tian, Lijie Fan, Phillip Isola, Huiwen Chang, Dilip Krishnan: StableRep: Synthetic Images from Text-to-Image Models Make Strong Visual Representation Learners. NeurIPS 2023
  - [c22] Yilun Xu, Mingyang Deng, Xiang Cheng, Yonglong Tian, Ziming Liu, Tommi S. Jaakkola: Restart Sampling for Improving Generative Processes. NeurIPS 2023
  - [c21] Tianhong Li, Lijie Fan, Yuan Yuan, Hao He, Yonglong Tian, Rogério Feris, Piotr Indyk, Dina Katabi: Addressing Feature Suppression in Unsupervised Visual Representations. WACV 2023: 1411-1420
  - [i26] Yilun Xu, Ziming Liu, Yonglong Tian, Shangyuan Tong, Max Tegmark, Tommi S. Jaakkola: PFGM++: Unlocking the Potential of Physics-Inspired Generative Models. CoRR abs/2302.04265 (2023)
  - [i25] Lijie Fan, Dilip Krishnan, Phillip Isola, Dina Katabi, Yonglong Tian: Improving CLIP Training with Language Rewrites. CoRR abs/2305.20088 (2023)
  - [i24] Yonglong Tian, Lijie Fan, Phillip Isola, Huiwen Chang, Dilip Krishnan: StableRep: Synthetic Images from Text-to-Image Models Make Strong Visual Representation Learners. CoRR abs/2306.00984 (2023)
  - [i23] Yilun Xu, Mingyang Deng, Xiang Cheng, Yonglong Tian, Ziming Liu, Tommi S. Jaakkola: Restart Sampling for Improving Generative Processes. CoRR abs/2306.14878 (2023)
  - [i22] Tianhong Li, Sangnie Bhardwaj, Yonglong Tian, Han Zhang, Jarred Barber, Dina Katabi, Guillaume Lajoie, Huiwen Chang, Dilip Krishnan: Leveraging Unpaired Data for Vision-Language Generative Models via Cycle Consistency. CoRR abs/2310.03734 (2023)
  - [i21] Lijie Fan, Kaifeng Chen, Dilip Krishnan, Dina Katabi, Phillip Isola, Yonglong Tian: Scaling Laws of Synthetic Images for Model Training ... for Now. CoRR abs/2312.04567 (2023)
  - [i20] Yonglong Tian, Lijie Fan, Kaifeng Chen, Dina Katabi, Dilip Krishnan, Phillip Isola: Learning Vision from Models Rivals Learning Vision from Data. CoRR abs/2312.17742 (2023)
- 2022
  - [c20] Lu Mi, Hao Wang, Yonglong Tian, Hao He, Nir Shavit: Training-Free Uncertainty Estimation for Dense Regression: Sensitivity as a Surrogate. AAAI 2022: 10042-10050
  - [c19] Sucheng Ren, Zhengqi Gao, Tianyu Hua, Zihui Xue, Yonglong Tian, Shengfeng He, Hang Zhao: Co-advise: Cross Inductive Bias Distillation. CVPR 2022: 16752-16761
  - [c18] Ali Jahanian, Xavier Puig, Yonglong Tian, Phillip Isola: Generative Models as a Data Source for Multiview Representation Learning. ICLR 2022
  - [c17] Boyang Deng, Sumith Kulal, Zhengyang Dong, Congyue Deng, Yonglong Tian, Jiajun Wu: Unsupervised Learning of Shape Programs with Repeatable Implicit Parts. NeurIPS 2022
  - [i19] Tianyu Hua, Yonglong Tian, Sucheng Ren, Hang Zhao, Leonid Sigal: Self-supervision through Random Segments with Autoregressive Coding (RandSAC). CoRR abs/2203.12054 (2022)
  - [i18] Lirui Wang, Kaiqing Zhang, Yunzhu Li, Yonglong Tian, Russ Tedrake: Does Decentralized Learning with Non-IID Unlabeled Data Benefit from Self Supervision? CoRR abs/2210.10947 (2022)
- 2021
  - [c16] Chen Sun, Arsha Nagrani, Yonglong Tian, Cordelia Schmid: Composable Augmentation Encoding for Video Representation Learning. ICCV 2021: 8814-8824
  - [c15] Yonglong Tian, Olivier J. Hénaff, Aäron van den Oord: Divide and Contrast: Self-supervised Learning from Uncurated Data. ICCV 2021: 10043-10054
  - [i17] Chen Sun, Arsha Nagrani, Yonglong Tian, Cordelia Schmid: Composable Augmentation Encoding for Video Representation Learning. CoRR abs/2104.00616 (2021)
  - [i16] Yonglong Tian, Olivier J. Hénaff, Aäron van den Oord: Divide and Contrast: Self-supervised Learning from Uncurated Data. CoRR abs/2105.08054 (2021)
  - [i15] Ali Jahanian, Xavier Puig, Yonglong Tian, Phillip Isola: Generative Models as a Data Source for Multiview Representation Learning. CoRR abs/2106.05258 (2021)
  - [i14] Jindong Gu, Wei Liu, Yonglong Tian: Simple Distillation Baselines for Improving Small Self-supervised Models. CoRR abs/2106.11304 (2021)
  - [i13] Sucheng Ren, Zhengqi Gao, Tianyu Hua, Zihui Xue, Yonglong Tian, Shengfeng He, Hang Zhao: Co-advise: Cross Inductive Bias Distillation. CoRR abs/2106.12378 (2021)
- 2020
  - [c14] Yonglong Tian, Yue Wang, Dilip Krishnan, Joshua B. Tenenbaum, Phillip Isola: Rethinking Few-Shot Image Classification: A Good Embedding Is All You Need? ECCV (14) 2020: 266-282
  - [c13] Yonglong Tian, Dilip Krishnan, Phillip Isola: Contrastive Multiview Coding. ECCV (11) 2020: 776-794
  - [c12] Yonglong Tian, Dilip Krishnan, Phillip Isola: Contrastive Representation Distillation. ICLR 2020
  - [c11] Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, Dilip Krishnan: Supervised Contrastive Learning. NeurIPS 2020
  - [c10] Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, Phillip Isola: What Makes for Good Views for Contrastive Learning? NeurIPS 2020
  - [i12] Yonglong Tian, Yue Wang, Dilip Krishnan, Joshua B. Tenenbaum, Phillip Isola: Rethinking Few-Shot Image Classification: A Good Embedding Is All You Need? CoRR abs/2003.11539 (2020)
  - [i11] Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, Dilip Krishnan: Supervised Contrastive Learning. CoRR abs/2004.11362 (2020)
  - [i10] Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, Phillip Isola: What Makes for Good Views for Contrastive Learning? CoRR abs/2005.10243 (2020)
  - [i9] Tianhong Li, Lijie Fan, Yuan Yuan, Hao He, Yonglong Tian, Dina Katabi: Information-Preserving Contrastive Learning for Self-Supervised Representations. CoRR abs/2012.09962 (2020)
2010 – 2019
- 2019
  - [c9] Hao He, Hao Wang, Guang-He Lee, Yonglong Tian: ProbGAN: Towards Probabilistic GAN with Theoretical Guarantees. ICLR (Poster) 2019
  - [c8] Yonglong Tian, Andrew Luo, Xingyuan Sun, Kevin Ellis, William T. Freeman, Joshua B. Tenenbaum, Jiajun Wu: Learning to Infer and Execute 3D Shape Programs. ICLR (Poster) 2019
  - [i8] Yonglong Tian, Andrew Luo, Xingyuan Sun, Kevin Ellis, William T. Freeman, Joshua B. Tenenbaum, Jiajun Wu: Learning to Infer and Execute 3D Shape Programs. CoRR abs/1901.02875 (2019)
  - [i7] Yonglong Tian, Dilip Krishnan, Phillip Isola: Contrastive Multiview Coding. CoRR abs/1906.05849 (2019)
  - [i6] Lu Mi, Hao Wang, Yonglong Tian, Nir Shavit: Training-Free Uncertainty Estimation for Neural Networks. CoRR abs/1910.04858 (2019)
  - [i5] Yonglong Tian, Dilip Krishnan, Phillip Isola: Contrastive Representation Distillation. CoRR abs/1910.10699 (2019)
- 2018
  - [j2] Yonglong Tian, Guang-He Lee, Hao He, Chen-Yu Hsu, Dina Katabi: RF-Based Fall Monitoring Using Convolutional Neural Networks. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2(3): 137:1-137:24 (2018)
  - [c7] Mingmin Zhao, Tianhong Li, Mohammad Abu Alsheikh, Yonglong Tian, Hang Zhao, Antonio Torralba, Dina Katabi: Through-Wall Human Pose Estimation Using Radio Signals. CVPR 2018: 7356-7365
  - [c6] Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, Stefanie Jegelka: Representation Learning on Graphs with Jumping Knowledge Networks. ICML 2018: 5449-5458
  - [c5] Mingmin Zhao, Yonglong Tian, Hang Zhao, Mohammad Abu Alsheikh, Tianhong Li, Rumen Hristov, Zachary Kabelac, Dina Katabi, Antonio Torralba: RF-Based 3D Skeletons. SIGCOMM 2018: 267-281
  - [i4] Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, Stefanie Jegelka: Representation Learning on Graphs with Jumping Knowledge Networks. CoRR abs/1806.03536 (2018)
- 2017
  - [j1] Wanli Ouyang, Xingyu Zeng, Xiaogang Wang, Shi Qiu, Ping Luo, Yonglong Tian, Hongsheng Li, Shuo Yang, Zhe Wang, Hongyang Li, Kun Wang, Junjie Yan, Chen Change Loy, Xiaoou Tang: DeepID-Net: Object Detection with Deformable Part Based Convolutional Neural Networks. IEEE Trans. Pattern Anal. Mach. Intell. 39(7): 1320-1334 (2017)
- 2015
  - [c4] Wanli Ouyang, Xiaogang Wang, Xingyu Zeng, Shi Qiu, Ping Luo, Yonglong Tian, Hongsheng Li, Shuo Yang, Zhe Wang, Chen Change Loy, Xiaoou Tang: DeepID-Net: Deformable Deep Convolutional Neural Networks for Object Detection. CVPR 2015: 2403-2412
  - [c3] Yonglong Tian, Ping Luo, Xiaogang Wang, Xiaoou Tang: Pedestrian Detection Aided by Deep Learning Semantic Tasks. CVPR 2015: 5079-5087
  - [c2] Yonglong Tian, Ping Luo, Xiaogang Wang, Xiaoou Tang: Deep Learning Strong Parts for Pedestrian Detection. ICCV 2015: 1904-1912
- 2014
  - [c1] Ping Luo, Yonglong Tian, Xiaogang Wang, Xiaoou Tang: Switchable Deep Network for Pedestrian Detection. CVPR 2014: 899-906
  - [i3] Wanli Ouyang, Ping Luo, Xingyu Zeng, Shi Qiu, Yonglong Tian, Hongsheng Li, Shuo Yang, Zhe Wang, Yuanjun Xiong, Chen Qian, Zhenyao Zhu, Ruohui Wang, Chen Change Loy, Xiaogang Wang, Xiaoou Tang: DeepID-Net: Multi-stage and Deformable Deep Convolutional Neural Networks for Object Detection. CoRR abs/1409.3505 (2014)
  - [i2] Yonglong Tian, Ping Luo, Xiaogang Wang, Xiaoou Tang: Pedestrian Detection Aided by Deep Learning Semantic Tasks. CoRR abs/1412.0069 (2014)
  - [i1] Wanli Ouyang, Xiaogang Wang, Xingyu Zeng, Shi Qiu, Ping Luo, Yonglong Tian, Hongsheng Li, Shuo Yang, Zhe Wang, Chen Change Loy, Xiaoou Tang: DeepID-Net: Deformable Deep Convolutional Neural Networks for Object Detection. CoRR abs/1412.5661 (2014)
last updated on 2024-10-07 21:10 CEST by the dblp team
all metadata released as open data under CC0 1.0 license