Jun 7, 2022 · In this paper, we aim to leverage the abundant unlabeled data from a target domain to improve the performance of a pre-trained zero-shot classifier on downstream ...
Feb 1, 2023 · This paper introduces MUST (Masked Unsupervised Self-Training) to improve the performance of CLIP-like models in zero-shot recognition. Based on ...
This paper proposes Masked Unsupervised Self-Training (MUST), a new unsupervised adaptation method which leverages two different and complementary sources ...
Masked Unsupervised Self-training for Zero-shot Image Classification ... This is the PyTorch code of the MUST paper. The repository supports finetuning a CLIP ...
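A minimal, self-contained sketch of the two training signals the MUST snippets above describe: self-training on the model's own confident zero-shot pseudo-labels, plus a masked-image objective on the raw pixels. The tiny encoder, the 0.5 confidence threshold, and the pixel-level mask are illustrative stand-ins, not the repository's actual CLIP finetuning code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Stand-in for a CLIP-like image tower plus a reconstruction head."""
    def __init__(self, dim=64, n_pixels=3 * 32 * 32):
        super().__init__()
        self.encode = nn.Linear(n_pixels, dim)   # image encoder (stand-in)
        self.decode = nn.Linear(dim, n_pixels)   # reconstruction head (stand-in)

model = TinyEncoder()
text_embeds = F.normalize(torch.randn(10, 64), dim=-1)  # frozen class prompts
images = torch.rand(8, 3 * 32 * 32)                     # batch of unlabeled images

# Signal 1: pseudo-label confident zero-shot predictions, then train on them.
with torch.no_grad():
    z = F.normalize(model.encode(images), dim=-1)
    probs = (z @ text_embeds.t() / 0.07).softmax(dim=-1)  # temperature-scaled
    conf, pseudo = probs.max(dim=-1)
    keep = conf > 0.5                                     # confidence threshold

logits = F.normalize(model.encode(images), dim=-1) @ text_embeds.t() / 0.07
st_loss = F.cross_entropy(logits[keep], pseudo[keep]) if keep.any() else logits.sum() * 0

# Signal 2: mask part of the input and reconstruct it from the raw pixels.
mask = torch.rand_like(images) < 0.5
recon = model.decode(model.encode(images.masked_fill(mask, 0.0)))
mim_loss = F.mse_loss(recon[mask], images[mask])

(st_loss + mim_loss).backward()  # joint objective over both signals
```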
This paper presents contrastive-tuning, a simple method employing contrastive training to align image and text models while still taking advantage of their pre ...
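At its core, the alignment step that contrastive-tuning refers to is a symmetric contrastive loss between the image and text towers' embeddings. A minimal sketch follows; keeping the pre-trained image tower frozen is shown as one option, not the paper's exact recipe, and the embedding shapes are illustrative:

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(img_embeds, txt_embeds, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of matched image/text pairs."""
    img = F.normalize(img_embeds, dim=-1)
    txt = F.normalize(txt_embeds, dim=-1)
    logits = img @ txt.t() / temperature       # pairwise similarities
    targets = torch.arange(len(logits))        # i-th image matches i-th text
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2

# Stand-in embeddings; in practice these come from the two towers, and the
# pre-trained image tower's parameters can be kept frozen during tuning.
loss = contrastive_alignment_loss(torch.randn(16, 512), torch.randn(16, 512))
```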
We propose a Self Structural Semantic Alignment (S³A) framework to address the challenging Realistic Zero-Shot Classification problem.
UNITE first employs self-supervised pre-training to promote discriminative feature learning on target domain videos using a teacher-guided masked distillation ...
Our results emphasize the cooperation between self-training and our structural alignment approach. Our approach exhibits superiority over other prompt ...
Self-training is a simple semi-supervised learning approach: unlabelled examples that attract high-confidence predictions are labelled with their ...
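Read literally, that recipe is only a few lines: fit on the labelled set, pseudo-label the unlabelled pool, keep only predictions above a confidence threshold, and refit. A minimal sketch, assuming scikit-learn with NumPy arrays and an illustrative 0.9 threshold:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.9, rounds=3):
    """Grow the labelled set with high-confidence pseudo-labels, then refit."""
    X, y = X_lab.copy(), y_lab.copy()
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        probs = clf.predict_proba(X_unlab)
        keep = probs.max(axis=1) >= threshold        # high-confidence only
        if not keep.any():
            break
        pseudo = clf.classes_[probs.argmax(axis=1)]  # predicted labels
        X = np.vstack([X, X_unlab[keep]])            # absorb confident examples
        y = np.concatenate([y, pseudo[keep]])
        X_unlab = X_unlab[~keep]
        clf = LogisticRegression(max_iter=1000).fit(X, y)
    return clf
```

Each round shrinks the unlabelled pool, and the loop stops early once no prediction clears the threshold, so low-confidence examples are never promoted to training labels.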