A Survey on Self-supervised Learning: Algorithms, Applications, and Future Trends

J Gui, T Chen, J Zhang, Q Cao, Z Sun… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Deep supervised learning algorithms typically require a large volume of labeled data to
achieve satisfactory performance. However, the process of collecting and labeling such data …

Survey on self-supervised learning: auxiliary pretext tasks and contrastive learning methods in imaging

S Albelwi - Entropy, 2022 - mdpi.com
Although deep learning algorithms have achieved significant progress in a variety of
domains, they require costly annotations on huge datasets. Self-supervised learning (SSL) …

DiffCSE: Difference-based contrastive learning for sentence embeddings

YS Chuang, R Dangovski, H Luo, Y Zhang… - arXiv preprint arXiv …, 2022 - arxiv.org
We propose DiffCSE, an unsupervised contrastive learning framework for learning sentence
embeddings. DiffCSE learns sentence embeddings that are sensitive to the difference …
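
A minimal sketch of the in-batch contrastive objective that sentence-embedding methods of this kind build on (not the authors' code): two embeddings of the same sentence form the positive pair and the other sentences in the batch act as negatives. The temperature, batch size, and embedding dimension are illustrative assumptions; DiffCSE's additional difference-prediction objective is not shown here.

```python
# Hedged sketch: in-batch InfoNCE over sentence embeddings; not DiffCSE itself.
import numpy as np

def info_nce(z1, z2, temperature=0.05):
    """z1, z2: (batch, dim) embeddings of two views of the same sentences."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (batch, batch) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # positives lie on the diagonal

rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(8, 128)), rng.normal(size=(8, 128))
print(info_nce(z1, z2))
```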

Contrastive and non-contrastive self-supervised learning recover global and local spectral embedding methods

R Balestriero, Y LeCun - Advances in Neural Information …, 2022 - proceedings.neurips.cc
Self-Supervised Learning (SSL) surmises that inputs and pairwise positive
relationships are enough to learn meaningful representations. Although SSL has recently …

From canonical correlation analysis to self-supervised graph neural networks

H Zhang, Q Wu, J Yan, D Wipf… - Advances in Neural …, 2021 - proceedings.neurips.cc
We introduce a conceptually simple yet effective model for self-supervised representation
learning with graph data. It follows previous methods that generate two views of an input …
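
A hedged sketch of a CCA-style self-supervised objective on two views of the same nodes: an invariance term pulls the view embeddings together while a decorrelation term discourages redundant feature dimensions. The synthetic embeddings and trade-off weight are placeholders, not the paper's setup.

```python
# Hedged sketch of a CCA-style SSL loss on two views; constants are illustrative.
import numpy as np

def cca_ssl_loss(za, zb, lam=1e-3):
    """za, zb: (nodes, dim) embeddings of two augmented views of the same graph."""
    n, d = za.shape
    # Standardize each feature dimension and scale so Z.T @ Z approximates a correlation matrix.
    za = (za - za.mean(0)) / (za.std(0) + 1e-8) / np.sqrt(n)
    zb = (zb - zb.mean(0)) / (zb.std(0) + 1e-8) / np.sqrt(n)
    invariance = np.sum((za - zb) ** 2)                      # pull the two views together
    decorrelate = (np.sum((za.T @ za - np.eye(d)) ** 2)
                   + np.sum((zb.T @ zb - np.eye(d)) ** 2))   # discourage redundant dimensions
    return invariance + lam * decorrelate

rng = np.random.default_rng(0)
za, zb = rng.normal(size=(100, 16)), rng.normal(size=(100, 16))
print(cca_ssl_loss(za, zb))
```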

Efficient training of visual transformers with small datasets

Y Liu, E Sangineto, W Bi, N Sebe… - Advances in Neural …, 2021 - proceedings.neurips.cc
Visual Transformers (VTs) are emerging as an architectural paradigm alternative to
Convolutional networks (CNNs). Unlike CNNs, VTs can capture global relations …

Rankme: Assessing the downstream performance of pretrained self-supervised representations by their rank

Q Garrido, R Balestriero, L Najman… - … on machine learning, 2023 - proceedings.mlr.press
Joint-Embedding Self-Supervised Learning (JE-SSL) has seen rapid
development, with the emergence of many method variations but only a few principled …
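
A sketch of a rank-based measure in the spirit of the paper's title: the exponential of the entropy of the normalized singular values of an embedding matrix acts as a smooth "effective rank" of the representations. The constants and synthetic data below are illustrative assumptions.

```python
# Sketch of an effective-rank measure over pretrained representations.
import numpy as np

def effective_rank(Z, eps=1e-7):
    """Z: (num_samples, dim) matrix of representations."""
    s = np.linalg.svd(Z, compute_uv=False)
    p = s / (np.sum(s) + eps) + eps           # normalized singular value distribution
    return np.exp(-np.sum(p * np.log(p)))     # exp(Shannon entropy) of that distribution

rng = np.random.default_rng(0)
spread = rng.normal(size=(512, 64))                                 # well-spread embeddings
collapsed = rng.normal(size=(512, 2)) @ rng.normal(size=(2, 64))    # confined to a rank-2 subspace
print(effective_rank(spread), effective_rank(collapsed))
```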

Crafting better contrastive views for siamese representation learning

X Peng, K Wang, Z Zhu, M Wang… - Proceedings of the …, 2022 - openaccess.thecvf.com
Recent self-supervised contrastive learning methods greatly benefit from the Siamese
structure that aims at minimizing distances between positive pairs. For high performance …
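
A rough illustration, under toy assumptions, of what a "contrastive view" is here: two random crops of the same image form a positive pair, and the Siamese branches are trained so that their embeddings are close. The crop size and the mean-pooling "encoder" are stand-ins for a real augmentation pipeline and backbone.

```python
# Toy sketch: two random crops of one image as a positive pair for a Siamese encoder.
import numpy as np

rng = np.random.default_rng(0)

def random_crop(img, size=96):
    h, w, _ = img.shape
    y, x = rng.integers(0, h - size), rng.integers(0, w - size)
    return img[y:y + size, x:x + size]

def toy_encoder(view):
    return view.mean(axis=(0, 1))               # stand-in for a real backbone

image = rng.random((224, 224, 3))
v1, v2 = random_crop(image), random_crop(image)  # positive pair from the same image
z1, z2 = toy_encoder(v1), toy_encoder(v2)
print(np.linalg.norm(z1 - z2))                   # Siamese training minimizes this distance
```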

Learning semi-supervised gaussian mixture models for generalized category discovery

B Zhao, X Wen, K Han - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
In this paper, we address the problem of generalized category discovery (GCD), i.e., given a
set of images where some are labelled and the rest are not, the task is to automatically …

Understanding dimensional collapse in contrastive self-supervised learning

L Jing, P Vincent, Y LeCun, Y Tian - arXiv preprint arXiv:2110.09348, 2021 - arxiv.org
Self-supervised visual representation learning aims to learn useful representations without
relying on human annotations. The joint embedding approach is based on maximizing the …
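
A small sketch of one common way dimensional collapse can be inspected (a diagnostic in this spirit, not necessarily the paper's exact procedure): compute the singular value spectrum of the embedding covariance; collapsed embeddings occupy a low-dimensional subspace and show many near-zero singular values. The synthetic embeddings are stand-ins.

```python
# Diagnostic sketch: singular value spectrum of the embedding covariance matrix.
import numpy as np

def covariance_spectrum(Z):
    """Z: (num_samples, dim) embeddings; returns singular values of their covariance."""
    Zc = Z - Z.mean(axis=0)
    cov = Zc.T @ Zc / Z.shape[0]
    return np.linalg.svd(cov, compute_uv=False)

rng = np.random.default_rng(0)
healthy = rng.normal(size=(1000, 32))
collapsed = rng.normal(size=(1000, 4)) @ rng.normal(size=(4, 32))   # spans only 4 directions
print(np.sum(covariance_spectrum(healthy) > 1e-6))     # ~32 non-trivial dimensions
print(np.sum(covariance_spectrum(collapsed) > 1e-6))   # ~4: dimensional collapse
```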