A Survey on Self-supervised Learning: Algorithms, Applications, and Future Trends
Deep supervised learning algorithms typically require a large volume of labeled data to
achieve satisfactory performance. However, the process of collecting and labeling such data …
Survey on self-supervised learning: auxiliary pretext tasks and contrastive learning methods in imaging
S Albelwi - Entropy, 2022 - mdpi.com
Although deep learning algorithms have achieved significant progress in a variety of
domains, they require costly annotations on huge datasets. Self-supervised learning (SSL) …
DiffCSE: Difference-based contrastive learning for sentence embeddings
We propose DiffCSE, an unsupervised contrastive learning framework for learning sentence
embeddings. DiffCSE learns sentence embeddings that are sensitive to the difference …
Contrastive and non-contrastive self-supervised learning recover global and local spectral embedding methods
R Balestriero, Y LeCun - Advances in Neural Information …, 2022 - proceedings.neurips.cc
Self-Supervised Learning (SSL) surmises that inputs and pairwise positive
relationships are enough to learn meaningful representations. Although SSL has recently …
From canonical correlation analysis to self-supervised graph neural networks
We introduce a conceptually simple yet effective model for self-supervised representation
learning with graph data. It follows the previous methods that generate two views of an input …
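The CCA-inspired graph SSL entry above can be illustrated with a short sketch. This is a generic CCA-style two-view objective (an invariance term pulling matched views together plus a decorrelation term pushing each view's feature covariance toward the identity), written as an assumption about the general technique, not the paper's exact loss; `lam` is an illustrative hyperparameter.

```python
import numpy as np

def cca_style_loss(z1, z2, lam=1e-3):
    """CCA-inspired two-view SSL objective (illustrative sketch):
    invariance between matched views + feature decorrelation per view."""
    n, d = z1.shape
    z1 = (z1 - z1.mean(0)) / z1.std(0)        # standardize each feature
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    invariance = ((z1 - z2) ** 2).sum() / n   # matched views should agree
    c1 = z1.T @ z1 / n                        # feature covariance, view 1
    c2 = z2.T @ z2 / n                        # feature covariance, view 2
    decorrelation = ((c1 - np.eye(d)) ** 2).sum() + ((c2 - np.eye(d)) ** 2).sum()
    return invariance + lam * decorrelation

rng = np.random.default_rng(0)
view_a = rng.normal(size=(256, 16))
view_b = rng.normal(size=(256, 16))
```

Identical views incur only the (small) decorrelation penalty, while unrelated views pay the full invariance cost, which is what drives the two generated views of each input toward a shared representation.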
Efficient training of visual transformers with small datasets
Visual Transformers (VTs) are emerging as an architectural paradigm alternative to
Convolutional networks (CNNs). Unlike CNNs, VTs can capture global relations …
RankMe: Assessing the downstream performance of pretrained self-supervised representations by their rank
Joint-Embedding Self-Supervised Learning (JE-SSL) has seen a rapid
development, with the emergence of many method variations but only a few principled …
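The rank-based assessment the RankMe entry describes can be sketched as the exponential of the entropy of an embedding matrix's normalized singular values, i.e. its effective rank. This is a sketch of the idea under that reading, not the authors' reference implementation.

```python
import numpy as np

def effective_rank(embeddings, eps=1e-7):
    """Effective rank of an N x D embedding matrix: exp of the Shannon
    entropy of its normalized singular values (RankMe-style diagnostic)."""
    s = np.linalg.svd(embeddings, compute_uv=False)
    p = s / (s.sum() + eps) + eps             # normalized singular values
    entropy = -(p * np.log(p)).sum()          # Shannon entropy of spectrum
    return float(np.exp(entropy))             # ~D when full rank, ~1 when collapsed

rng = np.random.default_rng(0)
healthy = rng.normal(size=(1000, 32))                 # near full-rank features
collapsed = healthy[:, :1] @ np.ones((1, 32))         # rank-1 (collapsed) features
```

On the healthy matrix the estimate sits near the feature dimension (32), while the rank-1 matrix scores close to 1, so the measure can flag degenerate pretrained representations without any labels.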
Crafting better contrastive views for siamese representation learning
Recent self-supervised contrastive learning methods greatly benefit from the Siamese
structure that aims at minimizing distances between positive pairs. For high performance …
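The Siamese contrastive objective this entry refers to, minimizing distances between positive pairs while contrasting against the rest of the batch, is commonly instantiated as an NT-Xent loss over two augmented views. The sketch below assumes that standard formulation; the temperature value is illustrative.

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """Minimal NT-Xent contrastive loss over two views (N x D each):
    each sample's positive is its counterpart in the other view, and all
    remaining batch entries act as negatives."""
    z = np.concatenate([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize rows
    sim = z @ z.T / temperature                         # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                      # exclude self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n) + n, np.arange(n)])  # matched-view indices
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), pos]))

rng = np.random.default_rng(0)
batch = rng.normal(size=(8, 16))
```

Feeding two identical views yields a low loss (positives dominate every row), while unrelated views yield a high one, which is exactly the gradient signal the Siamese structure trains on.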
Learning semi-supervised gaussian mixture models for generalized category discovery
In this paper, we address the problem of generalized category discovery (GCD), i.e., given a
set of images where some are labelled and the rest are not, the task is to automatically …
Understanding dimensional collapse in contrastive self-supervised learning
Self-supervised visual representation learning aims to learn useful representations without
relying on human annotations. The joint embedding approach is based on maximizing the …
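The dimensional collapse this entry studies, where embeddings occupy only a low-dimensional subspace of the full feature space, can be diagnosed from the eigenvalue spectrum of the embedding covariance. The sketch below is one simple such diagnostic; the tolerance threshold is an illustrative choice.

```python
import numpy as np

def collapsed_dimensions(embeddings, tol=1e-3):
    """Count embedding dimensions whose covariance eigenvalue is (near)
    zero relative to the largest -- a simple dimensional-collapse check."""
    centered = embeddings - embeddings.mean(axis=0)
    cov = centered.T @ centered / len(embeddings)   # D x D feature covariance
    eigvals = np.linalg.eigvalsh(cov)               # ascending eigenvalues
    return int((eigvals < tol * eigvals.max()).sum())

rng = np.random.default_rng(0)
healthy = rng.normal(size=(512, 64))                              # spans all 64 dims
collapsed = healthy @ rng.normal(size=(64, 8)) @ rng.normal(size=(8, 64))  # rank <= 8
```

The healthy embeddings report zero dead dimensions, while the rank-8 projection reports most of its 64 dimensions as collapsed, mirroring the flat covariance-spectrum tails the paper analyzes.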