Jan 23, 2023 · In this paper, we propose a simple self-supervised distillation technique that can train high-performance low-compute neural networks. Our main insight is that ...
Self-supervised methods in vision have been mostly focused on large architectures as they seem to suffer from a significant performance drop for smaller ...
A Simple Recipe for Competitive Low-compute Self-supervised Vision Models · no code implementations · 23 Jan 2023 · Quentin Duval, Ishan Misra, Nicolas Ballas.
A Simple Recipe for Competitive Low-compute Self-supervised Vision Models · Quentin Duval, Ishan Misra, Nicolas Ballas. Published: 31 Dec ...
Jan 23, 2023 · In this work, we will introduce a self-supervised distillation method and show it outperforms those previous approaches for training low- ...
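The snippets above describe self-supervised distillation: a small, low-compute student network is trained to reproduce the representations of a larger self-supervised teacher, without labels. A minimal sketch of that idea, assuming a cosine-similarity matching objective and purely illustrative names (this is not the paper's actual code or architecture):

```python
# Hypothetical sketch of self-supervised distillation: a frozen high-capacity
# "teacher" produces embeddings for unlabeled inputs, and a small "student"
# is trained so its embeddings match the teacher's (here, by minimizing
# 1 - cosine similarity). All names and shapes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, eps=1e-8):
    """Project embeddings onto the unit sphere along the last axis."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def distillation_loss(student_emb, teacher_emb):
    """1 minus the mean cosine similarity between student and teacher embeddings.

    The loss is 0 when the student exactly reproduces the (normalized)
    teacher embeddings, and grows as the directions diverge.
    """
    s = l2_normalize(student_emb)
    t = l2_normalize(teacher_emb)
    return 1.0 - float(np.mean(np.sum(s * t, axis=-1)))

# Toy setup: linear "teacher" and "student" projections of random inputs.
dim_in, dim_emb, batch = 16, 8, 32
teacher_w = rng.normal(size=(dim_in, dim_emb))   # frozen teacher weights
student_w = rng.normal(size=(dim_in, dim_emb))   # trainable student weights
x = rng.normal(size=(batch, dim_in))             # unlabeled inputs

loss_random  = distillation_loss(x @ student_w, x @ teacher_w)  # untrained student: loss > 0
loss_matched = distillation_loss(x @ teacher_w, x @ teacher_w)  # perfect match: loss ≈ 0.0
```

In practice the teacher would be a large pretrained self-supervised model and the student a small network optimized by gradient descent on a loss of this kind; the toy above only shows the shape of the objective.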