Evaluating Architectures and Hyperparameters of Self-supervised Network Projections · Tim Cech, Daniel Atzberger, +2 authors, J. Döllner · Published in VISIGRAPP ...
Feb 23, 2023 · We report on two experiments on the autoencoder's architecture and hyperparameters, respectively, measuring nine metrics on eight labeled ...
Feb 16, 2023 · Evaluating Architectures and Hyperparameters of Self-supervised Network Projections · Two experiments · Training-related hyperparameters.
Abstract: Self-Supervised Network Projections (SSNP) are dimensionality reduction algorithms that produce low-dimensional layouts from high-dimensional data.
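The core idea in the abstract can be sketched as a small neural network that maps each high-dimensional sample to a point in a 2-D layout. This is a minimal illustrative sketch only: the layer sizes, ReLU activation, and random (untrained) weights are assumptions, not the architecture or training procedure evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def init_layer(n_in, n_out):
    # Small random weights; a real SSNP trains these with backprop
    # against a self-supervised (pseudo-label) objective.
    return rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out)

def project(X, params):
    # Encoder path: high-dim input -> hidden layer -> 2-D layout.
    (W1, b1), (W2, b2) = params
    return relu(X @ W1 + b1) @ W2 + b2

n_features, n_hidden = 50, 16  # illustrative sizes
params = [init_layer(n_features, n_hidden), init_layer(n_hidden, 2)]

X = rng.normal(size=(100, n_features))  # 100 high-dimensional samples
layout = project(X, params)
print(layout.shape)  # (100, 2): one 2-D layout point per input sample
```

Once trained, such a network projects unseen samples in a single forward pass, which is what distinguishes network-based projections from optimization-per-dataset methods like t-SNE.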
Oct 28, 2021 · Our empirical evaluation demonstrates that self-supervised representation learning in this domain is able to recover diverse NN model characteristics.
We also evaluate on hyperparameter prediction tasks, such as predicting the activation function (Act), optimizer (Opt), and initialization method (Init). Here, we train a ...
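Such a hyperparameter-prediction task can be sketched as a probe trained on learned model representations. The synthetic embeddings and the nearest-centroid classifier below are illustrative assumptions, not the paper's actual setup, which is truncated in the snippet above.

```python
import numpy as np

rng = np.random.default_rng(2)
acts = ["relu", "tanh", "gelu"]  # candidate activation-function labels

# Synthetic stand-in for learned embeddings: models that used the same
# activation cluster around a shared center in representation space.
centers = {a: rng.normal(0.0, 3.0, 8) for a in acts}
X = np.vstack([centers[a] + rng.normal(0.0, 0.3, (20, 8)) for a in acts])
y = np.repeat(acts, 20)

# Nearest-centroid probe: predict the class whose mean embedding is closest.
means = {a: X[y == a].mean(axis=0) for a in acts}

def predict(x):
    return min(acts, key=lambda a: np.linalg.norm(x - means[a]))

preds = [predict(x) for x in X]
accuracy = np.mean([p == t for p, t in zip(preds, y)])
print(accuracy)  # high on these well-separated synthetic clusters
```

If the probe recovers the label well above chance, the representation evidently encodes that hyperparameter, which is the logic behind such evaluations.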
Typically, the SSL architecture includes two components: an encoder and a projection head. The encoder is usually a discriminative model like ResNet [He et al., ...
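The two-component setup described above can be sketched as follows. The linear/ReLU layers and all dimensions are illustrative assumptions standing in for a real backbone such as a ResNet; the point is the split between the encoder output (kept for downstream use) and the projection-head output (used only by the SSL objective).

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(0.0, x)

class Encoder:
    # Stand-in for a discriminative backbone (e.g., a ResNet).
    def __init__(self, d_in, d_repr):
        self.W = rng.normal(0.0, 0.1, (d_in, d_repr))

    def __call__(self, x):
        return relu(x @ self.W)

class ProjectionHead:
    # Small MLP mapping representations into the space where the
    # self-supervised loss is computed; typically discarded after
    # pre-training.
    def __init__(self, d_repr, d_proj):
        self.W1 = rng.normal(0.0, 0.1, (d_repr, d_repr))
        self.W2 = rng.normal(0.0, 0.1, (d_repr, d_proj))

    def __call__(self, h):
        return relu(h @ self.W1) @ self.W2

encoder, head = Encoder(32, 64), ProjectionHead(64, 16)
x = rng.normal(size=(8, 32))  # a batch of 8 inputs
h = encoder(x)                # representations kept for downstream tasks
z = head(h)                   # projections consumed by the SSL objective
print(h.shape, z.shape)  # (8, 64) (8, 16)
```

Keeping the loss in the projected space while reusing only `h` downstream is the standard design choice in SimCLR-style pipelines.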
This paper presents an SSRL approach that can be applied to any data modality and network architecture because it does not rely on augmentations or masking.