Equivariance versus augmentation for spherical images

J Gerken, O Carlsson, H Linander… - International Conference on Machine Learning, 2022 - proceedings.mlr.press
Abstract
We analyze the role of rotational equivariance in convolutional neural networks (CNNs) applied to spherical images. We compare the performance of the group equivariant networks known as S2CNNs and standard non-equivariant CNNs trained with an increasing amount of data augmentation. The chosen architectures can be considered baseline references for the respective design paradigms. Our models are trained and evaluated on single or multiple items from the MNIST or FashionMNIST dataset projected onto the sphere. For the task of image classification, which is inherently rotationally invariant, we find that by considerably increasing the amount of data augmentation and the size of the networks, it is possible for the standard CNNs to reach at least the same performance as the equivariant network. In contrast, for the inherently equivariant task of semantic segmentation, the non-equivariant networks are consistently outperformed by the equivariant networks with significantly fewer parameters. We also analyze and compare the inference latency and training times of the different networks, enabling detailed trade-off considerations between equivariant architectures and data augmentation for practical problems.
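To make the augmentation baseline concrete, the sketch below shows one way to rotate a spherical image stored on an equirectangular grid by a random SO(3) rotation, the kind of transform the non-equivariant CNNs would be trained on. This is a minimal illustration under our own assumptions (nearest-neighbour resampling, equirectangular sampling), not the authors' implementation; all function and variable names are hypothetical.

```python
# Minimal sketch of SO(3) rotation augmentation for a spherical image.
# Assumes an equirectangular (H, W) grid; not the paper's code.
import numpy as np
from scipy.spatial.transform import Rotation


def rotate_equirectangular(img: np.ndarray, rot: Rotation) -> np.ndarray:
    """Rotate a (H, W) equirectangular spherical image by `rot`."""
    h, w = img.shape
    # Colatitude theta in (0, pi), longitude phi in [0, 2*pi).
    theta = (np.arange(h) + 0.5) * np.pi / h
    phi = np.arange(w) * 2.0 * np.pi / w
    theta, phi = np.meshgrid(theta, phi, indexing="ij")
    # Unit direction vector for every output pixel.
    xyz = np.stack(
        [np.sin(theta) * np.cos(phi),
         np.sin(theta) * np.sin(phi),
         np.cos(theta)], axis=-1)
    # Rotated image: f'(n) = f(R^{-1} n), so pull each output
    # direction back through the inverse rotation.
    src = rot.inv().apply(xyz.reshape(-1, 3)).reshape(h, w, 3)
    src_theta = np.arccos(np.clip(src[..., 2], -1.0, 1.0))
    src_phi = np.arctan2(src[..., 1], src[..., 0]) % (2.0 * np.pi)
    # Nearest-neighbour resampling back onto the grid.
    rows = np.clip((src_theta * h / np.pi).astype(int), 0, h - 1)
    cols = (src_phi * w / (2.0 * np.pi)).astype(int) % w
    return img[rows, cols]


rng = np.random.default_rng(0)
img = rng.random((60, 120))  # toy spherical "image"
aug = rotate_equirectangular(img, Rotation.random(random_state=0))
```

Applying such a random rotation to each training sample approximates invariance statistically, whereas an S2CNN builds it in architecturally; the paper's comparison is between these two routes.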