Contrastive Touch-to-Touch Pretraining

S Rodriguez, Y Dou, W Bogert, M Oller, K So… - arXiv preprint arXiv:2410.11834, 2024 - arxiv.org
Today's tactile sensors come in a wide variety of designs, making it challenging to develop general-purpose methods for processing touch signals. In this paper, we learn a unified representation that captures the shared information between different tactile sensors. Unlike current approaches that focus on reconstruction or task-specific supervision, we leverage contrastive learning to integrate tactile signals from two different sensors into a shared embedding space, using a dataset in which the same objects are probed with multiple sensors. We apply this approach to paired touch signals from GelSlim and Soft Bubble sensors. We show that our learned features provide strong pretraining for downstream pose estimation and classification tasks. We also show that our embedding enables models trained using one touch sensor to be deployed using another without additional training. Project details can be found at https://www.mmintlab.com/research/cttp/.
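The abstract describes aligning paired embeddings from two sensors with a contrastive objective. The paper does not specify its exact loss in this snippet; a common choice for such paired-view setups is a symmetric InfoNCE loss, sketched below with NumPy. The function name, temperature value, and batch layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def infonce_loss(z_a, z_b, temperature=0.07):
    """Symmetric InfoNCE loss between paired embeddings from two sensors.

    z_a, z_b: (N, D) L2-normalized embeddings; row i of each matrix is the
    same object/contact probed by the two sensors (a positive pair), and all
    other rows serve as negatives. Names and temperature are assumptions.
    """
    # Cosine similarity between every cross-sensor pair, scaled by temperature.
    logits = z_a @ z_b.T / temperature  # shape (N, N)
    n = logits.shape[0]
    idx = np.arange(n)  # positives lie on the diagonal

    def cross_entropy(l):
        # Numerically stable log-softmax over each row.
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        # Negative log-likelihood of the matching pair, averaged over the batch.
        return -log_probs[idx, idx].mean()

    # Average the two retrieval directions: sensor A -> B and B -> A.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Minimizing this loss pulls matched cross-sensor embeddings together and pushes mismatched ones apart, which is what allows a model trained on one sensor's embeddings to be reused with the other.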