Tubelet-Contrastive Self-Supervision for Video-Efficient Generalization
Proceedings of the IEEE/CVF International Conference on …, 2023 • openaccess.thecvf.com
Abstract
We propose a self-supervised method for learning motion-focused video representations. Existing approaches minimize distances between temporally augmented videos, which maintain high spatial similarity. We instead propose to learn similarities between videos with identical local motion dynamics but an otherwise different appearance. We do so by adding synthetic motion trajectories, which we refer to as tubelets, to videos. By simulating different tubelet motions and applying transformations, such as scaling and rotation, we introduce motion patterns beyond what is present in the pretraining data. This allows us to learn a video representation that is remarkably data efficient: our approach maintains performance when using only 25% of the pretraining videos. Experiments on 10 diverse downstream settings demonstrate our competitive performance and generalizability to new domains and fine-grained actions.
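To make the mechanism concrete, below is a minimal sketch of the tubelet-contrastive idea in PyTorch. It is an illustration under assumptions, not the authors' released code: `sample_trajectory`, `paste_tubelet`, `tubelet_contrastive_pair`, `info_nce`, and `toy_encode` are hypothetical names, the transforms are simplified to one shared scale and rotation per pair, and the encoder is a toy pooling function standing in for a video backbone.

```python
# Hypothetical sketch of tubelet-contrastive pretraining (not the paper's code).
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF


def sample_trajectory(num_frames, max_step=4):
    """Random-walk (x, y) offsets defining the tubelet's synthetic motion."""
    steps = torch.randint(-max_step, max_step + 1, (num_frames, 2))
    return torch.cumsum(steps, dim=0)


def paste_tubelet(video, patch, trajectory, scale=1.0, angle=0.0):
    """Overlay a scaled/rotated patch onto each frame along the trajectory.
    video: (T, C, H, W), patch: (C, h, w). Per-frame scale/rotation schedules
    would simulate richer motions (growing, spinning); one transform is used
    here for brevity."""
    T, _, H, W = video.shape
    h = max(1, int(patch.shape[1] * scale))
    w = max(1, int(patch.shape[2] * scale))
    patch = F.interpolate(patch[None], size=(h, w), mode="bilinear",
                          align_corners=False)[0]
    if angle != 0.0:
        patch = TF.rotate(patch, angle)   # corners zero-filled
    out = video.clone()
    for t in range(T):
        x = int(trajectory[t, 0]) % (W - w)
        y = int(trajectory[t, 1]) % (H - h)
        out[t, :, y:y + h, x:x + w] = patch
    return out


def tubelet_contrastive_pair(video_a, video_b, patch_a, patch_b):
    """Positive pair: different background videos and patch appearance,
    identical synthetic tubelet motion."""
    traj = sample_trajectory(video_a.shape[0])
    scale, angle = 1.5, 30.0
    view_a = paste_tubelet(video_a, patch_a, traj, scale, angle)
    view_b = paste_tubelet(video_b, patch_b, traj, scale, angle)
    return view_a, view_b


def info_nce(z_a, z_b, temperature=0.1):
    """Standard InfoNCE over a batch of positive pairs."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature
    targets = torch.arange(z_a.shape[0])
    return F.cross_entropy(logits, targets)


# Toy usage with random data.
videos = torch.rand(8, 16, 3, 112, 112)   # (B, T, C, H, W)
patches = torch.rand(8, 2, 3, 24, 24)     # two patch appearances per pair

def toy_encode(v):                        # (T, C, H, W) -> (C,)
    return v.mean(dim=(0, 2, 3))

pairs = [tubelet_contrastive_pair(videos[i], videos[(i + 1) % 8],
                                  patches[i, 0], patches[i, 1])
         for i in range(8)]
z_a = torch.stack([toy_encode(a) for a, _ in pairs])
z_b = torch.stack([toy_encode(b) for _, b in pairs])
loss = info_nce(z_a, z_b)
```

Because the two views share a trajectory but differ in background video and patch appearance, minimizing the contrastive loss pushes the representation toward the local motion rather than the spatial content; varying the simulated tubelet motions and transformations is what introduces motion patterns beyond those present in the pretraining data.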