Action recognition is one of the most challenging video understanding tasks in computer vision. Although coarse-grained action classification has been studied extensively, existing methods remain limited in differentiating actions with low inter-class and high intra-class variation. In particular, table tennis involves shots with high inter-class similarity, subtle variations, occlusion, and viewpoint variations. While a few datasets are available for event spotting and shot recognition, these benchmarks are mostly recorded in constrained environments with a clear view of the shots executed by players. In this paper, we introduce the Table tennis shots 1.0 dataset, consisting of 9000 videos of 6 fine-grained actions collected in an unconstrained manner to analyze the performance of both players. To effectively recognize these different types of table tennis shots, we propose an adaptive spatial and temporal aggregation method that handles the spatial and temporal interactions underlying the subtle variations among shots and their low inter-class variation. Our method consists of three components: (i) a feature extraction module, (ii) a spatial aggregation network, and (iii) a temporal aggregation network. The feature extraction module is a 3D convolutional neural network (3D-CNN) that captures the spatial and temporal characteristics of table tennis shots. To efficiently capture the interactions among the elements of the extracted 3D-CNN feature maps, we employ the spatial aggregation network to obtain a compact spatial representation. We then replace the final global average pooling (GAP) layer with the temporal aggregation network to avoid the loss of motion information caused by averaging temporal features. The temporal aggregation network uses the attention mechanism of Bidirectional Encoder Representations from Transformers (BERT) to effectively model the significant temporal interactions among shots. We demonstrate that our approach improves the performance of existing 3D-CNN methods by ~10% on the Table tennis shots 1.0 dataset. We also report the performance of our approach on other action recognition datasets, namely, UCF-101 and HMDB-51.
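To make the three-component pipeline concrete, the following is a minimal PyTorch sketch of the described architecture. The backbone choice, the attention-pooling design of `SpatialAggregation`, the [CLS]-token configuration of `TemporalAggregation`, and all layer sizes are illustrative assumptions, not the authors' exact implementation; only the overall structure (3D-CNN features, spatial aggregation, and a BERT-style temporal aggregation replacing the final GAP layer) follows the text above.

```python
# Minimal sketch of the described pipeline (assumptions: PyTorch, attention
# pooling for spatial aggregation, a [CLS]-token transformer encoder for
# temporal aggregation). Not the authors' exact implementation.
import torch
import torch.nn as nn


class SpatialAggregation(nn.Module):
    """Collapses each H x W feature map into a compact per-frame vector
    via learned per-location attention weights (an assumed design)."""

    def __init__(self, dim):
        super().__init__()
        self.score = nn.Conv2d(dim, 1, kernel_size=1)  # per-location score

    def forward(self, x):  # x: (B*T, C, H, W)
        w = torch.softmax(self.score(x).flatten(2), dim=-1)  # (B*T, 1, H*W)
        return (w * x.flatten(2)).sum(-1)  # (B*T, C)


class TemporalAggregation(nn.Module):
    """Replaces global average pooling over time with BERT-style
    self-attention plus a learned [CLS] token (assumed configuration)."""

    def __init__(self, dim, heads=8, layers=1):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)

    def forward(self, x):  # x: (B, T, C)
        cls = self.cls.expand(x.size(0), -1, -1)
        out = self.encoder(torch.cat([cls, x], dim=1))
        return out[:, 0]  # aggregated clip descriptor


class ShotRecognizer(nn.Module):
    """3D-CNN features -> spatial aggregation -> temporal aggregation."""

    def __init__(self, backbone, dim, num_classes=6):
        super().__init__()
        self.backbone = backbone  # any 3D-CNN returning (B, C, T', H', W')
        self.spatial = SpatialAggregation(dim)
        self.temporal = TemporalAggregation(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, video):  # video: (B, 3, T, H, W)
        f = self.backbone(video)  # (B, C, T', H', W')
        b, c, t, h, w = f.shape
        f = f.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)
        f = self.spatial(f).view(b, t, c)  # (B, T', C)
        return self.head(self.temporal(f))  # (B, num_classes)
```

The key design point mirrored here is the last step: instead of averaging the T' temporal features (the usual GAP head, which discards motion ordering), the transformer encoder lets every time step attend to every other before a single clip-level descriptor is read off the [CLS] token.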