UGaitNet: multimodal gait recognition with missing input modalities
MJ Marín-Jiménez, FM Castro… - IEEE Transactions on Information Forensics and Security, 2021 - ieeexplore.ieee.org
Gait recognition systems typically rely solely on silhouettes for extracting gait signatures. Nevertheless, these approaches struggle with changes in body shape and dynamic backgrounds, a problem that can be alleviated by learning from multiple modalities. However, in many real-life systems some modalities can be missing, and therefore most existing multimodal frameworks fail to cope with missing modalities. To tackle this problem, we propose UGaitNet, a unifying framework for gait recognition that is robust to missing modalities. UGaitNet handles and mingles various types and combinations of input modalities, i.e., pixel gray values, optical flow, depth maps, and silhouettes, while being camera agnostic. We evaluate UGaitNet on two public datasets for gait recognition, CASIA-B and TUM-GAID, and show that it obtains compact, state-of-the-art gait descriptors when leveraging multiple or missing modalities. Finally, we show that UGaitNet with optical flow and grayscale inputs achieves almost perfect (98.9%) recognition accuracy on CASIA-B (same-view "normal") and 100% on TUM-GAID ("elapsed time"). Code will be available at https://github.com/avagait/ugaitnet
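The abstract does not spell out how UGaitNet combines a variable set of inputs, so the sketch below only illustrates the general idea: one encoder per modality, with the gait signature formed by averaging the embeddings of whichever modalities are actually present, so a missing modality simply drops out of the fusion. All names (ModalityEncoder, MissingModalityFusion), layer sizes, and the averaging fusion itself are illustrative assumptions, not the paper's architecture; PyTorch is used for concreteness.

import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Small CNN mapping one modality (e.g. silhouette frames) to an embedding."""
    def __init__(self, in_channels, embed_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x):
        return self.net(x)

class MissingModalityFusion(nn.Module):
    """Fuse whichever modality embeddings are present by masked averaging."""
    def __init__(self, channels_per_modality, embed_dim=256):
        super().__init__()
        self.encoders = nn.ModuleDict({
            name: ModalityEncoder(c, embed_dim)
            for name, c in channels_per_modality.items()
        })

    def forward(self, inputs):
        # inputs: dict mapping modality name -> tensor, or None if missing.
        embs = [enc(inputs[name]) for name, enc in self.encoders.items()
                if inputs.get(name) is not None]
        assert embs, "at least one modality must be present"
        # Average over available modalities: output shape is fixed
        # regardless of which (or how many) inputs were provided.
        return torch.stack(embs, dim=0).mean(dim=0)

# Usage: depth is missing here, yet the signature has the same size as
# when all four modalities are available.
model = MissingModalityFusion(
    {"gray": 1, "flow": 2, "depth": 1, "silhouette": 1})
batch = {
    "gray": torch.randn(4, 1, 64, 64),
    "flow": torch.randn(4, 2, 64, 64),   # 2 channels: horizontal/vertical flow
    "depth": None,                        # missing modality
    "silhouette": torch.randn(4, 1, 64, 64),
}
signature = model(batch)  # (4, 256) gait descriptor

Because every combination of inputs yields a descriptor of the same size, gallery and probe signatures remain directly comparable even when they were computed from different subsets of modalities, which is the robustness property the abstract highlights.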