Gait recognition based on Orthogonal view feature extraction

Q Fang, N Ying, H Chen, M Hu, Q Shu, J Zhao, X Zhang
Multimedia Tools and Applications, 2024, Springer
Abstract
Gait is used for personal identification but is sensitive to covariates such as view and walking conditions. To reduce the influence of view on the accuracy of gait recognition, this paper proposes an Orthogonal-view Feature Decomposition Network based on GaitSet (OFD-GaitSet). The algorithm decomposes gait recognition into recognition along two orthogonal view components. First, the algorithm improves the setting of the gait gallery so that each sample in the gallery contains gait information from two views, 0° and 90°. Second, the algorithm designs two Feature Extraction Networks that extract gait sub-features of the gait silhouette sequence from the two views. A View Identification Network and a Distance Block then weight the Euclidean distances between the probe's gait sub-features and those of the gallery, and the recognition result is obtained by comparison. The algorithm is trained with Cross Entropy Loss and an improved Triplet Loss. Experiments on the CASIA-B dataset show that the average Rank-1 accuracy reaches 99.8% under normal walking (NM) conditions, 99.1% under walking with a bag (BG) conditions, and 88.2% under wearing a coat or jacket (CL) conditions, improvements of 4.8%, 11.9%, and 17.8% over GaitSet, respectively. Experiments on the OU-MVLP dataset achieve a Rank-1 accuracy of 89.8%, which is 2.7% higher than GaitSet.
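The abstract does not give implementation details, but the weighted-distance comparison it describes can be sketched roughly as follows. All function and variable names here are hypothetical illustrations, not the authors' code: two per-view sub-features of a probe are compared against the corresponding gallery sub-features by Euclidean distance, and the two distances are combined using weights produced by a view-identification step.

```python
import torch

def weighted_orthogonal_distance(probe_feat_0, probe_feat_90,
                                 gallery_feat_0, gallery_feat_90,
                                 view_weights):
    """Hypothetical sketch of the weighted comparison described in the abstract.

    probe_feat_0, probe_feat_90:     (D,) probe sub-features for the 0 / 90 degree components
    gallery_feat_0, gallery_feat_90: (N, D) gallery sub-features for each view, N identities
    view_weights:                    (2,) weights from a view-identification step (e.g. a softmax
                                     estimating how strongly the probe resembles each view)
    """
    d0 = torch.norm(gallery_feat_0 - probe_feat_0, dim=1)    # Euclidean distance, 0 degree component
    d90 = torch.norm(gallery_feat_90 - probe_feat_90, dim=1)  # Euclidean distance, 90 degree component
    # Combine the two per-view distances and return the closest gallery identity
    dist = view_weights[0] * d0 + view_weights[1] * d90
    return torch.argmin(dist).item(), dist
```

In this reading, the view weights let the gallery's fixed 0° and 90° sub-features contribute to the match in proportion to how similar the probe's unknown view is to each orthogonal component; the actual Distance Block in the paper may differ.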