Amp-net: Appearance-motion prototype network assisted automatic video anomaly detection system

Y Liu, J Liu, K Yang, B Ju, S Liu, Y Wang… - IEEE Transactions on Industrial Informatics, 2023 - ieeexplore.ieee.org
As essential tools for industry safety protection, automatic video anomaly detection systems (AVADS) are designed to detect anomalous events of concern in surveillance videos. Existing VAD methods lack effective exploration of prototypical appearance and motion features, leading to poor performance in realistic scenarios. Specifically, they either misreport regular events as anomalies due to insufficient representation power, or miss true anomalies due to excessive generalization. In this regard, we propose an appearance-motion prototype network (AMP-net) that uses external memories to record prototype features and augments the appearance-motion prototypes with spatial-temporal fusion. In addition, AMP-net fuses appearance features sequentially from deep to shallow layers to exploit multiscale spatial context, and introduces temporal attention to capture important dynamics and strengthen its representation of regular motion. The proposed method strikes a delicate balance between effective representation of normal events and limited generalization to anomalies. Experiments on three benchmark datasets demonstrate that our method can accurately detect anomalous events, achieving performance comparable to state-of-the-art methods with frame-level AUCs of 98.7%, 92.4%, and 78.8% on the UCSD Ped2, CUHK Avenue, and ShanghaiTech datasets. Moreover, we conducted a case study on a self-collected industrial dataset, and the results indicate that our AMP-net can cope with complex industrial scenarios and outperforms existing methods.
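The abstract gives no implementation details, so as a rough illustration of the external-memory prototype idea it describes (recording prototype features in a memory bank and reading them back by similarity to augment encoder features), here is a minimal PyTorch-style sketch. The module name, dimensions, and the cosine-similarity soft-addressing scheme are assumptions for illustration, not AMP-net's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeMemory(nn.Module):
    """Hypothetical external memory storing prototype feature vectors.

    Illustrates the generic memory-readout pattern (cosine-similarity
    addressing + softmax weighting); it is not the authors' implementation.
    """

    def __init__(self, num_prototypes: int = 10, feat_dim: int = 128):
        super().__init__()
        # Learnable memory bank of prototype features (assumed size).
        self.memory = nn.Parameter(torch.randn(num_prototypes, feat_dim))

    def forward(self, query: torch.Tensor) -> torch.Tensor:
        # query: (batch, feat_dim) encoder features.
        # Addressing weights from cosine similarity between query and each prototype.
        sim = F.cosine_similarity(
            query.unsqueeze(1), self.memory.unsqueeze(0), dim=-1
        )                                  # (batch, num_prototypes)
        weights = F.softmax(sim, dim=-1)   # soft addressing over prototypes
        # Read out a prototype-augmented feature as a weighted sum of memory items.
        read = weights @ self.memory       # (batch, feat_dim)
        # Concatenate query and memory readout for a decoder (one common choice).
        return torch.cat([query, read], dim=-1)

# Usage sketch: encoder features in, prototype-augmented features out.
mem = PrototypeMemory(num_prototypes=10, feat_dim=128)
queries = torch.randn(4, 128)
fused = mem(queries)   # shape (4, 256)
```

Because the memory holds only features recorded from normal training data, reconstruction or prediction based on the readout tends to degrade for anomalous inputs, which is the usual rationale for limiting generalization to anomalies in memory-based detectors.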