Improved spatio-temporal graph convolutional networks for video anomaly detection
Opto-Electronic Engineering, Vol. 51, Issue 5, 240034 (2024)
Hongmin Zhang*, Dingding Yan, and Qianqian Tian
Author Affiliations
  • School of Electrical and Electronic Engineering, Chongqing University of Technology, Chongqing 400054, China
DOI: 10.12086/oee.2024.240034
Citation: Hongmin Zhang, Dingding Yan, Qianqian Tian. Improved spatio-temporal graph convolutional networks for video anomaly detection[J]. Opto-Electronic Engineering, 2024, 51(5): 240034

    Abstract

An improved spatio-temporal graph convolutional network for video anomaly detection is proposed to accurately capture the spatio-temporal interactions of objects in anomalous events. The graph convolutional network integrates conditional random fields, effectively modeling the interactions between spatio-temporal features across frames and capturing their contextual relationships by exploiting inter-frame feature correlations. On this basis, a spatial similarity graph and a temporal dependency graph are constructed with video segments as nodes, and the two are adaptively fused to learn video spatio-temporal features, thereby improving detection accuracy. Experiments on three video anomaly detection datasets, UCSD Ped2, ShanghaiTech, and IITB-Corridor, yield frame-level AUC values of 97.7%, 90.4%, and 86.0%, and accuracy rates of 96.5%, 88.6%, and 88.0%, respectively.
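
To make the two-branch graph construction concrete, the following is a minimal sketch, not the authors' implementation: the cosine-similarity adjacency, the distance-decay temporal adjacency, the gated fusion, and all layer sizes are assumptions. It only illustrates the idea of treating video segments as nodes, building a spatial similarity graph and a temporal dependency graph over their features, and adaptively fusing the two graph-convolution branches into per-segment anomaly scores.

```python
# Illustrative sketch only; all design choices below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def spatial_similarity_adj(x):
    """Adjacency from pairwise cosine similarity between segment features."""
    x_norm = F.normalize(x, dim=-1)                  # (N, D)
    adj = torch.relu(x_norm @ x_norm.t())            # (N, N), non-negative
    return adj / adj.sum(dim=-1, keepdim=True).clamp(min=1e-6)


def temporal_dependency_adj(num_segments, device):
    """Adjacency that decays with temporal distance between segments."""
    idx = torch.arange(num_segments, device=device, dtype=torch.float32)
    dist = (idx[:, None] - idx[None, :]).abs()
    adj = torch.exp(-dist)                           # closer segments -> stronger edge
    return adj / adj.sum(dim=-1, keepdim=True)


class TwoBranchGCN(nn.Module):
    """One graph-convolution layer per branch, fused by a learned gate."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.w_spatial = nn.Linear(in_dim, hid_dim)
        self.w_temporal = nn.Linear(in_dim, hid_dim)
        self.gate = nn.Parameter(torch.tensor(0.5))  # adaptive fusion weight
        self.score = nn.Linear(hid_dim, 1)           # per-segment anomaly score

    def forward(self, x):                            # x: (N, D) segment features
        a_s = spatial_similarity_adj(x)
        a_t = temporal_dependency_adj(x.size(0), x.device)
        h_s = torch.relu(a_s @ self.w_spatial(x))    # spatial-branch convolution
        h_t = torch.relu(a_t @ self.w_temporal(x))   # temporal-branch convolution
        g = torch.sigmoid(self.gate)
        h = g * h_s + (1 - g) * h_t                  # adaptive fusion of the two graphs
        return torch.sigmoid(self.score(h)).squeeze(-1)


if __name__ == "__main__":
    feats = torch.randn(32, 512)                     # 32 segments, 512-D features
    model = TwoBranchGCN(512, 128)
    print(model(feats).shape)                        # torch.Size([32]) anomaly scores
```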