• Laser & Optoelectronics Progress
  • Vol. 57, Issue 24, 241003 (2020)
Deyong Gao1,2, Zibing Kang1,*, Song Wang1,2, and Yangping Wang1,3
Author Affiliations
  • 1School of Electronic & Information Engineering, Lanzhou Jiaotong University, Lanzhou, Gansu 730070, China;
  • 2Gansu Provincial Engineering Research Center for Artificial Intelligence and Graphic & Image Processing, Lanzhou, Gansu 730070, China;
  • 3Gansu Provincial Key Laboratory of System Dynamics and Reliability of Rail Transport Equipment, Lanzhou, Gansu 730070, China
    DOI: 10.3788/LOP57.241003
    Deyong Gao, Zibing Kang, Song Wang, Yangping Wang. Human-Body Action Recognition Based on Dense Trajectories and Video Saliency[J]. Laser & Optoelectronics Progress, 2020, 57(24): 241003
    References

    [1] Luo H L, Wang C J, Lu F. Survey of video behavior recognition[J]. Journal on Communications, 39, 169-180(2018).

    [2] Bobick A F, Davis J W. The recognition of human movement using temporal templates[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23, 257-267(2001).

    [3] Laptev I. On space-time interest points[J]. International Journal of Computer Vision, 64, 107-123(2005).

    [4] Dollar P, Rabaud V, Cottrell G et al. Behavior recognition via sparse spatio-temporal features[C]∥2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, October 15-16, 2005, Beijing, China, 65-72(2005).

    [5] Wang H, Kläser A, Schmid C et al. Dense trajectories and motion boundary descriptors for action recognition[J]. International Journal of Computer Vision, 103, 60-79(2013).

    [6] Wang H, Schmid C. Action recognition with improved trajectories[C]∥2013 IEEE International Conference on Computer Vision, December 1-8, 2013, Sydney, NSW, Australia, 3551-3558(2013).

    [7] Liu X, Zhao G Y, Yao J W et al. Background subtraction based on low-rank and structured sparse decomposition[J]. IEEE Transactions on Image Processing, 24, 2502-2514(2015).

    [8] Souly N, Shah M. Visual saliency detection using group lasso regularization in videos of natural scenes[J]. International Journal of Computer Vision, 117, 93-110(2016). http://dl.acm.org/citation.cfm?id=2897324

    [9] Li Y D, Xu X P. Video saliency detection method based on spatiotemporal features of superpixels[J]. Acta Optica Sinica, 39, 0110001(2019).

    [10] Duan L J, Xi T, Cui S et al. A spatiotemporal weighted dissimilarity-based method for video saliency detection[J]. Signal Processing: Image Communication, 38, 45-56(2015).

    [11] Li Q W, Zhou Y Q, Ma Y P et al. Salient object detection method based on binocular vision[J]. Acta Optica Sinica, 38, 0315002(2018).

    [12] Wang L, Zhao D B. Recognizing actions using salient features[C]∥2011 IEEE 13th International Workshop on Multimedia Signal Processing, October 17-19, 2011, Hangzhou, China, 1-6(2011).

    [13] Yi Y, Lin Y K. Human action recognition with salient trajectories[J]. Signal Processing, 93, 2932-2941(2013).

    [14] Somasundaram G, Cherian A, Morellas V et al. Action recognition using global spatio-temporal features derived from sparse representations[J]. Computer Vision and Image Understanding, 123, 1-13(2014).

    [15] Li Q, Cheng H, Zhou Y et al. Human action recognition using improved salient dense trajectories[J]. Computational Intelligence and Neuroscience, 2016, 6750459(2016).

    [16] Yi Y, Zheng Z X, Lin M Q. Realistic action recognition with salient foreground trajectories[J]. Expert Systems With Applications, 75, 44-55(2017).

    [17] Wang X F, Qi C. Saliency-based dense trajectories for action recognition using low-rank matrix decomposition[J]. Journal of Visual Communication and Image Representation, 41, 361-374(2016).

    [18] Wang X F, Qi C, Lin F. Combined trajectories for action recognition based on saliency detection and motion boundary[J]. Signal Processing: Image Communication, 57, 91-102(2017).

    [19] Rodriguez M D, Ahmed J, Shah M. Action MACH: a spatio-temporal maximum average correlation height filter for action recognition[C]∥2008 IEEE Conference on Computer Vision and Pattern Recognition, June 23-28, 2008, Anchorage, AK, USA, 1-8(2008).

    [20] Liu J G, Luo J B, Shah M. Recognizing realistic actions from videos “in the wild”[C]∥2009 IEEE Conference on Computer Vision and Pattern Recognition, June 20-25, 2009, Miami, FL, USA, 1996-2003(2009).

    [21] Cho J, Lee M, Chang H J et al. Robust action recognition using local motion and group sparsity[J]. Pattern Recognition, 47, 1813-1825(2014).

    [22] Yang X D, Tian Y L. Action recognition using super sparse coding vector with spatio-temporal awareness[C]∥Fleet D, Pajdla T, Schiele B et al. Computer Vision-ECCV 2014. Cham: Springer, 727-741(2014).

    [23] Peng X J, Qiao Y, Peng Q. Motion boundary based sampling and 3D co-occurrence descriptors for action recognition[J]. Image and Vision Computing, 32, 616-628(2014).

    [24] Guo Y N, Ma W, Duan L J et al. Human action recognition based on discriminative supervoxels[C]∥2016 International Joint Conference on Neural Networks (IJCNN), July 24-29, 2016, Vancouver, BC, Canada, 3863-3869(2016).

    [25] Duan L J, Guo Y N, Qiao Y H et al. Human action recognition based on extracted discriminative regions[J]. Journal of Beijing University of Technology, 43, 1480-1487(2017).

    [26] Wang L M, Qiao Y, Tang X O. Action recognition with trajectory-pooled deep-convolutional descriptors[C]∥2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 7-12, 2015, Boston, MA, USA, 4305-4314(2015).

    [27] Li Q H, Li A H, Wang T et al. Double-stream convolutional networks with sequential optical flow image for action recognition[J]. Acta Optica Sinica, 38, 0615002(2018).
