Human Action Recognition Based on Global and Local Features
Laser & Optoelectronics Progress, Vol. 57, Issue 2, 21004 (2020)
Liu Fan* and Yu Fengqin
Author Affiliations
  • School of Internet of Things Engineering, Jiangnan University, Wuxi, Jiangsu 214122, China
    DOI: 10.3788/LOP57.021004
    Liu Fan, Yu Fengqin. Human Action Recognition Based on Global and Local Features[J]. Laser & Optoelectronics Progress, 2020, 57(2): 21004
    References

    [1] Zhao H Y, Jia B X. Human action recognition using image contour[J]. Computer Science, 42, 312-315(2013).

    [2] Cai J X, Feng G C, Tang X et al. Human action recognition based on local image contour and random forest[J]. Acta Optica Sinica, 34, 1015006(2014).

    [3] Li Y D, Xu X P. Human action recognition by decision-making level fusion based on spatial-temporal features[J]. Acta Optica Sinica, 38, 0810001(2018).

    [4] Zhang Z M, Hu Y Q, Chan S et al. Motion context: a new representation for human action recognition[M]∥Forsyth D, Torr P, Zisserman A. Computer vision-ECCV 2008. Lecture notes in computer science. Berlin, Heidelberg: Springer, 5305, 817-829(2008).

    [5] Laptev I. On space-time interest points[J]. International Journal of Computer Vision, 64, 107-123(2005).

    [6] Dollar P, Rabaud V, Cottrell G et al. Behavior recognition via sparse spatio-temporal features[C]∥2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, October 15-16, 2005, Beijing, China. New York: IEEE, 65-72(2005).

    [7] Yang S, Li S Y, Shao Y Y et al. Building recognition method based on improved HOG feature[J]. Computer Engineering and Applications, 54, 196-200(2018).

    [8] Shao Y H, Guo Y C, Gao C. Infrared human action recognition using dense trajectories-based feature[J]. Journal of Optoelectronics·Laser, 26, 758-763(2015).

    [9] Huang Y W, Wan C L, Feng H. Multi-feature fusion human behavior recognition algorithm based on convolutional neural network and long short term memory neural network[J]. Laser & Optoelectronics Progress, 56, 071505(2019).

    [10] Bay H, Tuytelaars T, van Gool L. SURF: speeded up robust features[M]∥Leonardis A, Bischof H, Pinz A. Computer vision-ECCV 2006. Lecture notes in computer science. Berlin, Heidelberg: Springer, 3951, 404-417(2006).

    [11] Freeman W T, Adelson E H. The design and use of steerable filters[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13, 891-906(1991).

    [12] Dalal N, Triggs B. Histograms of oriented gradients for human detection[C]∥2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), June 20-25, 2005, San Diego, CA, USA. New York: IEEE, 8588935(2005).

    [13] Schuldt C, Laptev I, Caputo B. Recognizing human actions: a local SVM approach[C]∥Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), August 26-26, 2004, Cambridge, UK. New York: IEEE, 8380901(2004).

    [14] Soomro K, Zamir A R. Action recognition in realistic sports videos[M]∥Moeslund T, Thomas G, Hilton A. Computer vision in sports. Advances in computer vision and pattern recognition. Cham: Springer, 181-208(2014).

    [15] Yun K, Honorio J, Chattopadhyay D et al. Two-person interaction detection using body-pose features and multiple instance learning[C]∥2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, June 16-21, 2012, Providence, RI, USA. New York: IEEE, 28-35(2012).

    [16] Ding S T, Qu S R. An improved interest point detector for human action recognition[J]. Journal of Northwestern Polytechnical University, 34, 886-892(2016).

    [17] Lu T R, Yu F Q, Chen Y. A human action recognition method based on LSDA dimension reduction[J]. Computer Engineering, 45, 237-241, 249(2019).

    [18] Cheng H S, Li Q W, Qiu C C et al. Human action recognition algorithm based on improved dense trajectories[J]. Computer Engineering, 42, 199-205(2016).

    [19] Lin B, Fang B, Yang W B et al. Human action recognition based on spatio-temporal three-dimensional scattering transform descriptor and an improved VLAD feature encoding algorithm[J]. Neurocomputing, 348, 145-157(2019).

    [20] Rahmani H, Mian A, Shah M. Learning a deep model for human action recognition from novel viewpoints[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40, 667-681(2018).

    [21] Tu Z G, Xie W, Qin Q Q et al. Multi-stream CNN: learning representations based on human-related regions for action recognition[J]. Pattern Recognition, 79, 32-43(2018).

    [22] Song S J, Lan C L, Xing J L et al. An end-to-end spatio-temporal attention model for human action recognition from skeleton data[C]∥Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA. USA: AAAI, 4263-4270(2017).

    [23] Wang H S, Wang L. Modeling temporal dynamics and spatial configurations of actions using two-stream recurrent neural networks[C]∥2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 21-26, 2017, Honolulu, HI, USA. New York: IEEE, 3633-3642(2017).

    [24] Cui R, Hua G, Zhu A C et al. Hard sample mining and learning for skeleton-based human action recognition and identification[J]. IEEE Access, 7, 8245-8257(2019).
