• Laser & Optoelectronics Progress
  • Vol. 57, Issue 18, 181006 (2020)
Jianjun Li, Yue Sun*, and Baohua Zhang
Author Affiliations
  • School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, Inner Mongolia 014010, China
    DOI: 10.3788/LOP57.181006
    Jianjun Li, Yue Sun, Baohua Zhang. Interactive Behavior Recognition Based on Sparse Coding Feature Fusion[J]. Laser & Optoelectronics Progress, 2020, 57(18): 181006
    References

    [1] Aggarwal J K, Ryoo M S. Human activity analysis: a review[J]. ACM Computing Surveys, 43, 16(2011).

    [2] Chen C H, Zhang J, Liu F. Sparse representation method for human interaction[J]. Pattern Recognition and Artificial Intelligence, 29, 464-471(2016).

    [3] Xu P C, Liu B Y. Interactive behavior recognition based on image enhancement and deep CNN learning[J]. Communications Technology, 52, 701-706(2019).

    [4] Burghouts G J, Schutte K. Spatio-temporal layout of human actions for improved bag-of-words action detection[J]. Pattern Recognition Letters, 34, 1861-1869(2013).

    [5] Zhang B, Rota P, Conci N et al. Human interaction recognition in the wild: analyzing trajectory clustering from multiple-instance-learning perspective. [C]∥2015 IEEE International Conference on Multimedia and Expo (ICME), June 29-July 3, 2015, Turin, Italy. New York: IEEE, 1-6(2015).

    [6] Kong Y, Jia Y D, Fu Y. Interactive phrases: semantic descriptions for human interaction recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36, 1775-1788(2014).

    [7] Wang J, Zhou S C, Xia L M. Human interaction recognition based on sparse representation of feature covariance matrices[J]. Journal of Central South University, 25, 304-314(2018).

    [8] Ijjina E P, Chalavadi K M. Human action recognition in RGB-D videos using motion sequence information and deep learning[J]. Pattern Recognition, 72, 504-516(2017).

    [9] Xu M M. Study on feature extraction and classification for color texture image[D]. Guangzhou: South China University of Technology(2016).

    [10] Zhang L. Research on texture image feature extraction and classification based on improved LBP[D]. Harbin: Harbin Engineering University(2019).

    [11] Ojala T, Pietikainen M, Maenpaa T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24, 971-987(2002).

    [12] Fan X, Fei S W, Chu Y B. Improved algorithm for image edge extraction based on Canny operator[J]. Automation & Instrumentation, 34, 41-44(2019).

    [13] Canny J. A computational approach to edge detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8, 679-698(1986).

    [14] Liu D Y. Research on detection of abnormal behavior in classroom monitoring video[D]. Chengdu: University of Electronic Science and Technology of China(2018).

    [15] Yang J C, Yu K, Gong Y H et al. Linear spatial pyramid matching using sparse coding for image classification. [C]∥2009 IEEE Conference on Computer Vision and Pattern Recognition, June 20-25, 2009, Miami, FL, USA. New York: IEEE, 1794-1801(2009).

    [16] Sung J, Ponce C, Selman B et al. Unstructured human activity detection from RGBD images. [C]∥2012 IEEE International Conference on Robotics and Automation, May 14-18, 2012, Saint Paul, MN, USA. New York: IEEE, 842-849(2012).

    [17] Taha A, Zayed H H, Khalifa M et al. Skeleton-based human activity recognition for video surveillance[J]. International Journal of Scientific and Engineering Research, 6, 993-1004(2015).

    [18] Wang Y X, Zeng Y, Li X et al. Fusing interactive information and energy features for 3D complicated human activity recognition[J]. Journal of Chinese Computer Systems, 39, 1828-1834(2018).

    [19] Wang J, Liu Z C, Wu Y et al. Mining actionlet ensemble for action recognition with depth cameras. [C]∥2012 IEEE Conference on Computer Vision and Pattern Recognition, June 16-21, 2012, Providence, RI, USA. New York: IEEE, 1290-1297(2012).

    [20] Yang X D, Zhang C Y, Tian Y L. Recognizing actions using depth motion maps-based histograms of oriented gradients. [C]∥Proceedings of the 20th ACM International Conference on Multimedia-MM'12, October, 2012, Nara, Japan. New York: ACM, 1057-1060(2012).

    [21] Oreifej O, Liu Z C. HON4D: histogram of oriented 4D normals for activity recognition from depth sequences. [C]∥2013 IEEE Conference on Computer Vision and Pattern Recognition, June 23-28, 2013, Portland, OR, USA. New York: IEEE, 716-723(2013).

    [22] Ji Y L, Cheng H, Zheng Y L et al. Learning contrastive feature distribution model for interaction recognition[J]. Journal of Visual Communication and Image Representation, 33, 340-349(2015).

    [23] Lin L, Wang K Z, Zuo W M et al. A deep structured model with radius-margin bound for 3D human activity recognition[J]. International Journal of Computer Vision, 118, 256-273(2016).

    [24] Jin Z Z, Cao J T, Ji X F. Research on human interaction recognition algorithm based on multi-source information fusion[J]. Computer Technology and Development, 28, 32-36, 43(2018).

    [25] Li J, Mao X, Chen L et al. Human interaction recognition fusing multiple features of depth sequences[J]. IET Computer Vision, 11, 560-566(2017).
