• Laser & Optoelectronics Progress
  • Vol. 55, Issue 1, 11010 (2018)
Zan Baofeng1,*, Kong Jun1,2, and Jiang Min1
Author Affiliations
  • 1School of Internet of Things Engineering, Jiangnan University, Wuxi, Jiangsu 214122, China
  • 2College of Electrical Engineering, Xinjiang University, Urumqi, Xinjiang 830047, China
    DOI: 10.3788/LOP55.011010
    Zan Baofeng, Kong Jun, Jiang Min. Human Action Recognition Based on Discriminative Collaborative Representation Classifier[J]. Laser & Optoelectronics Progress, 2018, 55(1): 11010
    References

    [1] Chen C, Jafari R, Kehtarnavaz N. Improving human action recognition using fusion of depth camera and inertial sensors[J]. IEEE Transactions on Human-Machine Systems, 45, 51-61(2015). http://ieeexplore.ieee.org/document/6934998/

    [2] Chen C, Kehtarnavaz N, Jafari R. A medication adherence monitoring system for pill bottles based on a wearable inertial sensor[C]. Proceedings of the 36th International Conference of the IEEE Engineering in Medicine and Biology Society, 4135-4138(2014).

    [3] Zhang X G, Liu C X, Zuo J Q. Small scale crowd behavior recognition based on causality network analysis[J]. Acta Optica Sinica, 38, 0815001(2015).

    [4] Cai J X, Feng G C, Tang X et al. Human action recognition based on local image contour and random forest[J]. Acta Optica Sinica, 34, 1015006(2014).

    [5] Cai J X, Feng G C, Tang X et al. Human action recognition by learning pose dictionary[J]. Acta Optica Sinica, 34, 1215002(2014).

    [6] Yang X, Zhang C, Tian Y. Recognizing actions using depth motion maps-based histograms of oriented gradients[C]. Proceedings of the 20th ACM International Conference on Multimedia, 1057-1060(2012).

    [7] Chen C, Liu K, Kehtarnavaz N. Real-time human action recognition based on depth motion maps[J]. Journal of Real-Time Image Processing, 12, 155-163(2013). http://link.springer.com/article/10.1007/s11554-013-0370-1

    [8] Wright J, Yang A Y, Sastry S S et al. Robust face recognition via sparse representation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31, 210-227(2009). http://ieeexplore.ieee.org/document/4483511/

    [9] Zhang L, Yang M, Feng X. Sparse representation or collaborative representation: which helps face recognition?[C]. IEEE International Conference on Computer Vision, 471-478(2011).

    [10] Akhtar N, Shafait F, Mian A. Sparseness helps: sparsity augmented collaborative representation for classification[C]. IEEE Conference on Computer Vision and Pattern Recognition, 1-10(2015).

    [11] Zhang H, Wang F, Chen Y et al. Sample pair based sparse representation classification for face recognition[J]. Expert Systems with Applications, 45, 352-358(2016). http://www.sciencedirect.com/science/article/pii/S0957417415006879

    [12] Cui M, Prasad S. Class-dependent sparse representation classifier for robust hyperspectral image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 53, 2683-2695(2015). http://ieeexplore.ieee.org/document/6957565/

    [13] Li W, Zhang Z, Liu Z. Action recognition based on a bag of 3D points[C]. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 9-14(2010).

    [14] Vieira A W, Nascimento E R, Oliveira G L et al. STOP: space-time occupancy patterns for 3D action recognition from depth map sequences[M]. Heidelberg: Springer, 252-259(2012).

    [15] Yang X, Tian Y. EigenJoints-based action recognition using Naive-Bayes-Nearest-Neighbor[C]. IEEE Conference on Computer Vision and Pattern Recognition Workshops, 14-19(2012).

    [16] Oreifej O, Liu Z. HON4D: histogram of oriented 4D normals for activity recognition from depth sequences[C]. IEEE Conference on Computer Vision and Pattern Recognition, 716-723(2013).

    [17] Rahmani H, Mahmood A, Huynh D Q et al. Real-time action recognition using histograms of depth gradients and random decision forests[C]. IEEE Winter Conference on Applications of Computer Vision, 626-633(2014).

    [18] Zhang C, Tian Y. Histogram of 3D facets: a depth descriptor for human action and hand gesture recognition[J]. Computer Vision and Image Understanding, 139, 29-39(2015). http://www.sciencedirect.com/science/article/pii/S1077314215001216

    [19] Evangelidis G, Singh G, Horaud R. Skeletal quads: human action recognition using joint quadruples[C]. International Conference on Pattern Recognition, 4513-4518(2014).

    [20] Chen C, Jafari R, Kehtarnavaz N. UTD-MHAD: A multimodal dataset for human action recognition utilizing a depth camera and a wearable inertial sensor[C]. IEEE International Conference on Image Processing, 168-172(2015).
