Opto-Electronic Engineering, Vol. 45, Issue 12, 180206 (2018)
Guo Zhicheng1,2,*, Dang Jianwu1,2, Wang Yangping1,2, and Jin Jing1,2
Author Affiliations
  • 1[in Chinese]
  • 2[in Chinese]
    DOI: 10.12086/oee.2018.180206
    Guo Zhicheng, Dang Jianwu, Wang Yangping, Jin Jing. Background modeling method based on multi-feature fusion[J]. Opto-Electronic Engineering, 2018, 45(12): 180206
