Laser & Optoelectronics Progress, Vol. 59, Issue 18, 1810010 (2022)
Yuehua Yu, Haibo Zhang, Xin Li, Jiaojiao Kou, Kang Li, Guohua Geng, and Mingquan Zhou*
Author Affiliations
  • College of Information Science and Technology, Northwest University, Xi’an 710127, Shaanxi, China
    DOI: 10.3788/LOP202259.1810010
    Yuehua Yu, Haibo Zhang, Xin Li, Jiaojiao Kou, Kang Li, Guohua Geng, Mingquan Zhou. Data Enhanced Depth Classification Model for Terracotta Warriors Fragments[J]. Laser & Optoelectronics Progress, 2022, 59(18): 1810010
    References

    [1] Tian J. Study on the Terracotta Warriors of Qin Mausoleum from the Perspective of World Heritage[J]. Emperor Qin Shihuang Mausoleum Museum, 1-17(2018).

    [2] Li G, Tian Y Q, Xiao J Y et al. Review of archaeological discoveries and research of the Qin-Han periods in Shaanxi (2008-2017)[J]. Archaeology and Cultural Relics, 229, 66-110(2018).

    [3] Lan D S. Research on adhesive materials for restoration of painted pottery figurines excavated from Pit No.1 of the Qin Terracotta Warriors and Horses[J]. Sciences of Conservation and Archaeology, 31, 49-59(2019).

    [4] Tang T. Research on the application of three-dimensional technology in material cultural heritage “restoration”[D](2018).

    [5] Zhao S Z, Hou M L, Li A Q et al. A classification technology of terra-cotta warriors’ fragments with multiple features[J]. Geomatics World, 26, 14-21(2019).

    [6] Li W M. Design and implementation of virtual restoration system for damaged cultural relics[D](2020).

    [7] Wei Y. Research on image classification and retrieval technology of terra-cotta warriors fragments based on multi-feature[D](2018).

    [8] Kang X Y. Research on method of computer aided 3D models of terra-cotta warriors fragments classification[D](2015).

    [9] Wang N. Research on multi-kernel semi-supervised classification of manifold regularization based on sparse graphs[D](2018).

    [10] He G. Research on semi-supervised collaborative classification of terra-cotta warriors fragments based on graph[D](2019).

    [11] Wang Y Y. Research on the classification algorithm of terracotta warrior fragments based on the optimization model of convolutional neural network[D](2019).

    [12] Wu Z W. Application research of convolution neural network in image classification[D](2015).

    [13] LeCun Y, Bottou L, Bengio Y et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 86, 2278-2324(1998).

    [14] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 60, 84-90(2017).

    [15] Zeiler M D, Fergus R. Visualizing and understanding convolutional networks[M]. Fleet D, Pajdla T, Schiele B, et al. Computer vision-ECCV 2014. Lecture notes in computer science, 8689, 818-833(2014).

    [16] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[EB/OL]. https://arxiv.org/abs/1409.1556

    [17] Szegedy C, Liu W, Jia Y Q et al. Going deeper with convolutions[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1-9(2015).

    [18] He K M, Zhang X Y, Ren S Q et al. Deep residual learning for image recognition[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770-778(2016).

    [19] Hu J, Shen L, Albanie S et al. Squeeze-and-excitation networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42, 2011-2023(2020).

    [20] Huang G, Liu Z, van der Maaten L et al. Densely connected convolutional networks[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2261-2269(2017).

    [21] Ma Y J, Liu P P. Convolutional neural network based on DenseNet evolution for image classification algorithm[J]. Laser & Optoelectronics Progress, 57, 241001(2020).

    [22] Zhang X D, Wang T J, Yang Y. Classification of small-sized sample hyperspectral images based on multi-scale residual network[J]. Laser & Optoelectronics Progress, 57, 162801(2020).

    [23] Feng F, Wang S T, Zhang J et al. Hyperspectral images classification based on multi-feature fusion and hybrid convolutional neural networks[J]. Laser & Optoelectronics Progress, 58, 0810010(2021).

    [24] Mei S H, Jiang R Q, Ji J Y et al. Invariant feature extraction for image classification via multi-channel convolutional neural network[C], 491-495(2017).

    [25] Park J, Woo S, Lee J Y et al. BAM: bottleneck attention module[EB/OL]. https://arxiv.org/abs/1807.06514

    [26] Woo S, Park J, Lee J Y et al. CBAM: convolutional block attention module[M]. Ferrari V, Hebert M, Sminchisescu C, et al. Computer Vision-ECCV 2018. Lecture notes in computer science, 11211, 3-19(2018).

    [27] Wang L Q, Chu X L, Qin Z C et al. Hierarchical attention for image captioning with multi-level image representations[J]. Journal of China Academy of Electronics and Information Technology, 15, 63-68(2020).

    [28] Liu J M, Xie W J, Huang H et al. Spatial and channel attention mechanism method for object tracking[J]. Journal of Electronics & Information Technology, 43, 2569-2576(2021).

    [29] Srivastava N, Hinton G E, Krizhevsky A et al. Dropout: a simple way to prevent neural networks from overfitting[J]. Journal of Machine Learning Research, 15, 1929-1958(2014).

    [30] Zhang H Y, Cisse M, Dauphin Y N et al. mixup: beyond empirical risk minimization[EB/OL]. https://arxiv.org/abs/1710.09412

    [31] DeVries T, Taylor G W. Improved regularization of convolutional neural networks with cutout[EB/OL]. https://arxiv.org/abs/1708.04552

    [32] Yun S, Han D, Chun S et al. CutMix: regularization strategy to train strong classifiers with localizable features[C]. Proceedings of the IEEE/CVF International Conference on Computer Vision, 6022-6031(2019).

    [33] Mirza M, Osindero S. Conditional generative adversarial nets[EB/OL]. https://arxiv.org/abs/1411.1784

    [34] Goodfellow I J, Pouget-Abadie J, Mirza M et al. Generative adversarial networks[J]. Communications of the ACM, 63, 139-144(2020).
