[5] LECUN Y, BOTTOU L, BENGIO Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998, 86(11): 2278-2324.
[6] HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV: IEEE, 2016: 770-778.
[7] SZEGEDY C, LIU W, JIA Y Q, et al. Going deeper with convolutions[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Boston, MA: IEEE, 2015: 1-9.
[8] SZEGEDY C, VANHOUCKE V, IOFFE S, et al. Rethinking the inception architecture for computer vision[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV: IEEE, 2016: 2818-2826.
[9] SZEGEDY C, IOFFE S, VANHOUCKE V, et al. Inception-v4, Inception-ResNet and the impact of residual connections on learning[EB/OL]. (2016-08-23)[2021-08-20]. https://arxiv.org/abs/1602.07261.
[10] WOO S H, PARK J C, LEE J Y, et al. CBAM: convolutional block attention module[EB/OL]. (2018-07-18)[2021-08-20]. https://arxiv.org/abs/1807.06521.
[11] WANG Q L, WU B G, ZHU P F, et al. ECA-Net: efficient channel attention for deep convolutional neural networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, WA: IEEE, 2020: 11531-11539.
[12] HU J, SHEN L, ALBANIE S, et al. Squeeze-and-excitation networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(8): 2011-2023.
[13] ZHAO M H, ZHONG S S, FU X Y, et al. Deep residual shrinkage networks for fault diagnosis[J]. IEEE Transactions on Industrial Informatics, 2020, 16(7): 4681-4690.
[15] KEYDEL E R, LEE S W, MOORE J. MSTAR extended operating conditions: a tutorial[C]//Proceedings of SPIE: Algorithms for Synthetic Aperture Radar Imagery III. Orlando, FL: SPIE, 1996: 228-242.