[1] ZHANG H Y, CISSE M, DAUPHIN Y N, et al. mixup: beyond empirical risk minimization[R]. Los Alamos: arXiv Preprint, 2017: arXiv:1710.09412.
[2] HENDRYCKS D, MU N, CUBUK E D, et al. AugMix: a simple data processing method to improve robustness and uncertainty[C]//The 8th International Conference on Learning Representations. Addis Ababa: [s.n.], 2020: 1-15.
[3] HARRIS E, MARCU A, PAINTER M, et al. Understanding and enhancing mixed sample data augmentation[R]. Los Alamos: arXiv Preprint, 2020: arXiv:2002.12047.
[4] BOCHKOVSKIY A, WANG C Y, LIAO H Y M. YOLOv4: optimal speed and accuracy of object detection[R]. Los Alamos: arXiv Preprint, 2020: arXiv:2004.10934.
[5] YUN S, HAN D, OH S J, et al. CutMix: regularization strategy to train strong classifiers with localizable features[C]//Proceedings of the IEEE International Conference on Computer Vision (ICCV). Seoul: IEEE, 2019: 6022-6031.
[6] GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]//Advances in Neural Information Processing Systems. Montreal: NIPS Foundation, 2014: 2672-2680.
[7] MAO X, LI Q, XIE H, et al. Least squares generative adversarial networks[C]//Proceedings of the IEEE International Conference on Computer Vision. Venice: IEEE, 2017: 2794-2802.
[8] BROCK A, DONAHUE J, SIMONYAN K. Large scale GAN training for high fidelity natural image synthesis[R]. Los Alamos: arXiv Preprint, 2018: arXiv:1809.11096.
[9] SHAHAM T R, DEKEL T, MICHAELI T. SinGAN: learning a generative model from a single natural image[C]//Proceedings of the IEEE International Conference on Computer Vision. Seoul: IEEE, 2019: 4569-4579.
[10] ISOLA P, ZHU J Y, EFROS A A. Image-to-image translation with conditional adversarial networks[C]//IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE, 2017: 5967-5976.
[11] ZHU J Y, PARK T, ISOLA P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]//Proceedings of the IEEE International Conference on Computer Vision (ICCV). Venice: IEEE, 2017: 2242-2251.
[12] KIM T, CHA M, KIM H, et al. Learning to discover cross-domain relations with generative adversarial networks[C]//Proceedings of the 34th International Conference on Machine Learning. Sydney: [s.n.], 2017: 1857-1865.
[13] YI Z, ZHANG H, TAN P, et al. DualGAN: unsupervised dual learning for image-to-image translation[C]//IEEE International Conference on Computer Vision. Venice: IEEE, 2017: 2868-2876.
[14] CHOI Y, CHOI M, KIM M, et al. StarGAN: unified generative adversarial networks for multi-domain image-to-image translation[C]//IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 8789-8797.
[17] REN S Q, HE K M, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[C]//Advances in Neural Information Processing Systems. Montreal: [s.n.], 2015: 91-99.
[18] DUTA I C, LIU L, ZHU F, et al. Pyramidal convolution: rethinking convolutional neural networks for visual recognition[R]. Los Alamos: arXiv Preprint, 2020: arXiv:2006.11538.
[20] EVERINGHAM M, VAN GOOL L, WILLIAMS C K I, et al. The PASCAL visual object classes (VOC) challenge[J]. International Journal of Computer Vision, 2010, 88(2): 303-338.
[22] ZHANG L, ZHANG L, MOU X Q, et al. FSIM: a feature similarity index for image quality assessment[J]. IEEE Transactions on Image Processing, 2011, 20(8): 2378-2386.
[23] HORE A, ZIOU D. Image quality metrics: PSNR vs. SSIM[C]//The 20th International Conference on Pattern Recognition. Istanbul: IEEE, 2010: 2366-2369.