• Infrared Technology
  • Vol. 46, Issue 5, 608 (2024)
Xiwen DING1, Hongchang CHENG1,2,*, Yue SU1, Lei YAN1,2, Ye YANG1,2 and Xiaogang DANG1,2
Author Affiliations
  • 1[in Chinese]
  • 2[in Chinese]
    DING Xiwen, CHENG Hongchang, SU Yue, YAN Lei, YANG Ye, DANG Xiaogang. DCGAN-Based Generation of Ultraviolet Image Intensifier Field-of-View Defect Images[J]. Infrared Technology, 2024, 46(5): 608.
    References

    [1] TAN Zhi. Target Detection and Recognition Technology Based on Deep Learning[M]. Beijing: Chemical Industry Press, 2021.

    [2] GONG Jiulu, CHEN Derong, WANG Zepeng. Target Detection and Recognition Technology[M]. Beijing: Beijing Institute of Technology Press, 2022.

    [3] Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial networks[J]. Communications of the ACM, 2020, 63(11): 139-144.

    [4] ZHANG Zhuo, LEI Yan, MAO Xiaoguang, et al. Data augmentation method of defect location model domain based on adversarial generative network[J/OL]. Journal of Software: 1-18 [2023-10-29]. http://www.jos.org.cn/jos/article/abstract/6961?st=search.

    [5] YUAN Peisen, WU Maosheng, ZHAI Zhaoyu, et al. Study on phenotypic data generation of mushroom based on GAN network[J]. Journal of Agricultural Machinery, 2019, 50(12): 231-239.

    [6] Doman K, Konishi T, Mekada Y. Lesion image synthesis using DCGANs for metastatic liver cancer detection[J]. Advances in Experimental Medicine and Biology, 2020, 1213: 95-106.

    [7] CHEN Hao. Research on Quantitative Stock Selection Strategy Based on Generative Adversarial Network GAN[D]. Guangzhou: Guangzhou University, 2023.

    [8] HUANG Yueyue. Research on Underwater Image Enhancement Method Based on GAN Network [D]. Xi'an: Shaanxi University of Science and Technology, 2023.

    [9] LIN Benwang. Research on Facial Expression Generation Method Based on Generative Adversarial Networks[D]. Beijing: Beijing Jianzhu University, 2023.

    [10] YE Na. Research on Cross-Modal Perception Technology of Robots Based on Generative Adversarial Networks[D]. Nanchang: Nanchang University, 2023.

    [11] Mirza M, Osindero S. Conditional generative adversarial nets[J]. arXiv preprint arXiv:1411.1784, 2014.

    [12] Radford A, Metz L, Chintala S. Unsupervised representation learning with deep convolutional generative adversarial networks[J]. arXiv preprint arXiv:1511.06434, 2015.

    [13] WU Haosheng, JIANG Pei, WANG Zuoxue, et al. Mineral flotation purity prediction based on Wasserstein GAN data enhancement[J/OL]. Journal of Chongqing University: 1-12 [2023-10-29]. http://kns.cnki.net/kcms/detail/50.1044.N.20230523.1159.002.html.

    [14] WANG Yumeng, SUN Changhai, ZHAO Shuchun, et al. Partial discharge defect identification of cable intermediate joints based on improved Wasserstein generative adversarial network and deep residual network[J]. Science Technology and Engineering, 2022, 22(35): 15650-15658.

    [15] Woo S, Park J, Lee J Y, et al. CBAM: Convolutional block attention module[C]//Proceedings of the European Conference on Computer Vision (ECCV), 2018: 3-19.

    [16] Christou C, Eliophotou-Menon M, Philippou G. Teachers' concerns regarding the adoption of a new mathematics curriculum: an application of CBAM[J]. Educational Studies in Mathematics, 2004, 57: 157-176.

    [17] WU Lijun, CHEN Shidong, CHEN Zhicong. Abnormal behavior detection based on attention-generative adversarial networks[J]. Microelectronics and Computers, 2022, 39(8): 31-38.

    [18] YANG Qi. Research on Defect Detection Technology of Ultraviolet Image Intensifier[D]. Nanjing: Nanjing University of Science and Technology, 2011.

    [19] ZHAO Qingbo. Research on Radiation Gain and Field Defect Test Technology of Wide Spectrum Image Intensifier[D]. Nanjing: Nanjing University of Science and Technology, 2008.

    [20] WANG Jihui, JIN Weiqi, WANG Xia, et al. Flaw inspection method for image tube based on image processing[J]. Optical Technology, 2005(3): 463-464, 467.

    [21] XU Zhengguang, WANG Xia, WANG Jihui, et al. Research of an approach to detect field defects of image intensifier[J]. Application Optics, 2005(3): 12-15.

    [22] WANG Kunfeng, GOU Chao, DUAN Yanjie, et al. Research progress and prospect of generative adversarial network GAN[J]. Acta Automatica Sinica, 2017, 43(3): 321-332.

    [23] CHEN Xinyu. Research on Image Generation Method Based on Generative Adversarial Networks[D]. Xiangtan: Xiangtan University, 2020.

    [24] WU Xiaoyan, QIAN Zhenkun. A face recovery method based on deep convolutional generative adversarial networks[J]. Computer Applications and Software, 2020, 37(8): 207-212.

    [25] ZHU Xianshen. Research on Image Application Under Wasserstein Distance[D]. Kunming: Yunnan Normal University, 2023.

    [26] CAI Zihao, JIANG Yi, ZHANG Laiping. An evaluation method of grid quality based on convolutional attention network[J]. Journal of Sichuan University (Natural Science Edition), 2023, 60(5): 139-148.

    [27] ZHAO Yaqin, SONG Yuqing, WU Han, et al. High-precision gesture recognition based on DenseNet and convolutional attention module[J]. Journal of Electronics and Informatics, 2024, 46(3): 967-976.

    [28] Shmelkov K, Schmid C, Alahari K. How good is my GAN?[C]// Proceedings of the European Conference on Computer Vision (ECCV). 2018: 213-229.

    [29] Korhonen J, You J. Peak signal-to-noise ratio revisited: Is simple beautiful?[C]//Fourth International Workshop on Quality of Multimedia Experience. IEEE, 2012: 37-38.

    [30] Wang Z, Bovik A C, Sheikh H R, et al. Image quality assessment: from error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612.

    [31] Zhang R, Isola P, Efros A A, et al. The unreasonable effectiveness of deep features as a perceptual metric[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 586-595.
