[1] Y HE, Q CHEN, H WANG. Research progress of laser thermography non-destructive testing (invited). Infrared and Laser Engineering, 53, 20240144(2024).
[2] Y HE, B DENG, H WANG et al. Infrared machine vision and infrared thermography with deep learning: A review. Infrared Physics & Technology, 116, 103754(2021).
[3] M A ÖZKANOĞLU, S OZER. InfraGAN: A GAN architecture to transfer visible images to infrared domain. Pattern Recognition Letters, 155, 69-76(2022).
[4] KNIAZ V V, KNYAZ V A, HLADUVKA J, et al. ThermalGAN: Multimodal color-to-thermal image translation for person re-identification in multispectral dataset[C]//Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 2018, 11134: 606-624.
[5] J HAO, L WANG. Research on circuit board fault diagnosis based on infrared temperature series. Infrared and Laser Engineering, 52, 20220492(2023).
[6] A CRESWELL, T WHITE, V DUMOULIN et al. Generative adversarial networks: An overview. IEEE Signal Processing Magazine, 35, 53-65(2018).
[7] I GOODFELLOW, J POUGET-ABADIE, M MIRZA et al. Generative adversarial networks. Communications of the ACM, 63, 139-144(2020).
[8] B LI, Y XIAN, D ZHANG. Infrared image generation algorithm based on conditional generation adversarial networks. Acta Photonica Sinica, 50, 1110004(2021).
[9] D MA, Y XIE, J SU. Visible-to-infrared image translation based on an improved conditional generative adversarial nets. Acta Photonica Sinica, 52, 0410003(2023).
[10] ZHU J, PARK T, ISOLA P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]//Proceedings of the IEEE International Conference on Computer Vision, 2017: 2223-2232.
[11] HAN D, YE T, HAN Y, et al. Agent attention: On the integration of softmax and linear attention[Z]. Ithaca: Cornell University Library, arXiv.org, 2024.
[12] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//NIPS'17: Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017: 6000-6010.
[13] Z NIU, G ZHONG, H YU. A review on the attention mechanism of deep learning. Neurocomputing, 452, 48-62(2021).
[14] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 770-778.
[15] U DEMIR, G B ÜNAL. Patch-based image inpainting with generative adversarial networks. arXiv, 1803, 07422(2018).
[16] X GUO, Y WANG, T DU et al. Contranorm: A contrastive learning perspective on oversmoothing and beyond. arXiv, 2303, 06562(2023).
[17] ZHANG R, ISOLA P, EFROS A A, et al. The unreasonable effectiveness of deep features as a perceptual metric[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 586-595.
[18] X LUO, X CHANG, X BAN. Regression and classification using extreme learning machine based on L1-norm and L2-norm. Neurocomputing, 174, 179-186(2016).
[19] B HOPKINS, L O'NEILL, F AFGHAH et al. FLAME 2: Fire detection and modeling: Aerial multi-spectral image dataset.
[20] J W DAVIS, V SHARMA. Background-subtraction using contour-based fusion of thermal and visible imagery. Computer Vision and Image Understanding, 106, 162-182(2007).
[21] Z WANG, A C BOVIK, H R SHEIKH et al. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13, 600-612(2004).
[22] SHEIKH H R, BOVIK A C. A visual information fidelity approach to video quality assessment[C]//The First International Workshop on Video Processing and Quality Metrics for Consumer Electronics, 2005: 2117-2128.