Laser Journal, Vol. 45, Issue 4, 121 (2024)
WANG Qingqing1,2, XIN Yuelan1,2,*, SHENG Yue1,2, and XIE Qiqi1,2
Author Affiliations
  • 1 School of Computer Science, Qinghai Normal University, Xining 810001, China
  • 2 State Key Laboratory of Intelligent Information Processing and Applications in Tibetan, Xining 810001, China
    DOI: 10.14016/j.cnki.jgzz.2024.04.121
    WANG Qingqing, XIN Yuelan, SHENG Yue, XIE Qiqi. Super-resolution reconstruction of images based on multi-scale feature aggregation[J]. Laser Journal, 2024, 45(4): 121.

    Abstract

    To address the problems of limited feature diversity and missing image details in image super-resolution reconstruction, this paper proposes a new generative adversarial network (DAMFA-GAN) that produces more realistic and natural reconstructed images. In the generator, a Dynamic Attention Multi-scale Feature Aggregation (DAMFA) module, which incorporates a dynamic attention mechanism, extracts multi-scale high-frequency information from each upsampled feature of the low-resolution image to improve reconstruction quality; in the discriminator, a ConvTrans Encoder module is designed to strengthen feature extraction and improve discrimination accuracy. Experimental results on the Set5, Set14, BSD100 and Urban100 datasets show that, compared with SRGAN, DAMFA-GAN improves the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) by an average of 0.50 dB and 0.0152, respectively. The high-frequency details and visual quality of the super-resolution reconstructed images are also clearly improved.
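
    The paper itself does not publish code here, so the following is only a minimal PyTorch-style sketch of the general idea behind a "dynamic attention + multi-scale feature aggregation" block: parallel convolution branches with different receptive fields whose outputs are weighted by a content-adaptive (softmax) gate and then fused. The class name, branch kernel sizes, and gating design are hypothetical illustrations, not the authors' DAMFA implementation.

    ```python
    import torch
    import torch.nn as nn

    class DynamicAttentionMultiScaleBlock(nn.Module):
        """Illustrative multi-scale feature aggregation with a dynamic attention gate.

        Three parallel convolution branches (3x3, 5x5, 7x7) capture different
        receptive fields; a lightweight gating branch predicts per-branch weights
        from the input, and the weighted branches are fused by a 1x1 convolution.
        Hypothetical sketch only -- not the paper's DAMFA module.
        """

        def __init__(self, channels: int = 64):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Conv2d(channels, channels, kernel_size=k, padding=k // 2)
                for k in (3, 5, 7)
            ])
            # "Dynamic attention" stand-in: global pooling -> MLP -> softmax over branches.
            self.gate = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(channels, channels // 4),
                nn.ReLU(inplace=True),
                nn.Linear(channels // 4, len(self.branches)),
                nn.Softmax(dim=1),
            )
            self.fuse = nn.Conv2d(channels * len(self.branches), channels, kernel_size=1)
            self.act = nn.LeakyReLU(0.2, inplace=True)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            weights = self.gate(x)                       # (N, num_branches)
            feats = [
                branch(x) * weights[:, i].view(-1, 1, 1, 1)
                for i, branch in enumerate(self.branches)
            ]
            out = self.act(self.fuse(torch.cat(feats, dim=1)))
            return out + x                               # residual connection

    if __name__ == "__main__":
        block = DynamicAttentionMultiScaleBlock(channels=64)
        lr_features = torch.randn(1, 64, 32, 32)         # e.g. features of a low-resolution patch
        print(block(lr_features).shape)                   # torch.Size([1, 64, 32, 32])
    ```

    The softmax gate here is simply one common way to make branch weighting input-dependent; the actual dynamic attention mechanism and ConvTrans Encoder described in the abstract would need to follow the paper's own definitions.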