• Optics and Precision Engineering
  • Vol. 31, Issue 17, 2584 (2023)
Jian WEN1, Jianfei SHAO1,*, Jie LIU2, Jianlong SHAO1, Yuhang FENG1 and Rong YE1
Author Affiliations
  • 1Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
  • 2Yunnan Police Unmanned System Innovation Research Institute, Yunnan Police Officer Academy, Kunming 6503, China
    DOI: 10.37188/OPE.20233117.2584
    Jian WEN, Jianfei SHAO, Jie LIU, Jianlong SHAO, Yuhang FENG, Rong YE. Multidimensional attention mechanism and selective feature fusion for image super-resolution reconstruction[J]. Optics and Precision Engineering, 2023, 31(17): 2584

    Abstract

    To address the poor extraction of low-resolution features and the blurred edges and artifacts caused by the heavy loss of high-frequency information during image super-resolution reconstruction, this paper proposes an image super-resolution reconstruction method that combines a multidimensional attention mechanism with selective feature fusion (SKFF) in its feature extraction module. The network stacks several basic blocks with residual operations to build the feature extraction structure of the model, the core of which is a heterogeneous group convolution block for extracting image features. The symmetric group convolution block of this module performs convolutions in parallel to extract the information shared between different feature channels and then applies selective feature fusion. The complementary convolution block captures contextual information that would otherwise be missed across the spatial domain, the input–output channel dimensions, and the kernel dimension by means of omni-dimensional dynamic convolution (ODConv). The features produced by the symmetric group convolution block and the complementary convolution block are combined through a feature-enhanced residual block, which removes redundant information that would otherwise cause interference. The rationality of the model design is demonstrated through five ablation experiments. In quantitative comparisons of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) with other mainstream super-resolution reconstruction methods on the Set5, Set14, BSDS100, and Urban100 test sets, the proposed method achieves consistent gains; on Set5 at a scale factor of 3, it outperforms the CARN-M algorithm by 0.06 dB. The experimental results demonstrate that the proposed model achieves better performance indexes and visual effects.
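    The selective feature fusion step described in the abstract can be illustrated with a minimal sketch. The PyTorch-style snippet below shows an SKFF-like fusion of two parallel branch outputs, in which channel-wise attention weights produced from the summed branches select between them. The class name SelectiveFeatureFusion, the reduction ratio, and the two-branch setup are illustrative assumptions based only on the abstract, not the authors' released implementation.

    import torch
    import torch.nn as nn

    class SelectiveFeatureFusion(nn.Module):
        """Illustrative SKFF-style fusion of two branch feature maps (sketch, not the paper's code)."""
        def __init__(self, channels: int, reduction: int = 8):
            super().__init__()
            hidden = max(channels // reduction, 4)
            # Squeeze: global average pooling followed by a bottleneck 1x1 convolution.
            self.squeeze = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, hidden, kernel_size=1),
                nn.ReLU(inplace=True),
            )
            # One attention head per branch; a softmax over branches selects features.
            self.attn_a = nn.Conv2d(hidden, channels, kernel_size=1)
            self.attn_b = nn.Conv2d(hidden, channels, kernel_size=1)

        def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
            summary = self.squeeze(feat_a + feat_b)              # (N, hidden, 1, 1)
            weights = torch.softmax(
                torch.stack([self.attn_a(summary), self.attn_b(summary)], dim=0),
                dim=0,
            )                                                    # (2, N, C, 1, 1), sums to 1 per channel
            return weights[0] * feat_a + weights[1] * feat_b

    if __name__ == "__main__":
        x1 = torch.randn(1, 64, 48, 48)   # e.g., symmetric group convolution branch output
        x2 = torch.randn(1, 64, 48, 48)   # e.g., complementary convolution branch output
        fused = SelectiveFeatureFusion(64)(x1, x2)
        print(fused.shape)                # torch.Size([1, 64, 48, 48])

    The design choice being illustrated is that the fusion weights are computed from the combined branches and normalized across branches, so each channel adaptively favors whichever branch carries the more informative response.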