• Optics and Precision Engineering
  • Vol. 32, Issue 2, 221 (2024)
Tao ZHOU1,3, Qianru CHENG1,3,*, Xiangxiang ZHANG1,3, Qi LI1,3, and Huiling LU2
Author Affiliations
  • 1School of Computer Science and Engineering, North Minzu University, Yinchuan 750021, China
  • 2School of Medical Information and Engineering, Ningxia Medical University, Yinchuan 750004, China
  • 3Key Laboratory of Image and Graphics Intelligent Processing of State Ethnic Affairs Commission, North Minzu University, Yinchuan 750021, China
    DOI: 10.37188/OPE.20243202.0221
    Tao ZHOU, Qianru CHENG, Xiangxiang ZHANG, Qi LI, Huiling LU. PET/CT Cross-modal medical image fusion of lung tumors based on DCIF-GAN[J]. Optics and Precision Engineering, 2024, 32(2): 221

    Abstract

    Medical image fusion based on generative adversarial networks (GANs) is a research hotspot in computer-aided diagnosis. However, GAN-based fusion methods suffer from unstable training, an insufficient ability to extract local and global contextual semantic information from images, and inadequate interactive fusion. To address these problems, this paper proposes a dual-coupled interactive fusion GAN (DCIF-GAN). First, a GAN with dual generators and dual discriminators was designed; coupling between the generators and between the discriminators is realized through a weight-sharing mechanism, and interactive fusion is realized through a global self-attention mechanism. Second, a coupled CNN-Transformer feature extraction module and a feature reconstruction module were designed, improving the ability to extract local and global feature information within the same modality. Third, a cross-modal interactive fusion module (CMIFM) was designed to interactively fuse feature information across modalities. To verify the effectiveness of the proposed model, experiments were carried out on a lung tumor PET/CT medical image dataset. Compared with the best of the four competing methods, the proposed method improves average gradient, spatial frequency, structural similarity, standard deviation, peak signal-to-noise ratio, and information entropy by 1.38%, 0.39%, 29.05%, 30.23%, 0.18%, and 4.63%, respectively. The model highlights lesion-area information, and the fused images have clear structure and rich texture details.
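    The cross-modal interactive fusion described in the abstract can be illustrated with a minimal numpy sketch of bidirectional cross-attention between PET and CT feature maps. This is not the paper's CMIFM (which operates on learned deep features inside the network, with trained projections); all names, shapes, and the averaging step are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(feat_q, feat_kv, w_q, w_k, w_v):
    # Queries come from one modality, keys/values from the other,
    # so each modality attends to complementary information.
    q = feat_q @ w_q                      # (n, d)
    k = feat_kv @ w_k                     # (n, d)
    v = feat_kv @ w_v                     # (n, d)
    scores = q @ k.T / np.sqrt(k.shape[-1])
    return softmax(scores, axis=-1) @ v   # (n, d)

def interactive_fuse(pet_feat, ct_feat, w_q, w_k, w_v):
    # Bidirectional cross-attention, then a simple average of the
    # two attended feature maps (an assumed fusion rule).
    pet_att = cross_modal_attention(pet_feat, ct_feat, w_q, w_k, w_v)
    ct_att = cross_modal_attention(ct_feat, pet_feat, w_q, w_k, w_v)
    return 0.5 * (pet_att + ct_att)

# Usage with random stand-in features (16 spatial positions, 8 channels).
rng = np.random.default_rng(0)
pet = rng.normal(size=(16, 8))
ct = rng.normal(size=(16, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
fused = interactive_fuse(pet, ct, w_q, w_k, w_v)
```

    In the real model the projection weights are learned and shared between the coupled branches; here they are fixed random matrices purely to show the data flow.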
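    The abstract reports six fusion-quality metrics. The reference-free ones have standard definitions that can be sketched in a few lines of numpy; this is a generic sketch of those standard formulas, not the paper's evaluation code (SSIM and PSNR additionally require the source images and are omitted).

```python
import numpy as np

def average_gradient(img):
    # AG: mean magnitude of the local gradient; larger = sharper detail.
    gx = np.diff(img, axis=1)[:-1, :]   # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]   # vertical differences (aligned shapes)
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def spatial_frequency(img):
    # SF: combines row-frequency and column-frequency energy.
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

def information_entropy(img, bins=256):
    # EN: Shannon entropy of the gray-level histogram, in bits.
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Usage: a flat image carries no gradient or histogram information,
# so all three metrics collapse toward zero.
flat = np.full((32, 32), 128.0)
metrics = (average_gradient(flat), spatial_frequency(flat),
           information_entropy(flat))
```

    Standard deviation, the fourth reference-free metric in the abstract, is simply `img.std()` over the fused image.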