• Laser & Optoelectronics Progress
  • Vol. 56, Issue 16, 161004 (2019)
Xiaoli Yang1, Suzhen Lin1,*, Xiaofei Lu2, Lifang Wang1, Dawei Li1, and Bin Wang1
Author Affiliations
  • 1 School of Big Data, North University of China, Taiyuan, Shanxi 030051, China
  • 2 Jiuquan Satellite Launch Center, Jiuquan, Gansu 735000, China
    DOI: 10.3788/LOP56.161004
    Xiaoli Yang, Suzhen Lin, Xiaofei Lu, Lifang Wang, Dawei Li, Bin Wang. Multimodal Image Fusion Based on Generative Adversarial Networks[J]. Laser & Optoelectronics Progress, 2019, 56(16): 161004

    Abstract

    This study proposes a new network based on generative adversarial networks (GANs) to achieve end-to-end adaptive image fusion, thereby avoiding the difficulty of designing multiscale geometric tools and fusion rules in multimodal image fusion. First, the multimodal source images are input synchronously into the generative network, whose structure is built on the residual convolutional neural network proposed herein; this network produces the fused image through adaptive learning. Second, the fused image and the label image are fed to the discriminator network, and the generator is gradually optimized through the discriminator's feature representation and classification decisions. The final fused image is obtained at the dynamic equilibrium between the generator and discriminator. Compared with existing representative fusion methods, the proposed algorithm yields cleaner, artifact-free fusion results with better visual quality.
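    The abstract does not give the exact loss terms, but the generator/discriminator optimization it describes typically follows the standard adversarial objectives. The sketch below shows those objectives in scalar form, where `d_label` and `d_fused` are hypothetical discriminator scores (probabilities that an image is a real label image) for a label image and a generated fused image; this is an illustration of the general GAN training signal, not the paper's specific formulation.

    ```python
    import math

    def discriminator_loss(d_label, d_fused):
        """Binary cross-entropy for the discriminator: it is rewarded for
        scoring label (reference) images near 1 and fused images near 0."""
        return -(math.log(d_label) + math.log(1.0 - d_fused))

    def generator_loss(d_fused):
        """Non-saturating generator loss: the generator is rewarded when
        the discriminator scores its fused image near 1 (i.e., it can no
        longer tell the fused image from a label image)."""
        return -math.log(d_fused)
    ```

    Early in training, a weak generator yields a low `d_fused` and hence a large generator loss; as the two networks approach the dynamic equilibrium described above, `d_fused` drifts toward 0.5 and neither player can improve unilaterally.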