• Laser & Optoelectronics Progress
  • Vol. 56, Issue 16, 161004 (2019)
Xiaoli Yang1, Suzhen Lin1,*, Xiaofei Lu2, Lifang Wang1, Dawei Li1, and Bin Wang1
Author Affiliations
  • 1 School of Big Data, North University of China, Taiyuan, Shanxi 030051, China
  • 2 Jiuquan Satellite Launch Center, Jiuquan, Gansu 735000, China
    DOI: 10.3788/LOP56.161004
    Xiaoli Yang, Suzhen Lin, Xiaofei Lu, Lifang Wang, Dawei Li, Bin Wang. Multimodal Image Fusion Based on Generative Adversarial Networks[J]. Laser & Optoelectronics Progress, 2019, 56(16): 161004.
    References

    [1] Ma J Y, Ma Y, Li C. Infrared and visible image fusion methods and applications: a survey[J]. Information Fusion, 45, 153-178(2019).

    [2] Ranchin T, Wald L. The wavelet transform for the analysis of remotely sensed images[J]. International Journal of Remote Sensing, 14, 615-619(1993).

    [3] Kingsbury N. A dual-tree complex wavelet transform with improved orthogonality and symmetry properties. [C]∥Proceedings of the 2000 International Conference on Image Processing (Cat. No. 00CH37101), September 10-13, 2000, Vancouver, BC, Canada. New York: IEEE, 375-378(2000).

    [4] Liu Y, Liu S P, Wang Z F. A general framework for image fusion based on multi-scale transform and sparse representation[J]. Information Fusion, 24, 147-164(2015).

    [5] Yi W, Zeng Y, Yuan Z. Fusion of GF-3 SAR and optical images based on the nonsubsampled contourlet transform[J]. Acta Optica Sinica, 38, 1110002(2018).

    [6] Hu J W, Li S T. The multiscale directional bilateral filter and its application to multisensor image fusion[J]. Information Fusion, 13, 196-206(2012).

    [7] Ding W S, Bi D Y, He L Y et al. Fusion of infrared and visible images based on shearlet transform and neighborhood structure features[J]. Acta Optica Sinica, 37, 1010002(2017).

    [8] Zhang Q, Liu Y, Blum R S et al. Sparse representation based multi-sensor image fusion for multi-focus and multi-modality images: a review[J]. Information Fusion, 40, 57-75(2018).

    [9] Zhu D R, Xu L, Wang F B et al. Multi-focus image fusion algorithm based on fast finite shearlet transform and guided filter[J]. Laser & Optoelectronics Progress, 55, 011001(2018).

    [10] Liu Y, Chen X, Wang Z F et al. Deep learning for pixel-level image fusion: recent advances and future prospects[J]. Information Fusion, 42, 158-173(2018).

    [11] Liu Y, Chen X, Peng H et al. Multi-focus image fusion with a deep convolutional neural network[J]. Information Fusion, 36, 191-207(2017).

    [12] Lin S Z, Han Z. Images fusion based on deep stack convolutional neural network[J]. Chinese Journal of Computers, 40, 2506-2518(2017).

    [13] Li H, Wu X J, Kittler J. Infrared and visible image fusion using a deep learning framework. [C]∥2018 24th International Conference on Pattern Recognition (ICPR), August 20-24, 2018, Beijing, China. New York: IEEE, 2705-2710(2018).

    [14] Liu Y, Chen X, Ward R K et al. Image fusion with convolutional sparse representation[J]. IEEE Signal Processing Letters, 23, 1882-1886(2016).

    [15] He K M, Zhang X Y, Ren S Q et al. Deep residual learning for image recognition. [C]∥2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 27-30, 2016, Las Vegas, NV, USA. New York: IEEE, 770-778(2016).

    [16] Jiang X H, Pang Y W, Li X L et al. Deep neural networks with elastic rectified linear units for object recognition[J]. Neurocomputing, 275, 1132-1139(2018).

    [17] Cai J R, Gu S H, Zhang L. Learning a deep single image contrast enhancer from multi-exposure images[J]. IEEE Transactions on Image Processing, 27, 2049-2062(2018).

    [18] Zhang H, Dana K. Multi-style generative network for real-time transfer[M]∥Leal-Taixé L, Roth S. Computer Vision - ECCV 2018 Workshops. Lecture Notes in Computer Science. Cham: Springer, 11132, 349-365(2019).

    [19] Goodfellow I J, Pouget-Abadie J, Mirza M et al. Generative adversarial nets. [C]∥Proceedings of the 27th International Conference on Neural Information Processing Systems (NIPS'14), December 8-13, 2014, Montreal, Canada. 2, 2672-2680(2014).

    [20] Radford A, Metz L, Chintala S. Unsupervised representation learning with deep convolutional generative adversarial networks[EB/OL]. (2016-01-07)[2018-12-25]. https://arxiv.org/abs/1511.06434.

    [21] Mao X D, Li Q, Xie H R et al. Least squares generative adversarial networks. [C]∥2017 IEEE International Conference on Computer Vision (ICCV), October 22-29, 2017, Venice, Italy. New York: IEEE, 2813-2821(2017).

    [22] Ledig C, Theis L, Huszár F et al. Photo-realistic single image super-resolution using a generative adversarial network. [C]∥2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 21-26, 2017, Honolulu, HI, USA. New York: IEEE, 105-114(2017).

    [23] Li Y, Wang N, Shi J et al. Adaptive Batch Normalization for practical domain adaptation[J]. Pattern Recognition, 80, 109-117(2018).

    [24] Wang S H, Phillips P, Sui Y X et al. Classification of Alzheimer's disease based on eight-layer convolutional neural network with leaky rectified linear unit and max pooling[J]. Journal of Medical Systems, 42, 85(2018).

    [25] Huang R, Zhang S, Li T Y et al. Beyond face rotation: global and local perception GAN for photorealistic and identity preserving frontal view synthesis. [C]∥2017 IEEE International Conference on Computer Vision (ICCV), October 22-29, 2017, Venice, Italy. New York: IEEE, 2458-2467(2017).

    [26] Shi J G, Liu X, Zong Y et al. Hallucinating face image by regularization models in high-resolution feature space[J]. IEEE Transactions on Image Processing, 27, 2980-2995(2018).

    [27] Toet A. The TNO multiband image data collection[J]. Data in Brief, 15, 249-251(2017).

    [28] Huang F S. Comparison of multiscale transform fusion methods for multiband images[D]. Taiyuan: North University of China(2018).

    [29] Jagalingam P, Hegde A V. A review of quality metrics for fused image[J]. Aquatic Procedia, 4, 133-142(2015).
