• Laser & Optoelectronics Progress
  • Vol. 56, Issue 16, 161004 (2019)
Xiaoli Yang1, Suzhen Lin1,*, Xiaofei Lu2, Lifang Wang1, Dawei Li1, and Bin Wang1
Author Affiliations
  • 1 School of Big Data, North University of China, Taiyuan, Shanxi 030051, China
  • 2 Jiuquan Satellite Launch Center, Jiuquan, Gansu 735000, China
    DOI: 10.3788/LOP56.161004
    Xiaoli Yang, Suzhen Lin, Xiaofei Lu, Lifang Wang, Dawei Li, Bin Wang. Multimodal Image Fusion Based on Generative Adversarial Networks[J]. Laser & Optoelectronics Progress, 2019, 56(16): 161004.
    Fig. 1. Structure of residual block
    Fig. 2. Framework of method
    Fig. 3. Network structure of generative model
    Fig. 4. Network structure of discriminative model
    Fig. 5. Pre-selection maps of label images. (a) Longwave infrared; (b) shortwave infrared; (c) visible light; (d) LP; (e) DWT; (f) NSCT; (g) NSST
    Fig. 6. Effect of learning rate on generator loss
    Fig. 7. Effect of learning rate on discriminator loss
    Fig. 8. Effect of different λ on image quality. (a) λ=0; (b) λ=0.01; (c) λ=0.1; (d) λ=1
    Fig. 9. Effect of different λ on generator loss
    Fig. 10. Effect of λ on objective evaluation index of fused image. (a) The first set of fused images; (b) the second set of fused images; (c) the third set of fused images
    Fig. 11. Image fusion results. (a) Longwave infrared; (b) shortwave infrared; (c) visible light; (d) DTCWT_SR; (e) NSST_NSCT; (f) CNN; (g) CSR; (h) proposed method
    Layer         | Filter size / step | Output size
    Conv1         | 3×3 / 1            | 128×128×64
    Res (7 units) | 3×3 / 1            | 128×128×64
                  | 3×3 / 1            | 128×128×64
    Conv9         | 3×3 / 1            | 128×128×256
    Conv10        | 3×3 / 1            | 128×128×1
    Table 1. Parameters of generator
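    Table 1 gives only layer shapes, so the exact generator implementation is not shown on this page. Below is a minimal PyTorch sketch consistent with Table 1 and Fig. 1; the seven residual units with two 3×3 convolutions each follow the table, while the ReLU activations, identity skip connections, Tanh output, and input depth are illustrative assumptions not stated here.

```python
# Minimal sketch of the generator per Table 1 (activations, skips, and
# input depth are assumptions; the paper's implementation may differ).
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    """One residual unit: two 3x3, stride-1 convolutions at 64 channels (Fig. 1 / Table 1)."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, stride=1, padding=1),
            nn.ReLU(inplace=True),                      # assumed activation
            nn.Conv2d(channels, channels, 3, stride=1, padding=1),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))             # assumed identity skip

class Generator(nn.Module):
    """Conv1 -> 7 residual units -> Conv9 -> Conv10, sizes per Table 1."""
    def __init__(self, in_channels: int = 3):           # input depth is illustrative
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 64, 3, 1, 1)   # 128x128x64
        self.res = nn.Sequential(*[ResidualUnit(64) for _ in range(7)])
        self.conv9 = nn.Conv2d(64, 256, 3, 1, 1)           # 128x128x256
        self.conv10 = nn.Conv2d(256, 1, 3, 1, 1)           # 128x128x1 fused image

    def forward(self, x):
        x = torch.relu(self.conv1(x))
        x = self.res(x)
        x = torch.relu(self.conv9(x))
        return torch.tanh(self.conv10(x))                  # assumed output activation

# Usage: g = Generator(in_channels=3); fused = g(torch.rand(1, 3, 128, 128))
```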
    Layer  | Filter size / step | Output size
    Conv1  | 3×3 / 1            | 128×128×64
    Conv2  | 3×3 / 2            | 64×64×128
    Conv3  | 3×3 / 2            | 32×32×256
    Conv4  | 3×3 / 2            | 16×16×512
    Conv5  | 3×3 / 1            | 16×16×256
    Conv6  | 1×1 / 1            | 16×16×128
    Res    | 1×1 / 1            | 16×16×64
           | 3×3 / 1            | 16×16×64
           | 3×3 / 1            | 16×16×128
    Fc     | -                  | 1
    Table 2. Parameters of discriminator
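    A matching sketch of the discriminator in Table 2 follows. The layer shapes come from the table; the LeakyReLU activations, the identity skip over the 1×1-3×3-3×3 residual block (whose input and output are both 16×16×128), the flattening before the single-output fully connected layer, and the input depth are assumptions for illustration only.

```python
# Minimal sketch of the discriminator per Table 2 (activations, skip, and
# input depth are assumptions; the paper's implementation may differ).
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """Res rows of Table 2: 1x1 -> 64, 3x3 -> 64, 3x3 -> 128, all stride 1."""
    def __init__(self, channels: int = 128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, 1, 1, 0),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 64, 3, 1, 1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, channels, 3, 1, 1),
        )

    def forward(self, x):
        return x + self.body(x)                          # assumed identity skip

class Discriminator(nn.Module):
    def __init__(self, in_channels: int = 1):            # single-channel fused/label image assumed
        super().__init__()
        act = nn.LeakyReLU(0.2, inplace=True)
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, 1, 1), act,    # 128x128x64
            nn.Conv2d(64, 128, 3, 2, 1), act,            # 64x64x128
            nn.Conv2d(128, 256, 3, 2, 1), act,           # 32x32x256
            nn.Conv2d(256, 512, 3, 2, 1), act,           # 16x16x512
            nn.Conv2d(512, 256, 3, 1, 1), act,           # 16x16x256
            nn.Conv2d(256, 128, 1, 1, 0), act,           # 16x16x128
            Bottleneck(128),                             # 16x16x128
        )
        self.fc = nn.Linear(128 * 16 * 16, 1)            # Fc -> scalar score

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

# Usage: d = Discriminator(); score = d(torch.rand(1, 1, 128, 128))
```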
    Fusion method | SD     | AG    | Con    | CC    | IE    | MI    | VIFF
    LP            | 49.019 | 4.211 | 40.812 | 0.424 | 7.223 | 5.608 | 0.466
    DWT           | 44.772 | 3.826 | 36.650 | 0.342 | 7.175 | 5.415 | 0.442
    NSCT          | 41.123 | 4.061 | 34.816 | 0.437 | 6.953 | 5.222 | 0.469
    NSST          | 40.904 | 4.149 | 34.902 | 0.441 | 6.932 | 5.201 | 0.467
    Table 3. Label image selection table
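    The evaluation indices in Tables 3 and 4 appear only as abbreviations on this page. Assuming they follow common image-fusion usage (SD = standard deviation, AG = average gradient, IE = information entropy; the paper's exact definitions may differ), a minimal NumPy sketch of three of them is given below.

```python
# Sketch of three common fusion-quality indices (assumed definitions,
# not necessarily the exact formulas used in the paper).
import numpy as np

def sd(img: np.ndarray) -> float:
    """Standard deviation of gray levels."""
    return float(img.std())

def ag(img: np.ndarray) -> float:
    """Average gradient: mean RMS of horizontal/vertical differences."""
    f = img.astype(np.float64)
    gx = np.diff(f, axis=1)[:-1, :]
    gy = np.diff(f, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def ie(img: np.ndarray, levels: int = 256) -> float:
    """Information entropy of the gray-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Usage: report sd(f), ag(f), ie(f) for a fused image f with gray levels 0-255.
```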
    Image | Fusion method   | SD     | AG     | Con    | CC    | IE    | MI    | VIFF
    No. 1 | DTCWT_SR        | 35.471 | 3.150  | 23.916 | 0.409 | 6.968 | 4.869 | 0.505
          | NSCT_NSST       | 19.621 | 3.163  | 11.911 | 0.406 | 6.147 | 2.853 | 0.522
          | CNN             | 34.928 | 2.885  | 23.254 | 0.408 | 6.961 | 4.610 | 0.363
          | CSR             | 12.081 | 1.953  | 7.570  | 0.418 | 5.534 | 2.663 | 0.359
          | Proposed method | 38.011 | 8.129  | 27.793 | 0.431 | 6.977 | 2.552 | 0.301
    No. 2 | DTCWT_SR        | 26.274 | 7.245  | 25.927 | 0.143 | 5.984 | 1.492 | 0.367
          | NSCT_NSST       | 23.268 | 7.270  | 14.815 | 0.306 | 6.026 | 1.542 | 0.376
          | CNN             | 29.696 | 4.985  | 22.754 | 0.016 | 2.373 | 1.312 | 0.211
          | CSR             | 25.271 | 5.543  | 15.989 | 0.322 | 6.032 | 2.540 | 0.322
          | Proposed method | 25.527 | 4.374  | 13.731 | 0.381 | 6.057 | 2.567 | 0.461
    No. 3 | DTCWT_SR        | 21.854 | 5.187  | 15.038 | 0.427 | 6.475 | 1.256 | 0.415
          | NSCT_NSST       | 22.692 | 5.370  | 15.716 | 0.471 | 6.806 | 1.542 | 0.419
          | CNN             | 38.590 | 4.961  | 25.609 | 0.441 | 6.910 | 2.799 | 0.392
          | CSR             | 23.054 | 3.868  | 15.726 | 0.499 | 6.450 | 2.391 | 0.382
          | Proposed method | 41.089 | 4.310  | 30.929 | 0.593 | 6.938 | 3.093 | 0.420
    No. 4 | DTCWT_SR        | 47.492 | 3.545  | 23.176 | 0.422 | 6.315 | 2.773 | 0.298
          | NSCT_NSST       | 35.373 | 3.605  | 15.136 | 0.445 | 6.831 | 2.531 | 0.317
          | CNN             | 54.597 | 3.163  | 29.778 | 0.436 | 6.283 | 2.862 | 0.351
          | CSR             | 38.587 | 2.217  | 17.656 | 0.452 | 6.796 | 3.331 | 0.375
          | Proposed method | 32.400 | 8.452  | 22.767 | 0.457 | 6.938 | 2.055 | 0.384
    No. 5 | DTCWT_SR        | 40.006 | 5.058  | 32.554 | 0.025 | 7.308 | 2.976 | 0.407
          | NSCT_NSST       | 24.886 | 5.147  | 18.059 | 0.489 | 6.615 | 1.812 | 0.428
          | CNN             | 40.559 | 3.951  | 34.331 | 0.271 | 7.316 | 2.133 | 0.346
          | CSR             | 26.239 | 2.526  | 20.707 | 0.515 | 6.642 | 3.007 | 0.277
          | Proposed method | 40.677 | 5.974  | 33.890 | 0.569 | 7.321 | 2.106 | 0.473
    No. 6 | DTCWT_SR        | 54.458 | 3.038  | 44.020 | 0.042 | 7.744 | 2.986 | 0.524
          | NSCT_NSST       | 25.990 | 3.047  | 18.728 | 0.455 | 6.686 | 2.254 | 0.552
          | CNN             | 45.549 | 2.538  | 35.269 | 0.193 | 7.333 | 1.839 | 0.208
          | CSR             | 28.803 | 1.523  | 21.125 | 0.429 | 6.691 | 2.858 | 0.160
          | Proposed method | 46.425 | 5.643  | 36.462 | 0.476 | 7.453 | 2.993 | 0.648
    No. 7 | DTCWT_SR        | 54.848 | 3.555  | 36.984 | 0.044 | 7.420 | 3.485 | 0.484
          | NSCT_NSST       | 27.527 | 3.550  | 21.490 | 0.460 | 6.813 | 1.807 | 0.504
          | CNN             | 45.880 | 2.945  | 37.129 | 0.334 | 7.104 | 2.627 | 0.440
          | CSR             | 27.853 | 1.834  | 21.168 | 0.472 | 6.701 | 2.968 | 0.326
          | Proposed method | 47.024 | 4.259  | 37.523 | 0.500 | 7.487 | 2.461 | 0.606
    No. 8 | DTCWT_SR        | 55.890 | 2.940  | 47.062 | 0.771 | 4.357 | 3.612 | 0.169
          | NSCT_NSST       | 52.610 | 4.872  | 47.647 | 0.868 | 4.832 | 3.906 | 0.380
          | CNN             | 87.236 | 6.208  | 73.907 | 0.721 | 5.688 | 3.933 | 0.294
          | CSR             | 54.208 | 4.753  | 47.214 | 0.794 | 4.245 | 3.383 | 0.285
          | Proposed method | 92.152 | 15.279 | 76.870 | 0.943 | 4.333 | 4.947 | 0.505
    Table 4. Comparison of evaluation index of fusion results