Fig. 1. Architecture of the proposed EAGAN. CA Block: channel attention block; SA Block: spatial attention block; BN: batch normalization; FC: fully connected layer; Conv: kernel size (k), number of feature maps (n), and stride (s) are indicated for each convolutional layer.
Fig. 2. Architecture of the attention block. GAP: global average pooling; GMP: global max pooling; r: scaling factor; Conv: kernel size (k), number of feature maps (n), and stride (s) are indicated for each convolutional layer.
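The CA/SA computations named in Figs. 1 and 2 can be sketched as follows. This is a minimal NumPy illustration assuming a CBAM-style design (GAP and GMP feeding a shared bottleneck with reduction ratio r for channel attention; channel-wise pooling for spatial attention); the weights are random stand-ins for what the real network learns, and the spatial branch replaces the paper's convolution with a fixed average for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, r=8, seed=0):
    """Channel attention: GAP and GMP over the spatial dims, a shared
    FC -> ReLU -> FC bottleneck (reduction ratio r), summed and squashed
    into per-channel weights."""
    c, h, w = feat.shape
    rng = np.random.default_rng(seed)
    gap = feat.mean(axis=(1, 2))               # (c,) global average pooling
    gmp = feat.max(axis=(1, 2))                # (c,) global max pooling
    # Shared FC weights (random here; learned in the real network).
    w1 = rng.standard_normal((c // r, c)) * 0.1
    w2 = rng.standard_normal((c, c // r)) * 0.1
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)
    weights = sigmoid(mlp(gap) + mlp(gmp))     # (c,) channel weights in (0, 1)
    return feat * weights[:, None, None]

def spatial_attention(feat):
    """Spatial attention: channel-wise average and max maps combined into a
    single-channel weight map (a k x k conv in the real network; a fixed
    average here as a placeholder)."""
    avg = feat.mean(axis=0)                    # (h, w)
    mx = feat.max(axis=0)                      # (h, w)
    weight_map = sigmoid(0.5 * (avg + mx))     # stand-in for the learned conv
    return feat * weight_map[None, :, :]

feat = np.random.default_rng(1).standard_normal((16, 8, 8))
out = spatial_attention(channel_attention(feat, r=8))
print(out.shape)  # (16, 8, 8): attention reweights features, shape unchanged
```

Both branches only rescale the input feature maps, so the block can be dropped into the generator without changing tensor shapes.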
Fig. 3. Qualitative comparison of different algorithms on four typical infrared and visible image pairs. From top to bottom: visible image, infrared image, and the fusion results of ASR, GFF, GTF, DenseFuse, FusionGAN, RCGAN, and our algorithm.
Fig. 4. Qualitative comparison of different algorithms on five typical infrared and visible image pairs from the TNO dataset. From left to right: Duine sequence, Nato_camp_sequence, Kaptein_1123, men in front of house, and soldier_behind_smoke_3. From top to bottom: visible image, infrared image, and the fusion results of ASR, GFF, GTF, DenseFuse, FusionGAN, RCGAN, and our algorithm.
Fig. 5. Qualitative comparison of different algorithms on four typical infrared and visible image pairs from the INO dataset. From left to right: ParkingSnow, GroupFight, MultipleDeposit, and ClosePerson. From top to bottom: visible image, infrared image, and the fusion results of ASR, GFF, GTF, DenseFuse, FusionGAN, RCGAN, and our algorithm.
Fig. 6. Attention weight maps: (a) the infrared image; (b) the visible image; (c) the fused result of our proposed EAGAN; (d) the output of the third attention block; (e) the channel attention weight map; (f) the spatial attention weight map.
Fig. 7. Effect of the attention mechanism on fusion results: (a) fusion result of the network without the attention mechanism; (b) fusion result of our algorithm.
Fig. 8. Fusion results when the loss function of the generator changes: (a); (b); (c); (d); (e); (f); (g) result of EAGAN.
| Metric | ASR | GFF | GTF | DenseFuse | FusionGAN | RCGAN | Ours |
|---|---|---|---|---|---|---|---|
| EN | 6.92 | 7.26 | 7.26 | 7.24 | 6.84 | 7.14 | 7.30 |
| SCD | 1.27 | 1.23 | 0.98 | 1.71 | 0.86 | 1.19 | 1.54 |
| SF | 13.11 | 13.60 | 9.12 | 11.97 | 9.35 | 9.50 | 15.62 |
| EI | 0.22 | 0.22 | 0.16 | 0.20 | 0.16 | 0.18 | 0.27 |
Table 1. Quantitative comparison of different algorithms on the RoadScene dataset
| Metric | ASR | GFF | GTF | DenseFuse | FusionGAN | RCGAN | Ours |
|---|---|---|---|---|---|---|---|
| EN | 6.44 | 6.84 | 6.93 | 6.87 | 6.35 | 6.77 | 7.08 |
| SCD | 1.61 | 1.36 | 0.97 | 1.79 | 1.30 | 1.41 | 1.67 |
| SF | 8.93 | 9.55 | 8.31 | 8.54 | 6.59 | 7.41 | 11.59 |
| EI | 0.13 | 0.14 | 0.13 | 0.14 | 0.11 | 0.13 | 0.19 |
Table 2. Quantitative comparison of different algorithms on the TNO dataset
| Metric | ASR | GFF | GTF | DenseFuse | FusionGAN | RCGAN | Ours |
|---|---|---|---|---|---|---|---|
| EN | 6.94 | 7.14 | 7.02 | 7.09 | 6.62 | 6.97 | 7.23 |
| SCD | 1.40 | 1.29 | 1.03 | 1.69 | 1.02 | 1.18 | 1.53 |
| SF | 16.80 | 17.33 | 14.72 | 14.34 | 12.71 | 13.12 | 19.40 |
| EI | 0.25 | 0.26 | 0.21 | 0.22 | 0.19 | 0.21 | 0.30 |
Table 3. Quantitative comparison of different algorithms on the INO dataset
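Two of the metrics reported in Tables 1-3 have simple closed forms: EN is the Shannon entropy of the grayscale histogram, and SF combines row- and column-gradient energy. A minimal NumPy sketch under those standard definitions (SCD and EI are omitted; if the paper uses variant definitions, the values here would differ):

```python
import numpy as np

def entropy(img):
    """EN: Shannon entropy of the 8-bit grayscale histogram, in bits."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                     # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

def spatial_frequency(img):
    """SF: sqrt(RF^2 + CF^2) from horizontal and vertical differences."""
    f = img.astype(np.float64)
    rf = np.sqrt(np.mean((f[:, 1:] - f[:, :-1]) ** 2))  # row frequency
    cf = np.sqrt(np.mean((f[1:, :] - f[:-1, :]) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))

img = np.random.default_rng(0).integers(0, 256, size=(64, 64))
print(entropy(img), spatial_frequency(img))
```

Higher EN indicates richer intensity content in the fused image, and higher SF indicates stronger gradients (more preserved detail), which is how the tables should be read.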
| Dataset | Method | EN | SCD | SF | EI |
|---|---|---|---|---|---|
| RoadScene | Without attention mechanism | 7.26 | 1.52 | 16.02 | 0.27 |
| RoadScene | Ours | 7.30 | 1.54 | 15.62 | 0.27 |
| TNO | Without attention mechanism | 6.93 | 1.63 | 11.62 | 0.19 |
| TNO | Ours | 7.08 | 1.67 | 11.59 | 0.19 |
Table 4. Effect of the attention mechanism on fusion results