Author Affiliations
1. School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, Jiangsu, China
2. Institute of Advanced Technology, Jiangnan University, Wuxi 214122, Jiangsu, China
3. Jiangsu Laboratory of Pattern Recognition and Computational Intelligence, Wuxi 214122, Jiangsu, China
4. School of Electronics and Information Engineering, Suzhou University of Science and Technology, Suzhou 215000, Jiangsu, China
Fig. 1. Overall framework of the fusion network
Fig. 2. Encoder construction. (a) Mutual information encoder; (b) general convolutional encoder
Fig. 3. Hierarchical feature visualisation. (a1) IR; (a2) VIS; (b1) ; (b2) ; (c1) ; (c2) ; (d1) ; (d2) ; (e1) ; (e2)
Fig. 4. HAFF module structure
Fig. 5. Visualisation of HAFF module features and parameters. (a) IR; (b) VIS; (c) ; (d) ; (e) ; (f) ; (g) ; (h) ; (i)
Fig. 6. Experimental comparison results on the TNO dataset. (a) IR; (b) VIS; (c) GTF; (d) GANMcC; (e) GAN-FM; (f) SDNet; (g) Densefuse; (h) DRF; (i) IFSepR; (j) ours
Fig. 7. Experimental comparison results on the RoadScene dataset. (a) IR; (b) VIS; (c) GTF; (d) GANMcC; (e) GAN-FM; (f) SDNet; (g) Densefuse; (h) DRF; (i) IFSepR; (j) ours
Fig. 8. Fusion results of different weight values for a pair of images in the TNO dataset. (a) IR; (b) VIS; (c) w=0; (d) w=0.1; (e) w=0.2; (f) w=0.3; (g) w=0.4; (h) w=0.5; (i) w=0.6; (j) w=0.7; (k) w=0.8; (l) w=0.9; (m) w=1.0; (n) ours
Fig. 9. Fusion results of each method on MRI-CT medical images. (a) MR-T1; (b) CT; (c) GTF; (d) GANMcC; (e) GAN-FM; (f) SDNet; (g) Densefuse; (h) DRF; (i) IFSepR; (j) ours
Method | SD | EN | DF | EI | AG | SF | Qp | MI
---|---|---|---|---|---|---|---|---
GTF | 39.3766 | 6.7700 | 3.5374 | 27.5437 | 2.7666 | 7.0316 | 0.1954 | 13.5402
GANMcC | 30.4876 | 6.2167 | 2.2788 | 20.1246 | 1.9334 | 4.6448 | 0.1363 | 12.4336
GAN-FM | 28.6088 | 6.5363 | 3.5750 | 27.3276 | 2.7304 | 6.9313 | 0.1945 | 13.0727
SDNet | 33.0535 | 6.7042 | 4.7285 | 39.9012 | 3.9359 | 9.3848 | 0.2762 | 13.4086
Densefuse | 35.6407 | 6.8817 | 3.6372 | 30.7780 | 3.0116 | 7.1147 | 0.3039 | 13.7635
DRF | 9.7089 | 5.0372 | 0.7828 | 7.8756 | 0.7220 | 1.6082 | 0.1127 | 10.0745
IFSepR | 26.9453 | 6.4900 | 3.3446 | 28.0275 | 2.7778 | 7.1973 | 0.3295 | 12.9799
Ours | 43.6229 | 7.1650 | 5.7459 | 53.2266 | 5.0586 | 10.7925 | 0.4995 | 14.3300
Table 1. Objective evaluation metrics for each method on 43 pairs of images from the TNO dataset
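Several of the metrics tabulated above have standard closed-form definitions. The sketch below computes three of them, SD (standard deviation), EN (entropy), and AG (average gradient), for a grayscale image using common textbook formulas; the paper's exact implementations, and the definitions of DF, EI, SF, Qp, and MI, may differ in detail.

```python
import numpy as np

def fusion_metrics(img):
    """Compute SD, EN, and AG for a 2-D uint8 grayscale image.

    These are the common textbook definitions of the three metrics;
    they are an illustrative sketch, not the paper's reference code.
    """
    x = img.astype(np.float64)

    # SD: standard deviation of pixel intensities (overall contrast).
    sd = x.std()

    # EN: Shannon entropy of the 256-bin intensity histogram.
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins (0*log0 := 0)
    en = -(p * np.log2(p)).sum()

    # AG: average gradient, mean of sqrt((gx^2 + gy^2) / 2) computed
    # from forward differences, cropped to a common shape.
    gx = np.diff(x, axis=1)[:-1, :]
    gy = np.diff(x, axis=0)[:, :-1]
    ag = np.sqrt((gx ** 2 + gy ** 2) / 2).mean()

    return sd, en, ag
```

A constant image yields zero for all three metrics, while higher values indicate more contrast, information content, and edge detail, which is why larger numbers are better throughout Tables 1–6.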
Method | SD | EN | DF | EI | AG | SF | Qp | MI
---|---|---|---|---|---|---|---|---
GTF | 53.0565 | 7.5013 | 3.9626 | 35.3473 | 3.3552 | 9.4492 | 0.2321 | 15.0027
GANMcC | 38.6292 | 6.8719 | 3.8104 | 35.6541 | 3.3365 | 8.0408 | 0.1863 | 13.7437
GAN-FM | 38.3201 | 7.0326 | 4.8691 | 41.5556 | 3.9687 | 10.4286 | 0.2467 | 14.0652
SDNet | 44.9798 | 7.3160 | 7.1417 | 64.0914 | 6.0926 | 15.1815 | 0.3998 | 14.6320
Densefuse | 42.3739 | 7.1708 | 5.3075 | 46.3451 | 4.4202 | 11.2749 | 0.3938 | 14.3417
DRF | 17.4962 | 5.8409 | 1.3293 | 13.3512 | 1.2238 | 2.7764 | 0.0835 | 11.6818
IFSepR | 33.4337 | 6.8843 | 5.3954 | 44.7533 | 4.4083 | 13.2158 | 0.3513 | 13.7686
Ours | 55.2487 | 7.6632 | 10.8311 | 98.7832 | 9.3087 | 21.3638 | 0.5034 | 15.3265
Table 2. Objective evaluation metrics for each method on 221 pairs of images from the RoadScene dataset
Fusion | SD | EN | DF | EI | AG | SF | Qp | MI
---|---|---|---|---|---|---|---|---
Addition | 39.1200 | 7.0708 | 5.2884 | 48.9269 | 4.6386 | 9.8304 | 0.4166 | 14.1415
Multiplication | 37.8513 | 7.0431 | 5.1052 | 47.2807 | 4.4835 | 9.3957 | 0.3856 | 14.0862
Concatenation | 42.3549 | 7.1600 | 4.8912 | 46.2831 | 4.3370 | 9.2844 | 0.2956 | 14.3200
ASFF | 42.8398 | 7.1284 | 5.5951 | 51.0774 | 5.0215 | 10.0065 | 0.3295 | 14.2568
Ours | 43.6229 | 7.1650 | 5.7459 | 53.2266 | 5.0586 | 10.7925 | 0.4995 | 14.3300
Table 3. Objective evaluation metrics for different fusion methods on 43 pairs of images from the TNO dataset
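The baseline fusion rules compared in Table 3 can be sketched as follows for two feature maps of equal shape. The names follow the table; note that in an actual network these rules act on learned feature tensors, and ASFF learns its spatial weight maps, whereas here they are passed in externally purely for illustration.

```python
import numpy as np

def fuse_add(a, b):
    """Element-wise addition of two feature maps."""
    return a + b

def fuse_mul(a, b):
    """Element-wise (Hadamard) multiplication of two feature maps."""
    return a * b

def fuse_concat(a, b):
    """Channel concatenation: (C, H, W) x 2 -> (2C, H, W)."""
    return np.concatenate([a, b], axis=0)

def fuse_weighted(a, b, wa, wb):
    """Spatially weighted sum, as in ASFF-style fusion.

    In ASFF the weight maps wa, wb are predicted by the network;
    here they are supplied by the caller for illustration.
    """
    return wa * a + wb * b
```

Addition and multiplication keep the channel count fixed, while concatenation doubles it and therefore requires a following convolution to restore the channel dimension.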
Weight | SD | EN | DF | EI | AG | SF | Qp | MI
---|---|---|---|---|---|---|---|---
 | 44.7291 | 7.2098 | 5.2815 | 50.0910 | 4.6891 | 10.0734 | 0.3338 | 14.3197
 | 41.6797 | 7.1281 | 4.7107 | 44.7071 | 4.1901 | 9.0897 | 0.2809 | 14.2563
 | 36.9613 | 7.0059 | 4.7370 | 43.4540 | 4.1182 | 8.9765 | 0.2711 | 14.0118
 | 37.3071 | 7.0519 | 4.6598 | 43.6528 | 4.0920 | 8.8187 | 0.2121 | 14.1038
 | 35.3388 | 6.9910 | 4.7573 | 43.8456 | 4.1299 | 8.9738 | 0.2201 | 13.9821
 | 34.6621 | 6.9800 | 4.9332 | 44.9583 | 4.2583 | 9.2221 | 0.2058 | 13.9599
 | 31.2020 | 6.8659 | 4.9018 | 43.3784 | 4.1468 | 9.0474 | 0.1835 | 13.7318
 | 33.3493 | 6.9473 | 5.0526 | 45.3067 | 4.3051 | 9.3688 | 0.2018 | 13.8947
 | 35.1397 | 7.0141 | 4.8359 | 43.7822 | 4.1602 | 8.9426 | 0.1962 | 14.0282
 | 31.4273 | 6.8663 | 5.1649 | 45.8445 | 4.3726 | 9.5681 | 0.1898 | 13.7325
 | 34.2250 | 6.9726 | 4.9425 | 43.7664 | 4.1684 | 9.1598 | 0.1991 | 13.9451
Ours | 43.6229 | 7.1650 | 5.7459 | 53.2266 | 5.0586 | 10.7925 | 0.4995 | 14.3300
Table 4. Objective evaluation metrics for different weighting values on 43 pairs of images from the TNO dataset
Balance parameter | SD | EN | DF | EI | AG | SF | Qp | MI
---|---|---|---|---|---|---|---|---
 | 39.8836 | 6.8782 | 4.9477 | 50.3209 | 4.1579 | 9.1012 | 0.3946 | 13.7563
 | 40.9696 | 6.9291 | 5.0301 | 52.4783 | 4.2433 | 9.3433 | 0.3980 | 13.8582
 | 42.6061 | 6.9938 | 4.9322 | 51.9882 | 4.1768 | 10.1270 | 0.4042 | 13.9877
 | 41.2148 | 7.0141 | 5.0890 | 52.2599 | 4.3055 | 9.4051 | 0.4031 | 14.0281
 | 41.8657 | 7.0044 | 5.1368 | 53.0034 | 4.3732 | 9.4556 | 0.4057 | 14.0088
 | 42.7849 | 6.9210 | 5.0168 | 50.3367 | 4.2424 | 10.2695 | 0.3902 | 13.8420
 | 43.6229 | 7.1650 | 5.7459 | 53.2266 | 5.0586 | 10.7925 | 0.4995 | 14.3300
 | 42.6738 | 6.9603 | 5.0696 | 49.1183 | 4.2991 | 10.3194 | 0.4150 | 13.9206
 | 40.5754 | 6.9986 | 5.4492 | 47.2277 | 4.5437 | 9.9541 | 0.4081 | 13.9972
 | 39.1473 | 7.0221 | 4.9707 | 49.8322 | 4.2551 | 9.2919 | 0.4082 | 14.0441
Table 5. Objective evaluation metrics for different balance parameter values on 43 pairs of images from the TNO dataset
Method | SD | EN | DF | EI | AG | SF | Qp | MI
---|---|---|---|---|---|---|---|---
GTF | 60.6857 | 4.4535 | 6.6577 | 57.8738 | 5.6328 | 21.7096 | 0.0264 | 8.9071
GANMcC | 48.0953 | 4.5493 | 3.9745 | 37.3691 | 3.5264 | 11.1650 | 0.0095 | 9.0986
GAN-FM | 58.4541 | 5.8833 | 5.0845 | 43.8918 | 4.2130 | 18.6376 | 0.0186 | 11.7667
SDNet | 62.4863 | 5.1234 | 7.9727 | 70.8844 | 6.8823 | 23.5017 | 0.0308 | 10.2468
Densefuse | 63.0141 | 4.4407 | 5.4302 | 47.4444 | 4.5968 | 17.6804 | 0.0295 | 8.8815
DRF | 23.5208 | 4.4353 | 0.9759 | 10.0041 | 0.9146 | 2.6619 | 0.0213 | 8.8707
IFSepR | 54.7893 | 5.2338 | 8.7076 | 89.1554 | 7.0692 | 40.3196 | 0.0113 | 10.4677
Ours | 63.3236 | 6.6326 | 8.7420 | 74.8356 | 7.3261 | 23.1368 | 0.0277 | 13.2653
Table 6. Objective evaluation metrics for each method on 15 pairs of MRI-CT medical images
Parameter | GANMcC | GAN-FM | SDNet | DenseFuse | DRF | IFSepR | Ours
---|---|---|---|---|---|---|---
Number of parameters /10⁶ | 8.680 | 59.014 | 0.256 | 4.245 | 183.636 | 1.960 | 8.893
FPS /(frame·s⁻¹) | 1.210 | 14.285 | 26.316 | 38.610 | 1.112 | 0.806 | 217
Table 7. Efficiency comparison of the deep learning-based models
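Efficiency figures like those in Table 7 are typically obtained with a small measurement harness such as the sketch below. The `infer` callable is a hypothetical stand-in for one forward pass of a fusion network; with PyTorch, the parameter count in the table's units would additionally be computed as `sum(p.numel() for p in model.parameters()) / 1e6`.

```python
import time

def measure_fps(infer, n_runs=50, n_warmup=5):
    """Estimate frames per second of a single-image inference callable.

    `infer` is any zero-argument callable standing in for one forward
    pass (a hypothetical placeholder; a real benchmark would wrap the
    actual model). Warm-up runs are excluded from the timed window so
    that one-time setup costs do not bias the estimate.
    """
    for _ in range(n_warmup):
        infer()                         # warm-up, not timed
    t0 = time.perf_counter()
    for _ in range(n_runs):
        infer()                         # timed inference runs
    elapsed = time.perf_counter() - t0
    return n_runs / elapsed             # frames per second
```

For GPU models, each timed call would also need to synchronize the device (e.g. `torch.cuda.synchronize()`) so that asynchronous kernel launches are fully included in the measured window.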