Laser & Optoelectronics Progress, Vol. 59, Issue 6, 0617029 (2022)
Wanxin Xiao1,2, Huafeng Li1,2, Yafei Zhang1,2,*, Minghong Xie1, and Fan Li1,2
Author Affiliations
  • 1Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan 650500, China
  • 2Yunnan Key Laboratory of Artificial Intelligence, Kunming, Yunnan 650500, China
    DOI: 10.3788/LOP202259.0617029
    Wanxin Xiao, Huafeng Li, Yafei Zhang, Minghong Xie, Fan Li. Medical Image Fusion Based on Multi-Scale Feature Learning and Edge Enhancement[J]. Laser & Optoelectronics Progress, 2022, 59(6): 0617029.
    References

    [1] Yin M, Liu X N, Liu Y et al. Medical image fusion with parameter-adaptive pulse coupled neural network in nonsubsampled shearlet transform domain[J]. IEEE Transactions on Instrumentation and Measurement, 68, 49-64(2019).

    [2] Wang J X, Chen S, Xie M H. Multi-source image fusion based on low-rank decomposition and convolutional sparse coding[J]. Laser & Optoelectronics Progress, 58, 2210009(2021).

    [3] Li W, Li Z M. NSST-based perception fusion method for infrared and visible images[J]. Laser & Optoelectronics Progress, 58, 2010014(2021).

    [4] Yang Y. Multimodal medical image fusion through a new DWT based technique[C], 11495757(2010).

    [5] Du J, Li W S, Lu K et al. An overview of multi-modal medical image fusion[J]. Neurocomputing, 215, 3-20(2016).

    [6] Zhao H, Zhang J X, Zhang Z G. PCNN medical image fusion based on NSCT and DWT[J]. Laser & Optoelectronics Progress, 58, 2017002(2021).

    [7] Zhu Z Q, Zheng M Y, Qi G Q et al. A phase congruency and local Laplacian energy based multi-modality medical image fusion method in NSCT domain[J]. IEEE Access, 7, 20811-20824(2019).

    [8] Li H F, Liu X K, Yu Z T et al. Performance improvement scheme of multifocus image fusion derived by difference images[J]. Signal Processing, 128, 474-493(2016).

    [9] Li H F, Wang Y T, Yang Z et al. Discriminative dictionary learning-based multiple component decomposition for detail-preserving noisy image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 69, 1082-1102(2020).

    [10] Zhu Z Q, Yin H P, Chai Y et al. A novel multi-modality image fusion method based on image decomposition and sparse representation[J]. Information Sciences, 432, 516-529(2018).

    [11] Liu Y, Chen X, Wang Z F et al. Deep learning for pixel-level image fusion: recent advances and future prospects[J]. Information Fusion, 42, 158-173(2018).

    [12] Ma J Y, Yu W, Liang P W et al. FusionGAN: a generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 48, 11-26(2019).

    [13] Jung H, Kim Y, Jang H et al. Unsupervised deep image fusion with structure tensor representations[J]. IEEE Transactions on Image Processing, 29, 3845-3858(2020).

    [14] Yuan X C, Pun C M, Chen C L P. Robust Mel-frequency cepstral coefficients feature detection and dual-tree complex wavelet transform for digital audio watermarking[J]. Information Sciences, 298, 159-179(2015).

    [15] da Cunha A L, Zhou J, Do M N. The nonsubsampled contourlet transform: theory, design, and applications[J]. IEEE Transactions on Image Processing, 15, 3089-3101(2006).

    [16] Liu Y, Liu S P, Wang Z F. A general framework for image fusion based on multi-scale transform and sparse representation[J]. Information Fusion, 24, 147-164(2015).

    [17] Zhang Y F, Yang M Y, Li N et al. Analysis-synthesis dictionary pair learning and patch saliency measure for image fusion[J]. Signal Processing, 167, 107327(2020).

    [18] Liu Y, Chen X, Peng H et al. Multi-focus image fusion with a deep convolutional neural network[J]. Information Fusion, 36, 191-207(2017).

    [19] Zhong J Y, Yang B, Li Y H et al. Image fusion and super-resolution with convolutional neural network[M]//Tan T, Li X L, Chen X L et al. Pattern Recognition, 663, 78-88(2016).

    [20] Liu Y, Chen X, Ward R K et al. Image fusion with convolutional sparse representation[J]. IEEE Signal Processing Letters, 23, 1882-1886(2016).

    [21] Masi G, Cozzolino D, Verdoliva L et al. Pansharpening by convolutional neural networks[J]. Remote Sensing, 8, 594(2016).

    [22] Lahoud F, Süsstrunk S. Fast and efficient zero-learning image fusion[EB/OL]. https://arxiv.org/abs/1905.03590

    [23] Zhang Y, Liu Y, Sun P et al. IFCNN: a general image fusion framework based on convolutional neural network[J]. Information Fusion, 54, 99-118(2020).

    [24] Jung H, Kim Y, Jang H et al. Unsupervised deep image fusion with structure tensor representations[J]. IEEE Transactions on Image Processing, 29, 3845-3858(2020).

    [25] Hou R C, Zhou D M, Nie R C et al. Brain CT and MRI medical image fusion using convolutional neural networks and a dual-channel spiking cortical model[J]. Medical & Biological Engineering & Computing, 57, 887-900(2019).

    [26] Fan F D, Huang Y Y, Wang L et al. A semantic-based medical image fusion approach[EB/OL]. https://arxiv.org/abs/1906.00225

    [27] Liang X C, Hu P Y, Zhang L G et al. MCFNet: multi-layer concatenation fusion network for medical images fusion[J]. IEEE Sensors Journal, 19, 7107-7119(2019).

    [28] Zhang B, Jiang C, Hu Y X et al. Medical image fusion based a densely connected convolutional networks[C], 2164-2170(2021).

    [29] Haghighat M, Razian M A. Fast-FMI: non-reference image fusion metric[C], 14916890(2014).

    [30] Aslantas V, Bendes E. A new image quality metric for image fusion: the sum of the correlations of differences[J]. AEU-International Journal of Electronics and Communications, 69, 1890-1896(2015).

    [31] Wang Z, Bovik A C, Sheikh H R et al. Image quality assessment: from error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 13, 600-612(2004).

    [32] Yang C, Zhang J Q, Wang X R et al. A novel similarity based quality metric for image fusion[J]. Information Fusion, 9, 156-160(2008).

    [33] Li S T, Kang X D, Hu J W. Image fusion with guided filtering[J]. IEEE Transactions on Image Processing, 22, 2864-2875(2013).

    [34] Jian L H, Yang X M, Liu Z et al. SEDRFuse: a symmetric encoder-decoder with residual block network for infrared and visible image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 70, 1-15(2021).

    [35] Prabhakar K R, Srikar V S, Babu R V. DeepFuse: a deep unsupervised approach for exposure fusion with extreme exposure image pairs[C], 4724-4732(2017).

    [36] Liu H X, Zhu T H, Zhao J J. Infrared and visible image fusion based on region of interest detection and nonsubsampled contourlet transform[J]. Journal of Shanghai Jiaotong University (Science), 18, 526-534(2013).

    [37] Yang B, Yang C, Huang G Y. Efficient image fusion with approximate sparse representation[J]. International Journal of Wavelets, Multiresolution and Information Processing, 14, 1650024(2016).
