• Spectroscopy and Spectral Analysis
  • Vol. 43, Issue 2, 590 (2023)
FENG Xin1,2, FANG Chao1, GONG Hai-feng2, LOU Xi-cheng1, and PENG Ye1
Author Affiliations
  • 1[in Chinese]
  • 2[in Chinese]
    DOI: 10.3964/j.issn.1000-0593(2023)02-0590-07
    FENG Xin, FANG Chao, GONG Hai-feng, LOU Xi-cheng, PENG Ye. Infrared and Visible Image Fusion Based on Two-Scale Decomposition and Saliency Extraction[J]. Spectroscopy and Spectral Analysis, 2023, 43(2): 590

    Abstract

    To improve the visual quality of infrared and visible image fusion and to overcome the detail loss, weak target saliency, and low contrast that affect existing fusion results, a novel infrared and visible image fusion method based on two-scale decomposition and saliency extraction is proposed. First, following the theory of human visual perception, each source image is decomposed into separate levels so that high-frequency and low-frequency components are not mixed, which reduces halo artifacts. Specifically, a two-scale decomposition is applied to the source infrared and visible images to obtain a base layer and a detail layer for each; this representation describes the images well and has good real-time performance. Then, a weighted-average fusion rule based on a visual saliency map (VSM) is proposed for fusing the base layers. The VSM extracts the salient structures and targets in the source images, so this rule avoids the contrast loss caused by a plain weighted-average strategy and yields a better fused image. For the detail layers, the Kirsch operator is applied to each source image to obtain its saliency map, and the VGG-19 network then extracts features from these saliency maps to produce weight maps, which are combined with the detail layers to obtain the fused detail layer. The Kirsch operator quickly extracts image edges in eight directions, so the saliency maps contain more edge information and less noise, while the VGG-19 network extracts deeper feature information, so the resulting weight maps carry more useful information. Finally, the fused base layer and fused detail layer are superimposed to obtain the final fusion result. In the experiments, four sets of typical infrared and visible images are selected for testing, and the proposed method is compared with six current mainstream methods. The results show that, in terms of subjective quality, the proposed method offers high contrast, prominent targets, rich detail information, and better retention of image edge features. Objective metrics such as information entropy, mutual information, standard deviation, the multiscale structural similarity measure, and the sum of correlations of differences also show relatively good results.
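
    The pipeline summarized above lends itself to a compact implementation. The following Python/OpenCV sketch illustrates the general idea under stated assumptions: a 31x31 mean filter stands in for the two-scale decomposition, a histogram-contrast map is used as the visual saliency map for the base-layer weights, and the Kirsch saliency maps are used directly as detail-layer weights in place of the paper's VGG-19 feature-derived weight maps. File names and parameters are hypothetical; this is a minimal sketch, not the authors' code.

```python
import numpy as np
import cv2


def two_scale_decompose(img, ksize=31):
    """Split an image into a base layer (mean filter) and a detail layer."""
    img = img.astype(np.float32)
    base = cv2.blur(img, (ksize, ksize))      # assumed 31x31 mean filter
    return base, img - base


def visual_saliency_map(img):
    """Histogram-contrast saliency: each pixel's mean intensity distance to
    all other pixels, computed from the grey-level histogram."""
    img = img.astype(np.uint8)
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    levels = np.arange(256, dtype=np.float64)
    sal_per_level = np.abs(levels[:, None] - levels[None, :]) @ hist
    sal = sal_per_level[img]
    return sal / (sal.max() + 1e-12)


def kirsch_saliency(img):
    """Maximum absolute response over the eight directional Kirsch kernels."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    base_ring = [5, 5, 5, -3, -3, -3, -3, -3]
    img = img.astype(np.float32)
    responses = []
    for shift in range(8):                    # rotate the outer ring 8 times
        kernel = np.zeros((3, 3), dtype=np.float32)
        values = base_ring[-shift:] + base_ring[:-shift] if shift else base_ring
        for (r, c), v in zip(ring, values):
            kernel[r, c] = v
        responses.append(np.abs(cv2.filter2D(img, -1, kernel)))
    return np.max(responses, axis=0)


def fuse_base_layers(base_ir, base_vis, img_ir, img_vis):
    """VSM-weighted average of the two base layers."""
    s_ir, s_vis = visual_saliency_map(img_ir), visual_saliency_map(img_vis)
    w = 0.5 + 0.5 * (s_ir - s_vis)            # weight stays within [0, 1]
    return w * base_ir + (1.0 - w) * base_vis


def fuse_detail_layers(det_ir, det_vis, img_ir, img_vis):
    """Kirsch-saliency-weighted detail fusion (stand-in for the paper's
    VGG-19 feature-derived weight maps)."""
    s_ir, s_vis = kirsch_saliency(img_ir), kirsch_saliency(img_vis)
    w = s_ir / (s_ir + s_vis + 1e-12)
    return w * det_ir + (1.0 - w) * det_vis


if __name__ == "__main__":
    ir = cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE)     # hypothetical inputs
    vis = cv2.imread("vis.png", cv2.IMREAD_GRAYSCALE)
    base_ir, det_ir = two_scale_decompose(ir)
    base_vis, det_vis = two_scale_decompose(vis)
    fused = (fuse_base_layers(base_ir, base_vis, ir, vis)
             + fuse_detail_layers(det_ir, det_vis, ir, vis))
    cv2.imwrite("fused.png", np.clip(fused, 0, 255).astype(np.uint8))
```

    The 0.5 + 0.5*(S_ir - S_vis) weighting keeps the base-layer fusion close to an average while shifting weight toward the more salient source, which is the mechanism the abstract credits with avoiding the contrast loss of a plain weighted-average strategy.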