Laser & Optoelectronics Progress, Vol. 57, Issue 20, 201007 (2020)
Yu Shen, Xiaopeng Chen*, Yubin Yuan, Lin Wang, and Hongguo Zhang
Author Affiliations
  • School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou, Gansu 730070, China
    DOI: 10.3788/LOP57.201007
    Yu Shen, Xiaopeng Chen, Yubin Yuan, Lin Wang, Hongguo Zhang. Infrared and Visible Image Fusion Based on Significant Matrix and Neural Network[J]. Laser & Optoelectronics Progress, 2020, 57(20): 201007

    Abstract

    To address the severe loss of detail and poor visual quality that often occur in infrared and visible image fusion, a fusion method based on a multi-scale geometric transformation model is proposed. First, an improved visual saliency detection algorithm is used to detect the salient regions of the infrared and visible images and to construct a saliency matrix. Then, the infrared and visible images are decomposed by the non-subsampled shearlet transform (NSST) to obtain the corresponding low-frequency and high-frequency subbands. The low-frequency subbands are fused by adaptive weighting with the saliency matrix, and the high-frequency subbands are fused by a simplified pulse coupled neural network (PCNN) combined with the multi-direction sum-modified-Laplacian. Finally, the fused image is obtained by the inverse transform. Experimental results show that the proposed method effectively improves the contrast of the fused image and retains the details of the source images. The fused image has a good visual effect and performs well on a variety of objective evaluation indicators.
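    To make the data flow concrete, the sketch below mirrors the pipeline summarized in the abstract. It is not the authors' implementation: the function names (saliency_map, sum_modified_laplacian, fuse) and parameters (sigma, step) are chosen here for illustration; a Gaussian low-pass/residual split stands in for the multi-level NSST decomposition, a generic frequency-tuned-style map replaces the improved visual saliency detector, and an SML maximum-selection rule replaces the simplified PCNN for the high-frequency subbands.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import convolve2d

# Illustrative sketch of the fusion pipeline described in the abstract,
# under simplifying assumptions (Gaussian split instead of NSST, generic
# saliency map instead of the improved detector, SML max-selection instead
# of the simplified PCNN).

def saliency_map(img, sigma=5.0):
    """Stand-in saliency detector: distance of the smoothed image from its mean."""
    return np.abs(gaussian_filter(img, sigma) - img.mean())

def sum_modified_laplacian(band, step=1):
    """Sum-modified-Laplacian (SML): a local detail/clarity measure."""
    p = np.pad(band, step, mode="edge")
    ml = (np.abs(2 * band - p[:-2 * step, step:-step] - p[2 * step:, step:-step])
          + np.abs(2 * band - p[step:-step, :-2 * step] - p[step:-step, 2 * step:]))
    # Sum the modified Laplacian over a 3x3 neighbourhood.
    return convolve2d(ml, np.ones((3, 3)), mode="same", boundary="symm")

def fuse(ir, vis, sigma=3.0):
    # 1. Saliency matrix: per-pixel weight for the infrared low-frequency band.
    s_ir, s_vis = saliency_map(ir), saliency_map(vis)
    w_ir = s_ir / (s_ir + s_vis + 1e-12)

    # 2. Two-band "decomposition": low-pass approximation + detail residual
    #    (the paper uses a multi-level, multi-direction NSST instead).
    low_ir, low_vis = gaussian_filter(ir, sigma), gaussian_filter(vis, sigma)
    high_ir, high_vis = ir - low_ir, vis - low_vis

    # 3a. Low-frequency fusion: adaptive weighting by the saliency matrix.
    low_f = w_ir * low_ir + (1.0 - w_ir) * low_vis

    # 3b. High-frequency fusion: keep the coefficient with the larger SML
    #     (the paper feeds multi-direction SML into a simplified PCNN).
    mask = sum_modified_laplacian(high_ir) >= sum_modified_laplacian(high_vis)
    high_f = np.where(mask, high_ir, high_vis)

    # 4. The inverse transform reduces to summing the two bands in this sketch.
    return low_f + high_f
```

    Calling fuse(ir, vis) on two co-registered grayscale images normalized to [0, 1] returns a fused image; substituting a real NSST decomposition, the improved saliency detector, and a simplified PCNN for the stand-ins above recovers the structure of the proposed method.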