• Acta Photonica Sinica
  • Vol. 48, Issue 2, 210001 (2019)
FENG Xin*
    DOI: 10.3788/gzxb20194802.0210001
    FENG Xin. Fusion of Infrared and Visible Images Based on Tetrolet Framework[J]. Acta Photonica Sinica, 2019, 48(2): 210001

    Abstract

    A fusion method based on joint sparse representation and an improved pulse coupled neural network in the tetrolet framework was proposed. The source infrared and visible images were first decomposed by the tetrolet transform, without considering rotations and reflections of the tilings. For the low-frequency sub-band coefficients, joint sparse representation over a learned dictionary was used to accurately fit and fuse the coefficients. For the high-frequency sub-band coefficients, a fusion rule based on the improved pulse coupled neural network was applied, and the high-frequency coefficients of the fused image were selected according to the number of neuron firings. The fused coefficients were then inverse-transformed by the tetrolet frame to obtain the final fusion result. The results show that the proposed method effectively preserves the edge and detail features of the source images, and that the fused images have better visual quality, enhancing the observer's ability to perceive the scene and identify important targets. The method outperforms traditional transform-domain fusion methods in mutual information, gradient information, structural similarity, and visual-sensitivity indices, leading by 0.033 in structural similarity and 0.025 in gradient retention, which demonstrates its effectiveness.
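    The decompose-fuse-reconstruct pipeline summarized above can be sketched in a few lines. This is a minimal stand-in under stated assumptions, not the paper's implementation: a separable box-filter split replaces the tetrolet decomposition, plain averaging replaces joint sparse representation over a learned dictionary, and max-absolute selection replaces the PCNN firing-count rule for the high-frequency band.

    ```python
    import numpy as np

    def box_blur(img, k=5):
        # Crude low-pass filter (stand-in for the tetrolet low-frequency band).
        pad = k // 2
        padded = np.pad(img, pad, mode="reflect")
        out = np.zeros_like(img, dtype=float)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)

    def fuse(ir, vis, k=5):
        """Two-band fusion sketch: low-pass/residual split stands in for the
        tetrolet transform; averaging stands in for joint sparse representation;
        max-absolute selection stands in for the PCNN firing-count rule."""
        low_ir, low_vis = box_blur(ir, k), box_blur(vis, k)
        high_ir, high_vis = ir - low_ir, vis - low_vis
        # Low-frequency fusion (JSR stand-in: simple average).
        low_f = 0.5 * (low_ir + low_vis)
        # High-frequency fusion (PCNN stand-in: keep the stronger detail).
        high_f = np.where(np.abs(high_ir) >= np.abs(high_vis), high_ir, high_vis)
        # Inverse transform here is just recombining the two bands.
        return low_f + high_f
    ```

    The actual method replaces each stand-in with a learned component: the tetrolet transform adapts its tiling to local image geometry, and the PCNN's firing counts provide an activity measure for selecting high-frequency coefficients.
    
    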