• Acta Optica Sinica
  • Vol. 34, Issue 10, 1010002 (2014)
Wang Zhishe1,2,*, Yang Fengbao1, Chen Lei1, Peng Zhihao1, and Ji Li′e1
Author Affiliations
  • 1[in Chinese]
  • 2[in Chinese]
    DOI: 10.3788/aos201434.1010002
    Wang Zhishe, Yang Fengbao, Chen Lei, Peng Zhihao, Ji Li′e. SAR and Visible Image Enhanced Fusion Based on Texture Segmentation and Top-Hat Transformation[J]. Acta Optica Sinica, 2014, 34(10): 1010002

    Abstract

    To overcome the low contrast and loss of target information in existing synthetic aperture radar (SAR) and visible image fusion methods, an enhanced image fusion algorithm based on texture segmentation and top-hat transformation is proposed. The entropy texture image generated from the gray-level co-occurrence matrix of the SAR image is segmented by thresholding to extract the region of interest (ROI) of the SAR image. The SAR and visible images are then decomposed by the non-subsampled contourlet transform (NSCT). A region-based fusion rule is introduced for the low-frequency coefficients: within the region of interest, the low-frequency coefficients of the SAR image are selected. The significant bright and dark detail features are extracted by top-hat transformation, and the synthesized low-frequency coefficients are obtained by adding these bright and dark features to the low-frequency coefficients. The high-frequency subband coefficients are fused by selecting the maximum significance factor of local directional entropy. The fused image is obtained by applying the inverse NSCT to the fused coefficients. Experimental results demonstrate the validity of the proposed image fusion algorithm.
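    The pipeline outlined in the abstract can be illustrated with a small sketch. The Python code below (NumPy and scikit-image) shows two of the described components, the GLCM-entropy texture segmentation that yields the SAR region of interest and the top-hat extraction of bright/dark details used to enhance the low-frequency band. It is not the authors' implementation: the NSCT decomposition is omitted (no standard Python implementation is assumed here), and the window size, quantization level, structuring-element radius, Otsu thresholding, and the add-bright/subtract-dark combination are illustrative assumptions.

# Illustrative sketch only, not the paper's implementation. Assumed choices:
# 15x15 windows and 32 gray levels for the GLCM, a disk structuring element,
# and Otsu's method as the threshold on the entropy texture image.
import numpy as np
from skimage.feature import graycomatrix
from skimage.filters import threshold_otsu
from skimage.morphology import white_tophat, black_tophat, disk


def glcm_entropy_map(img_u8, win=15, levels=32):
    """Per-pixel entropy of the local gray-level co-occurrence matrix."""
    q = (img_u8.astype(np.float64) / 256.0 * levels).astype(np.uint8)  # quantize to `levels` gray levels
    half = win // 2
    pad = np.pad(q, half, mode="reflect")
    ent = np.zeros(img_u8.shape, dtype=np.float64)
    for i in range(img_u8.shape[0]):
        for j in range(img_u8.shape[1]):
            patch = pad[i:i + win, j:j + win]
            glcm = graycomatrix(patch, distances=[1], angles=[0],
                                levels=levels, symmetric=True, normed=True)
            p = glcm[:, :, 0, 0]
            p = p[p > 0]
            ent[i, j] = -np.sum(p * np.log2(p))  # GLCM entropy of the window
    return ent


def sar_roi(sar_u8):
    """Threshold the entropy texture image to obtain the SAR region of interest."""
    ent = glcm_entropy_map(sar_u8)
    return ent > threshold_otsu(ent)  # Otsu threshold is an assumed choice


def enhance_lowpass(lowpass, img_u8, radius=9):
    """Enhance a low-frequency band with top-hat bright and dark details.

    One common convention is shown (add bright, subtract dark); the exact
    combination used in the paper may differ.
    """
    se = disk(radius)
    bright = white_tophat(img_u8, se).astype(np.float64)
    dark = black_tophat(img_u8, se).astype(np.float64)
    return lowpass + bright - dark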