• Journal of Infrared and Millimeter Waves
  • Vol. 39, Issue 6, 810 (2020)
Zhi-Jian LI1,2, Feng-Bao YANG1,*, Yu-Bin GAO3, Lin-Na JI1, and Peng HU1
Author Affiliations
• 1School of Information and Communication Engineering, North University of China, Taiyuan 030051, China
• 2North University of China, Shuozhou, Shuozhou 036000, China
• 3Department of Mathematics, North University of China, Taiyuan 030051, China
DOI: 10.11972/j.issn.1001-9014.2020.06.021

    Abstract

To ensure fusion quality and computational efficiency simultaneously, a novel image fusion method based on multi-scale Gaussian filtering and morphological transform is proposed. The multi-scale Gaussian filtering is designed to decompose the source images into a series of detail images and approximation images. The multi-scale top- and bottom-hat decompositions are used to fully extract the bright and dark details of different scales in each approximation image. The multi-scale morphological inner- and outer-boundary decompositions are constructed to fully extract boundary information in each detail image. Experimental results demonstrate that the proposed method is comparable to, and in some cases better than, typical multi-scale decomposition-based fusion methods, while running much faster than advanced multi-scale decomposition-based methods such as NSCT and NSST.

    Introduction

Visible, infrared, and infrared polarization images captured by different sensors present complementary information about the same scene, and image fusion technology can combine them into a new, more accurate, comprehensive, and reliable description of the scene [1]. Fusion methods vary with the image sources and with the fusion requirements or purposes [2-4]. In general, fusion methods can be classified into pixel-, feature-, and decision-level methods. Compared with the latter two levels, pixel-level fusion retains as much of the source image data as possible, so it plays an important role in most image processing tasks. Major pixel-level image fusion methods can be put into four groups according to their underlying theories [5], namely multi-scale decomposition-based methods, sparse representation-based methods, methods in other domains, and methods combining different transforms. For the multi-scale decomposition-based methods, the decomposition scheme and the fusion rules are the two aspects that affect fusion quality and efficiency.

For the decomposition scheme, various methods have been proposed, such as the discrete wavelet transform (DWT) [6], dual-tree complex wavelet transform (DTCWT) [7], stationary wavelet transform (SWT) [8], wavelet packet transform (WPT) [9], non-subsampled contourlet transform (NSCT) [10-11], and non-subsampled shearlet transform (NSST) [12]. Many studies have shown that NSCT and NSST usually outperform other multi-scale decomposition-based methods in representing the 2-D singular signals contained in digital images [13]. However, the design of the multi-directional filter banks for NSCT and NSST is relatively complex and computationally time-consuming, which greatly reduces the efficiency of image fusion.

Fusion rules generally include rules for the low-frequency and high-frequency coefficients. The AVG-ABS rule is a simple fusion rule that uses the average rule to combine the low-frequency coefficients and the absolute maximum rule to combine the high-frequency coefficients. The AVG-ABS rule is easy to compute and simple to implement; however, it often causes distortions and artifacts [14-15]. To overcome these shortcomings and improve fusion quality, a large number of rules have been proposed [15-20]. The rules in Refs. [15-20] achieve satisfactory results, but they have the disadvantage of high computational complexity.

To ensure both fusion quality and computational efficiency, a novel multi-scale decomposition-based fusion method with dual decomposition structures is proposed. Our method improves image fusion quality and efficiency from the perspective of the decomposition scheme, while for the rule aspect it only uses the simple AVG-ABS rule. Firstly, inspired by the idea of constructing octaves in the SIFT [21] and SURF [22] algorithms, the source images are decomposed into a series of detail and approximation images by multi-scale Gaussian filters to construct undecimated pyramid structures. The multi-scale Gaussian filters have increasing standard deviation as well as up-scaling size. Secondly, for the approximation images, i.e., the top layers of the undecimated pyramid structures, multi-scale morphological top- and bottom-hat decompositions [23-24] are used to fully extract bright and dark details of different scales on the background, and the contrast of the fused layer is then improved by the absolute maximum rule. Thirdly, multi-scale morphological inner- and outer-boundary decompositions are constructed based on the idea of the multi-scale top- and bottom-hat decompositions. For each detail image, these two morphological decompositions are implemented to extract the boundary information, and the decomposed coefficients are then combined by choosing the absolute maximum. Finally, the fused image is reconstructed by taking the inverse transforms corresponding to the decompositions mentioned above.

    1 Related theories and work

    1.1 The pyramid transforms

The theory and mathematical representation for constructing a multiresolution pyramid transform scheme are presented in Ref. [25] and extended in Ref. [26]. A domain of signals $V_j$ is assigned to each level $j$. The analysis operator $\psi_j^{\uparrow}$ maps an image to a higher level in the pyramid, while the synthesis operator $\psi_j^{\downarrow}$ maps an image to a lower level, i.e. $\psi_j^{\uparrow}: V_j \to V_{j+1}$ and $\psi_j^{\downarrow}: V_{j+1} \to V_j$. The detail signal $y = x - \psi_j^{\downarrow}(\psi_j^{\uparrow}(x))$ contains the information of $x$ that is not present in $\psi_j^{\downarrow}(\psi_j^{\uparrow}(x))$, where the subtraction is an operator mapping $V_j \times V_j$ into the set $Y_j$. The decomposition process of an input image $f$ is expressed as Eq. (1):

$$f_0 \rightarrow \{y_0, f_1\} \rightarrow \{y_0, y_1, f_2\} \rightarrow \cdots \rightarrow \{y_0, y_1, \ldots, y_j, f_{j+1}\} \tag{1}$$

    where

$$f_0 = f \in V_0,\qquad f_{j+1} = \psi_j^{\uparrow}(f_j) \in V_{j+1},\qquad y_j = f_j - \psi_j^{\downarrow}(f_{j+1}) \in Y_j,\qquad j \ge 0 \tag{2}$$

    And the reconstruction process through the backward recursion is expressed as Eq.3:

$$f = f_0,\qquad f_j = \psi_j^{\downarrow}(f_{j+1}) + y_j,\qquad j \ge 0 \tag{3}$$

    Eq.1 and Eq.3 are called the pyramid transform and the inverse pyramid transform respectively.
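As a concrete illustration of Eqs. (1)-(3), the following minimal sketch (in Python; not part of the original paper) implements the generic pyramid decomposition and backward-recursion reconstruction, with the analysis and synthesis operators passed in as plain functions so that the Gaussian and morphological instantiations of the later sections could reuse the same skeleton. All names are illustrative.

```python
# Minimal sketch of the pyramid transform of Eqs. (1)-(3).
# `analysis` and `synthesis` are user-supplied level-dependent operators.
import numpy as np

def pyramid_decompose(f, analysis, synthesis, levels):
    """Return the detail images [y_0, ..., y_{levels-1}] and the approximation f_levels (Eq. (2))."""
    details, approx = [], np.asarray(f, dtype=float)
    for j in range(levels):
        next_approx = analysis(approx, j)                       # f_{j+1} = psi_j^up(f_j)
        details.append(approx - synthesis(next_approx, j))      # y_j = f_j - psi_j^down(f_{j+1})
        approx = next_approx
    return details, approx

def pyramid_reconstruct(details, approx, synthesis):
    """Backward recursion of Eq. (3): f_j = psi_j^down(f_{j+1}) + y_j."""
    f = approx
    for j in reversed(range(len(details))):
        f = synthesis(f, j) + details[j]
    return f
```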

    1.2 Scale space representation and multi-scale Gaussian filtering

The scale space of an image can be generated by convolving the image with Gaussian filters, and it has been successfully applied in SIFT [21] to detect key points that are invariant to scale. In SIFT, the scale space is divided into octaves. For each octave, the initial image is iteratively convolved with Gaussians of increasing standard deviation to generate a set of scale-space images (Gaussian images), and one of the Gaussian images is downsampled to obtain the initial image of the next octave. Then, the difference-of-Gaussians (DoG) images are obtained by subtracting adjacent Gaussian images. In SURF [22], in order to omit the downsampling step, the scale space is obtained by increasing the size of the filter.

Inspired by the above algorithms, the source image is repeatedly convolved with Gaussian filters whose standard deviation and size increase simultaneously, so as to construct an undecimated pyramid structure. The DoG images are then produced by subtracting adjacent Gaussian images. Accordingly, the transform scheme of such a pyramid is given by Eq. (4):

$$f_0 = f \in V_0,\qquad f_{j+1} = G_j * f_j \in V_{j+1},\qquad y_j = f_j - f_{j+1} \in Y_j,\qquad j \ge 0 \tag{4}$$

    where

$$G_j(x, y) = \frac{1}{2\pi\sigma_j^2}\exp\!\left(-\frac{x^2 + y^2}{2\sigma_j^2}\right) \tag{5}$$

$G_j$ is the Gaussian kernel (filter) at level $j$, whose size increases with its standard deviation, and $*$ denotes the convolution operation. The standard deviation $\sigma_j$ increases with $j$ and is controlled by the parameters $\sigma_0$ and $k$ chosen experimentally in Sec. 3.1. The source image $f$ can thus be decomposed into an approximation image and a set of detail images as shown in scheme (1), and it can also be exactly reconstructed through the following recursion:

$$f = f_0,\qquad f_j = f_{j+1} + y_j,\qquad j \ge 0 \tag{6}$$

    The four-level decomposition scheme is illustrated in Fig.1.


Figure 1. Example of four-level decomposition by multi-scale Gaussian filtering
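As an illustration of Eqs. (4)-(6), the following minimal sketch (not from the paper) performs the undecimated multi-scale Gaussian decomposition and its exact reconstruction with scipy. The schedule sigma_j = sigma0 * k**j is an assumption suggested by the parameters sigma0 and k reported in Sec. 3.1, and the kernel size is left to scipy's default truncation since the exact size used in the paper is not reproduced here.

```python
# Minimal sketch of the multi-scale Gaussian decomposition of Eqs. (4)-(6).
import numpy as np
from scipy import ndimage

def gaussian_decompose(f, levels=3, sigma0=0.6, k=1.4):
    """3-level decomposition; sigma0 and k default to the Table 2 values for infrared-visible images."""
    f = np.asarray(f, dtype=float)
    details, approx = [], f
    for j in range(levels):
        smoothed = ndimage.gaussian_filter(approx, sigma=sigma0 * k**j)  # f_{j+1} = G_j * f_j
        details.append(approx - smoothed)                                # y_j = f_j - f_{j+1} (DoG image)
        approx = smoothed
    return details, approx

def gaussian_reconstruct(details, approx):
    # Eq. (6): applying f_j = f_{j+1} + y_j from the top level down reduces to a plain sum.
    return approx + sum(details)
```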

    1.3 Multi-scale morphological transforms

The multi-scale top-hat transform, using structuring elements of up-scaling size, can extract the light and dark details at different image scales for image fusion [24]. Based on the idea of constructing the multi-scale top-hat transform, the multi-scale morphological inner-boundary transform is constructed. These two morphological transforms can be expressed as Eq. (4), with the Gaussian filtering $G_j * f_j$ replaced by the morphological opening operation $\mathrm{Open}_{b_j}(f_j)$ and the erosion operation $\mathrm{Erode}_{b_j}(f_j)$, respectively, where $b_j$ is the structuring element at scale $j$. For the purpose of extracting details of different scales, the scale of the structuring element increases with $j$. The corresponding inverse transforms can be expressed as Eq. (6).

    The multi-scale morphological bottom-hat transform and its inverse are shown as follows

$$f_0 = f \in V_0,\qquad f_{j+1} = \mathrm{Close}_{b_j}(f_j) \in V_{j+1},\qquad y_j = f_{j+1} - f_j \in Y_j,\qquad j \ge 0 \tag{7}$$

$$f = f_0,\qquad f_j = f_{j+1} - y_j,\qquad j \ge 0 \tag{8}$$

where the analysis operator is the morphological closing operation $\mathrm{Close}_{b_j}$, with the scale of $b_j$ also increasing with $j$. The multi-scale morphological outer-boundary transform and its inverse are similar to the bottom-hat transform and its inverse, with the closing operation replaced by the dilation operation $\mathrm{Dilate}_{b_j}$.
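The four multi-scale morphological decompositions of this subsection share one recursion; the sketch below (not from the paper) covers them with scipy's grey-scale morphology. The structuring-element sizes used here (a disk of radius j+1 or a (2j+3)x(2j+3) square) are illustrative assumptions; the paper only states that the scale of b_j increases with j, and that disks are used for the top-/bottom-hat decompositions and squares for the boundary decompositions (Sec. 3.1).

```python
# Minimal sketch of the multi-scale morphological decompositions of Sec. 1.3.
import numpy as np
from scipy import ndimage

def _disk(radius):
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x * x + y * y) <= radius * radius

def multiscale_morph_decompose(f, op, levels=2, subtract_original=False, se='disk'):
    """op is ndimage.grey_opening / grey_erosion (top-hat / inner boundary, y_j = f_j - f_{j+1})
    or ndimage.grey_closing / grey_dilation with subtract_original=True
    (bottom-hat / outer boundary, y_j = f_{j+1} - f_j, Eq. (7))."""
    f = np.asarray(f, dtype=float)
    details, approx = [], f
    for j in range(levels):
        footprint = _disk(j + 1) if se == 'disk' else np.ones((2 * j + 3, 2 * j + 3), bool)
        nxt = op(approx, footprint=footprint)
        details.append(nxt - approx if subtract_original else approx - nxt)
        approx = nxt
    return details, approx

# Example: two-level top-hat and bottom-hat decompositions of an approximation image fK1.
# yt, ft = multiscale_morph_decompose(fK1, ndimage.grey_opening, 2)
# yb, fb = multiscale_morph_decompose(fK1, ndimage.grey_closing, 2, subtract_original=True)
```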

    2 Proposed method framework

The proposed fusion method comprises three processes: multi-scale decomposition, fusion, and reconstruction.

    2.1 Multi-scale decomposition process

The K-level decomposition of a given source image $f$ by scheme (4) has the form

$$f \rightarrow \{y_1, y_2, \ldots, y_k, \ldots, y_K, f_{K+1}\} \tag{9}$$

where $y_k$ represents the detail image at level $k$ and $f_{K+1}$ denotes the approximation image of this multi-scale structure.

$f_{K+1}$ is a coarse representation of $f$ and usually still contains a few bright and dark details; thus the multi-scale top- and bottom-hat decompositions are used to extract, at different scales, bright objects on a dark background and dark objects on a bright background, respectively. Hence, $f_{K+1}$ can be decomposed by the schemes mentioned in subsection 1.3 as

$$f_{K+1} \rightarrow \{y_t^{(1)}, y_t^{(2)}, \ldots, y_t^{(l)}, \ldots, y_t^{(M)}, f_t^{(M+1)}\},\qquad f_{K+1} \rightarrow \{y_b^{(1)}, y_b^{(2)}, \ldots, y_b^{(l)}, \ldots, y_b^{(M)}, f_b^{(M+1)}\} \tag{10}$$

where $y_t^{(l)}$ and $y_b^{(l)}$ represent the detail images at level $l$ obtained by the top- and bottom-hat decomposition processes, respectively, and $f_t^{(M+1)}$ and $f_b^{(M+1)}$ denote the approximation images of the multi-scale top- and bottom-hat structures, respectively. Figure 2 gives an example of three-level top- and bottom-hat decompositions.

Figure 2. Example of three-level top- and bottom-hat decompositions

The detail image $y_k$ in scheme (9) comprises various details such as edges and lines; thus the multi-scale inner- and outer-boundary transforms mentioned in subsection 1.3 are used to extract inner- and outer-boundary information of different scales. Hence, $y_k$ can be decomposed as

$$y_k \rightarrow \{y_i^{(k,1)}, y_i^{(k,2)}, \ldots, y_i^{(k,l)}, \ldots, y_i^{(k,N_k)}, f_i^{(k,N_k+1)}\},\qquad y_k \rightarrow \{y_o^{(k,1)}, y_o^{(k,2)}, \ldots, y_o^{(k,l)}, \ldots, y_o^{(k,N_k)}, f_o^{(k,N_k+1)}\} \tag{11}$$

where $y_i^{(k,l)}$ and $y_o^{(k,l)}$ represent the detail images at level $l$ of $y_k$ obtained by the inner- and outer-boundary decomposition processes, respectively, and $f_i^{(k,N_k+1)}$ and $f_o^{(k,N_k+1)}$ are the approximation images of $y_k$ at the highest level of the multi-scale inner- and outer-boundary structures, respectively. Figure 3 gives an example of three-level inner- and outer-boundary decompositions.

Figure 3. Example of three-level inner- and outer-boundary decompositions

    2.2 Fusion process

In this paper, the composite approximation coefficients of the approximation images in the multi-scale top- and bottom-hat structures are taken as the average of the approximation coefficients of the source images. For the composite detail coefficients of the detail images, the maximum selection rules described below are used.

    2.2.1 Fusion rules for the multi-scale top- and bottom-hat structures

The vector coordinate $n$ is used here to denote a pixel location in an image. For instance, $y_t^{(l)}(n)|_A$ represents the detail coefficient of the multi-scale top-hat structure at location $n$ within level $l$ of source image A. The notation without the coordinate denotes an image, e.g., $y_t^{(l)}|_A$ refers to the detail image.

An arbitrary fused detail coefficient $y_t^{(l)}(n)|_F$ and the fused approximation coefficient $f_t^{(M+1)}(n)|_F$ of the multi-scale top-hat structure are obtained through

$$y_t^{(l)}(n)|_F = \max\{y_t^{(l)}(n)|_A,\ y_t^{(l)}(n)|_B\},\qquad f_t^{(M+1)}(n)|_F = \alpha_t f_t^{(M+1)}(n)|_A + \beta_t f_t^{(M+1)}(n)|_B \tag{12}$$

The weights $\alpha_t$ and $\beta_t$ are both set to 0.5, which preserves the mean intensity of the two source images. Likewise, $y_b^{(l)}(n)|_F$ and $f_b^{(M+1)}(n)|_F$ of the multi-scale bottom-hat structure are obtained through

$$y_b^{(l)}(n)|_F = \max\{y_b^{(l)}(n)|_A,\ y_b^{(l)}(n)|_B\},\qquad f_b^{(M+1)}(n)|_F = \alpha_b f_b^{(M+1)}(n)|_A + \beta_b f_b^{(M+1)}(n)|_B \tag{13}$$

with $\alpha_b = \beta_b = 0.5$.

The selection rule in Eq. (12) means that we choose the brighter ones among the bright details, and the selection rule in Eq. (13) means that we choose the darker ones among the dark details. In this way, the bright and dark details of different scales can be fully extracted, and hence the contrast of each layer can be improved.
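A minimal sketch of the fusion rules of Eqs. (12)-(13) is given below (not from the paper): an element-wise maximum for the top- and bottom-hat detail images and an equally weighted average (alpha = beta = 0.5) for the top-level approximation images.

```python
# Minimal sketch of Eqs. (12)-(13) applied image-wise to the decomposed coefficients.
import numpy as np

def fuse_hat_structure(details_a, approx_a, details_b, approx_b):
    """details_*: lists of detail images of one hat structure; approx_*: its approximation image."""
    fused_details = [np.maximum(da, db) for da, db in zip(details_a, details_b)]
    fused_approx = 0.5 * approx_a + 0.5 * approx_b
    return fused_details, fused_approx
```

The same function serves both the top-hat and the bottom-hat structures, since both use the plain maximum on their (non-negative) detail images.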

    2.2.2 Fusion rules for the multi-scale inner- and outer-boundary structures

For an arbitrary fused detail coefficient $y_i^{(k,l)}(n)|_F$ of the multi-scale inner-boundary structure, only the absolute maximum selection rule is used:

$$y_i^{(k,l)}(n)|_F = \begin{cases} y_i^{(k,l)}(n)|_A, & \text{if } \left|y_i^{(k,l)}(n)|_A\right| > \left|y_i^{(k,l)}(n)|_B\right| \\ y_i^{(k,l)}(n)|_B, & \text{otherwise} \end{cases} \tag{14}$$

The fused approximation coefficient $f_i^{(k,N_k+1)}(n)|_F$ is obtained in the same way. In this manner, boundary information such as edges and lines of different scales can be well preserved. Likewise, arbitrary $y_o^{(k,l)}(n)|_F$ and $f_o^{(k,N_k+1)}(n)|_F$ of the multi-scale outer-boundary structure are also obtained by the absolute maximum selection rule.
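The absolute maximum selection of Eq. (14) reduces to a single element-wise comparison; the sketch below (not from the paper) applies it to both the detail and the approximation images of a boundary structure.

```python
# Minimal sketch of the absolute maximum selection rule of Eq. (14).
import numpy as np

def fuse_abs_max(coeff_a, coeff_b):
    """Pick, at every pixel, the coefficient with the larger absolute value."""
    return np.where(np.abs(coeff_a) > np.abs(coeff_b), coeff_a, coeff_b)

def fuse_boundary_structure(details_a, approx_a, details_b, approx_b):
    fused_details = [fuse_abs_max(da, db) for da, db in zip(details_a, details_b)]
    return fused_details, fuse_abs_max(approx_a, approx_b)
```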

    2.3 Reconstruction process

According to Eqs. (6) and (8), the reconstruction of the fused approximation image $f_{K+1}|_F$ can be obtained through the multi-scale top- and bottom-hat inverse transforms as

$$f_{K+1}|_F = \frac{1}{2}\left[\gamma_{M+1} f_t^{(M+1)}|_F + \sum_{l=0}^{M}\gamma_l\, y_t^{(l)}|_F\right] + \frac{1}{2}\left[\kappa_{M+1} f_b^{(M+1)}|_F - \sum_{l=0}^{M}\kappa_l\, y_b^{(l)}|_F\right] \tag{15}$$

which means that the bright and dark information are considered equally important to the source image. In addition, we attach equal importance to the features of different scale levels; thus the weights $\gamma$ and $\kappa$ in Eq. (15) are all set to 1.

Similarly, the inner- and outer-boundary information are considered to be equally important to the source image, and so are the features of different scale levels. Thus, according to Eqs. (6) and (8), the reconstruction of an arbitrary fused detail image $y_k|_F$ through the multi-scale inner- and outer-boundary inverse transforms can be obtained as

$$y_k|_F = \frac{1}{2}\left[f_i^{(k,N_k+1)}|_F + \sum_{l=1}^{N_k} y_i^{(k,l)}|_F\right] + \frac{1}{2}\left[f_o^{(k,N_k+1)}|_F - \sum_{l=1}^{N_k} y_o^{(k,l)}|_F\right] \tag{16}$$

Finally, the fused image is reconstructed by

$$f|_F = f_0|_F = f_{K+1}|_F + \sum_{k=1}^{K} y_k|_F \tag{17}$$
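The reconstruction step of Eqs. (15)-(17) is a weighted recombination of the fused structures; the sketch below (not from the paper) follows the text and sets all weights gamma and kappa to 1.

```python
# Minimal sketch of the reconstruction of Eqs. (15)-(17), with unit weights.
import numpy as np

def reconstruct_approx(yt_f, ft_f, yb_f, fb_f):
    """Eq. (15): average of the top-hat and bottom-hat inverse transforms."""
    return 0.5 * (ft_f + sum(yt_f)) + 0.5 * (fb_f - sum(yb_f))

def reconstruct_detail(yi_f, fi_f, yo_f, fo_f):
    """Eq. (16): average of the inner- and outer-boundary inverse transforms."""
    return 0.5 * (fi_f + sum(yi_f)) + 0.5 * (fo_f - sum(yo_f))

def reconstruct_fused(approx_f, detail_images_f):
    """Eq. (17): fused image = fused approximation + all fused detail images."""
    return approx_f + sum(detail_images_f)
```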

    3 Experiments

    3.1 Experimental setups

In order to validate the performance of the proposed method, experiments are conducted on two categories of source images: ten pairs of infrared-visible images (Fig. 4(a)) and eight pairs of infrared intensity-polarization images (Fig. 4(b)). The two source images in each pair are pre-registered, and the size of each image is 256×256 pixels. The experiments in this paper are implemented in Matlab 2016b and run on a desktop with an Intel(R) Core(TM) i5-6500 CPU @ 3.20 GHz and 16.0 GB RAM.


Figure 4. The two kinds of source images: (a) infrared-visible images, (b) infrared intensity-polarization images

Various pixel-level multi-scale decomposition-based methods, including DWT, DTCWT, SWT, WPT, NSCT, and NSST, are compared with the proposed method. All the compared methods adopt the simple AVG-ABS rule. According to Ref. [13], most of the methods mentioned above perform well when their decomposition levels are set to 3; thus, for the purpose of making reliable and persuasive comparisons, the decomposition levels of all the compared methods are set to 3. To make each method achieve good performance, the other parameters are also set as suggested by Ref. [13]; some of them are listed in Table 1.

Methods   Pyramid filter   Filter      Levels
DWT       rbio1.3          -           3
DTCWT     5-7              q-6         3
SWT       bior1.3          -           3
WPT       bior1.3          -           3
NSCT      maxflat          dmaxflat5   4, 8, 16
NSST      maxflat          -           4, 8, 16

Table 1. The parameters set in the compared methods. 'Filter' represents the orientation filter; 'Levels' denotes the decomposition levels and, for NSCT and NSST, the corresponding number of orientations at each level.

For NSST, the sizes of the local support of the shear filters at each level are selected as 8, 16, and 32. As for the proposed method, the parameters $\sigma_0$ and $k$ of the multi-scale Gaussian filtering process in Eq. (5) are selected experimentally. In this experiment, the source images are decomposed by a 3-layer multi-scale Gaussian decomposition, and different fused images are obtained by varying $\sigma_0$ and $k$. During the fusion process, the AVG-ABS rule is also adopted. For each pair of values of $\sigma_0$ and $k$, every fused image is evaluated by the seven objective assessment metrics mentioned in subsection 3.2. For each metric, its mean value is obtained by averaging the evaluation results over the fused images, and the seven mean values are then summed. Figure 5 gives surface plots showing the variation of this sum with $\sigma_0$ and $k$, from which the optimal values of $\sigma_0$ and $k$ for the two kinds of source images are obtained (a sketch of this parameter search is given after Table 2). The structuring elements in the multi-scale inner- and outer-boundary decompositions are selected as squares, and in the multi-scale top- and bottom-hat decompositions they are chosen to be disks. $\sigma_0$ and $k$ in Eq. (5) and the parameters $K$, $M$, $N_1$, $N_2$, and $N_3$ in schemes (9), (10), and (11) are set as shown in Table 2 to make the proposed method achieve good performance.

Figure 5. Surface plots of the sum of the seven objective metrics as a function of σ0 and k

Source images                     σ0    k     [K, M, N1, N2, N3]
Infrared-visible                  0.6   1.4   [3, 2, 0, 1, 2]
Infrared intensity-polarization   0.6   1.1   [3, 2, 1, 1, 2]

Table 2. The parameters of the proposed method for the two kinds of source images.
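The parameter search of Sec. 3.1 can be expressed as a small grid search; the sketch below (not from the paper) ranks each (sigma0, k) pair by the sum of the seven metric means over all image pairs. The grid ranges, fuse_images(), and the metric callables are hypothetical placeholders introduced for illustration only.

```python
# Minimal sketch of the experimental selection of sigma0 and k described in Sec. 3.1.
import numpy as np

def select_parameters(image_pairs, metrics, fuse_images,
                      sigma0_grid=np.arange(0.2, 2.01, 0.2),   # assumed search range
                      k_grid=np.arange(1.1, 2.01, 0.1)):       # assumed search range
    best, best_score = None, -np.inf
    for sigma0 in sigma0_grid:
        for k in k_grid:
            fused = [fuse_images(a, b, sigma0=sigma0, k=k) for a, b in image_pairs]
            # Sum of the per-metric means over all fused images (the quantity plotted in Fig. 5).
            score = sum(np.mean([m(f, a, b) for f, (a, b) in zip(fused, image_pairs)])
                        for m in metrics)
            if score > best_score:
                best, best_score = (sigma0, k), score
    return best
```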

    3.2 Objective assessment metrics

Seven representative metrics, i.e., Q0 [27], QE [28], QAB/F [29], information entropy (IE) [30], mutual information (MI) [31], Tamura contrast (TC) [32], and visual information fidelity (VIF) [33], are employed to evaluate the proposed method comprehensively. The parameter in TC is chosen to be 4.
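As a brief illustration, two of the listed metrics, IE and MI, can be computed from 256-bin histograms of 8-bit images as in the sketch below (not from the paper); the remaining metrics follow their cited definitions and are omitted here.

```python
# Minimal sketch of histogram-based IE and MI for 8-bit grayscale images.
import numpy as np

def information_entropy(img):
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(a, b):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=256, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz]))

# The MI fusion metric of Ref. [31] evaluates MI(F, A) + MI(F, B) for a fused image F.
```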

    3.3 Experimental results

    3.3.1 Subjective assessment

In this section, the subjective assessment of the fusion methods is performed by comparing the visual results obtained from the compared methods and the proposed method. One sample pair of each type of source images is selected for visual comparison, as shown in Figs. 6 and 7.


Figure 6. Fusion results of one pair of the infrared-visible images: (a) infrared image, (b) visible image, (c)-(i) the fusion results of the DWT, DTCWT, SWT, WPT, NSCT, NSST, and the proposed methods.


Figure 7. Fusion results of one pair of the infrared intensity-polarization images: (a) infrared intensity image, (b) infrared polarization image, (c)-(i) the fusion results of the DWT, DTCWT, SWT, WPT, NSCT, NSST, and the proposed methods.

In Fig. 6, both the DWT and WPT methods distort the edges of the roof, which is shown clearly in the magnified squares. The DTCWT, SWT, NSCT, and NSST methods produce artificial edges in the sky around the roof, while the result obtained by the proposed method is free from such artifacts and brightness distortions. In addition, the walls and the clouds in the sky in Fig. 6(i) are brighter than those in Figs. 6(g) and (h), which means that the fused image of the proposed method has better contrast.

The edges of the car are distorted heavily in Fig. 7(f) and slightly in Figs. 7(c-e), which is shown more clearly in the corresponding regions of the magnified squares. Figures 7(c-h) also show some artifacts around the edges of the car, whereas Fig. 7(i) contains no such distortions or artifacts. In addition, the car in the magnified square of Fig. 7(i) is darker than those in Figs. 7(g) and (h), which demonstrates that the proposed method provides better contrast.

The above experiments confirm that the proposed method gives better visual results for the two categories of source images. Although it adopts the simple AVG-ABS rule, the proposed method does not generate artifacts or distortions and simultaneously preserves the detail information of the source images as much as possible.

    3.3.2 Objective assessment

The objective assessment of the seven multi-scale decomposition-based methods is shown in Table 3. For the infrared-visible images, the proposed method performs the best on all seven metrics. For the infrared intensity-polarization images, the proposed method performs the best on five metrics and the second best on Q0 and QE. It can also be seen from Table 3 that, compared with the other six methods, the proposed method always has the best assessment on the metrics QAB/F, IE, MI, TC, and VIF. This means that the proposed method sufficiently transfers the original information of the source images, including edges and brightness details, to the fused image and improves the contrast of the fused image.

Images                            Methods    Q0       QAB/F    QE       IE       MI       TC       VIF
Infrared-visible                  DWT        0.4391   0.4858   0.2268   6.6601   2.1658   0.2588   0.2936
                                  DTCWT      0.4446   0.5173   0.2579   6.6830   2.2235   0.2937   0.2949
                                  SWT        0.4452   0.5097   0.2457   6.6155   2.1872   0.2203   0.2784
                                  WPT        0.4079   0.3952   0.1614   6.6385   2.1949   0.2745   0.2738
                                  NSCT       0.4669   0.5281   0.2595   6.6961   2.2633   0.2940   0.3145
                                  NSST       0.4653   0.5231   0.2570   6.6858   2.2575   0.2902   0.3103
                                  Proposed   0.4757   0.5356   0.2689   6.7359   2.4707   0.3177   0.3626
Infrared intensity-polarization   DWT        0.3853   0.4206   0.1676   6.4782   2.2664   0.3476   0.2196
                                  DTCWT      0.3944   0.4585   0.2089   6.5707   2.3415   0.4684   0.2437
                                  SWT        0.3875   0.4391   0.1931   6.4730   2.3429   0.3308   0.2300
                                  WPT        0.3469   0.3439   0.1198   6.4052   2.2917   0.4437   0.1972
                                  NSCT       0.4133   0.4675   0.1977   6.5646   2.3917   0.4585   0.2574
                                  NSST       0.4138   0.4641   0.1995   6.5740   2.3898   0.4597   0.2592
                                  Proposed   0.4134   0.4690   0.2013   6.6580   2.6241   0.5478   0.3137

    Table 3. Objective assessment of all methods (the best result of each metric is highlighted in bold).

    3.3.3 Comparison of computational efficiency

To verify the efficiency of the proposed method, an experiment is conducted on the image sequences named "Nato_camp", "Tree", and "Duine" from the TNO Image Fusion Dataset [34]. Table 4 shows the average processing time per frame of all methods. Compared with the DWT, DTCWT, SWT, and WPT methods, the proposed method is more time-consuming because these four methods contain one type of multi-scale decomposition while the proposed method contains two, i.e., the multi-scale decomposition using multi-scale Gaussian filtering and the multi-scale morphological decomposition, as described in Sec. 2. Compared with the NSCT and NSST methods, which also contain two kinds of multi-scale decomposition, the proposed method is far more efficient, mainly because the design of the multi-directional filter banks for NSCT and NSST is relatively complex and the processing speed of multi-directional filtering is much lower than that of multi-scale morphological operations.

Image sequences   DWT      DTCWT    SWT      WPT      NSCT      NSST     Proposed
Nato_camp         0.0180   0.0362   0.0647   0.1401   24.5173   2.3072   0.1419
Tree              0.0165   0.0357   0.0643   0.1398   24.8215   2.2923   0.1411
Duine             0.0171   0.0361   0.0641   0.1406   24.5841   2.2881   0.1412

Table 4. Average processing time (unit: s) comparison of the seven methods. Each value represents the average run time per frame for a given sequence.

    4 Conclusions

Experiments on both visual quality and objective assessment demonstrate that, although it adopts the simple AVG-ABS rule, the proposed method does not generate artifacts or distortions and performs very well in terms of information preservation and contrast improvement. Under the premise of ensuring fusion quality, the proposed method is also shown to be computationally efficient. It therefore provides an option for fusion scenarios requiring both high quality and high computational efficiency, such as fast fusion of high-resolution images and video fusion.

    References

    [1] J Ma, Y Ma, C Li. Infrared and visible image fusion methods and applications: A survey. Information Fusion, 45(2018).

    [2] J Xin, J Qian, S Yao. A Survey of infrared and visual image fusion methods. Infrared Physics & Technology, 85, 478-501(2017).

    [3] F Yang, H Wei. Fusion of infrared polarization and intensity images using support value transform and fuzzy combination rules. Infrared Physics & Technology, 60, 235-243(2013).

    [4] P Hu, F Yang, H Wei. Research on constructing difference-features to guide the fusion of dual-modal infrared images. Infrared Physics & Technology, 102, 102994(2019).

    [5] S Li, X Kang, L Fang. Pixel-level image fusion: A survey of the state of the art. Information Fusion, 33, 100-112(2017).

    [6] K Amolins, Y Zhang, P Dare. Wavelet based image fusion techniques: An introduction, review and comparison. Isprs Journal of Photogrammetry & Remote Sensing, 62, 249-263(2007).

    [7] I W Selesnick, R G Baraniuk, N C Kingsbury. The dual-tree complex wavelet transform. IEEE Signal Processing Magazine, 22, 123-151(2005).

    [8] D Singh, D Garg, H S Pannu. Journal of Photographic Science, 65, 108-114(2017).

    [9] B Walczak, B V D Bogaert, D L Massart. Application of wavelet packet transform in pattern recognition of near-IR data. Analytical Chemistry, 68, 1742-1747(1996).

    [10] A L Da Cunha, J Zhou, M N Do. The nonsubsampled contourlet transform: theory, design, and applications. IEEE Transactions on Image Processing, 15, 3089-3101(2006).

    [11] Z Zhu, M Zheng, G Qi. A phase congruency and local laplacian energy based multi-modality medical image fusion method in NSCT domain. IEEE Access, 7, 20811-20824(2019).

    [12] Y Ming, L Wei, Z Xia. A novel image fusion algorithm based on nonsubsampled shearlet transform. Optik - International Journal for Light and Electron Optics, 125, 2274-2282(2014).

    [13] S Li, B Yang, J Hu. Performance comparison of different multi-resolution transforms for image fusion. Information Fusion, 12, 74-84(2011).

    [14] S Li, X Kang, J Hu. Image fusion with guided filtering. IEEE Transactions on Image Processing, 22, 2864-2875(2013).

    [15] J Du, W Li, B Xiao. Anatomical-functional image fusion by information of interest in local laplacian filtering domain. IEEE Transactions on Image Processing, 26, 5855-5865(2017).

    [16] G Bhatnagar, J Wu, Z Liu. Directive contrast based multimodal medical image fusion in NSCT domain. IEEE Transactions on Multimedia, 9, 1014-1024(2013).

    [17] J Gong, B Wang, Q Lin. Image fusion method based on improved NSCT transform and PCNN model(2016).

    [18] T Ma, M Jie, F Bin. Multi-scale decomposition based fusion of infrared and visible image via total variation and saliency analysis. Infrared Physics & Technology, 92, 154-162(2018).

[19] M Yin. Medical image fusion with parameter-adaptive pulse coupled-neural network in nonsubsampled shearlet transform domain. IEEE Transactions on Instrumentation & Measurement, 68, 1-16(2018).

    [20] Y Li, Y Sun, X Huang. An image fusion method based on sparse representation and sum modified-laplacian in NSCT domain. Entropy, 20, 522(2018).

    [21] D G Lowe. Distinctive image features from scale-invariant key points. International Journal of Computer Vision, 60, 91-110(2004).

    [22] H Bay, A Ess, T Tuytelaars. Speeded-up robust features (SURF). Computer Vision & Image Understanding, 110, 346-359(2008).

    [23] S Mukhopadhyay, B Chanda. Fusion of 2D grayscale images using multiscale morphology. Pattern Recognition, 34, 1939-1949(2001).

    [24] X Bai, S Gu, F Zhou. Multiscale top-hat selection transform based infrared and visual image fusion with emphasis on extracting regions of interest. Infrared Physics & Technology, 60, 81-93(2013).

[25] J Goutsias, H M Heijmans. Nonlinear multiresolution signal decomposition schemes, Part I: morphological pyramids. IEEE Transactions on Image Processing, 9, 1862-1876(2000).

    [26] G Piella. A general framework for multiresolution image fusion: from pixels to regions. Information Fusion, 4, 259-280(2003).

    [27] Z Wang, A C Bovik. A universal image quality index. IEEE Signal Processing Letters, 9, 81-84(2002).

    [28] G Piella, H Heijmans. A new quality metric for image fusion(2003).

    [29] R Hong. Objective image fusion performance measure. Military Technical Courier, 56, 181-193(2000).

    [30] W J Roberts, J A A Van, F Ahmed. Assessment of image fusion procedures using entropy, image quality, and multispectral classification. Journal of Applied Remote Sensing, 2, 1-28(2008).

    [31] G Qu, D Zhang, P Yan. Information measure for performance of image fusion. Electronics Letters, 38, 313-315(2002).

[32] H Tamura, S Mori, T Yamawaki. Textural features corresponding to visual perception. IEEE Transactions on Systems, Man, and Cybernetics, 8, 460-473(1978).

    [33] Y Han, Y Cai, Y Cao. A new image fusion performance metric based on visual information fidelity. Information Fusion, 14, 127-135(2013).

    [34] http://figshare.com/articles/TNO_Image_Fusion_Dataset/1008029
