• Journal of Infrared and Millimeter Waves
  • Vol. 39, Issue 4, 513 (2020)
Han-Lu ZHU1, Xu-Zhong ZHANG2, Xin CHEN3, Ting-Liang HU3, and Peng RAO3,*
Author Affiliations
  • 1Key Laboratory of Intelligent Infrared Perception, Chinese Academy of Sciences, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China
  • 2Huzhou Center for Applied Technology Research and Industrialization, Chinese Academy of Sciences, Huzhou 1000, China
  • 3Key Laboratory of Intelligent Infrared Perception, Chinese Academy of Sciences, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China
    DOI: 10.11972/j.issn.1001-9014.2020.04.016
    Han-Lu ZHU, Xu-Zhong ZHANG, Xin CHEN, Ting-Liang HU, Peng RAO. Dim small targets detection based on horizontal-vertical multi-scale grayscale difference weighted bilateral filtering[J]. Journal of Infrared and Millimeter Waves, 2020, 39(4): 513

    Abstract

    In order to effectively detect dim small infrared targets against complex backgrounds, a single-frame method based on the horizontal-vertical multi-scale grayscale difference (HV-MSGD) is proposed to enhance weak targets, while strong background edges are suppressed using both the spatial distance and the grayscale difference between pixels. The target area is discontinuous with its surrounding area; to strengthen this difference, HV-MSGD combined with bilateral filtering (BF) increases the intensity of the target while suppressing the background. Candidate targets are then extracted by combined adaptive local and global threshold segmentation. To further verify the impact on single-frame detection, the above single-frame detection algorithm is combined with an improved unscented particle filter (UPF) to implement trajectory detection. The experimental results show that this method outperforms other methods at low signal-to-noise ratio (SNR): it enhances the target while suppressing the background, with an enhancement effect 6-30 times that of other methods. In the experiments, the input signal-to-noise ratios were 2.78, 1.77, 1.79, 1.13, and 1.16, respectively. After image processing, the background suppression factors (BSFs) were 13.48, 21.33, 11.73, 20.63, and 121.92, and the signal-to-noise ratio gains (GSNRs) were 40.09, 71.37, 27.53, 12.65, and 131, respectively. The probability of detection (Pd) of this method is also superior to that of other algorithms: at false alarm rates (FARs) of 5×10⁻⁴, 1×10⁻³, 1×10⁻³, 1×10⁻⁵, and 7×10⁻⁶, the Pd values of the five sets of real sequence images are 94.4%, 92.2%, 91.3%, 95.6%, and 96.7%, respectively.

    Introduction

    Prior knowledge of the shape, size, and texture of dim small targets is almost non-existent, which limits the development of infrared search and tracking systems [1-5]. In general, the background is correlated in both the spatial and temporal domains and primarily occupies the low-frequency components of the infrared image, whereas the target is less correlated with the background and noise and mainly occupies the high-frequency components. When such a target is detected from a space-based platform, the background is generally complex, with the small target typically appearing against clouds or water, causing it to become submerged in clutter and background. In such situations it is difficult to separate the target from the complex background, and the filtered output retains a large amount of edge information. This led to the proposal of algorithms capable of protecting edges while suppressing the background [6-8], such as anisotropic diffusion (PM) [9-11] and the bilateral filter (BF) [12-14]; however, these algorithms cannot separate the background gradient from the target gradient. Dim small targets are discontinuous with their neighboring regions in the image and are concentrated in relatively small areas, which can be considered uniform and dense regions. The discontinuity is essentially based on the average grayscale difference between adjacent pixels [15]. Researchers have proposed local measurement methods to distinguish different regions and their surroundings in the image, such as the local entropy operator [15-21] and local mutation weighted entropy [22-23], which can effectively measure the target area and surrounding area. However, measuring the difference between the object and the background grayscale information did not improve the separability of the background gradient and target gradient.

    Based on the above analysis, this study designs a horizontal-vertical multi-scale grayscale difference (HV-MSGD) weighted bilateral filtering (BF) method to detect dim small targets in infrared imagery. The method has the following advantages: (1) it measures the discontinuity between the target area and background area by comparing the regional standard deviation to determine the window size; (2) the HV-MSGD weighting operator uses the selected window size to expand the difference between the target and background areas, achieving target enhancement; (3) background edge information is suppressed by considering both the spatial distance and the grayscale difference of each pixel in the image; (4) the combination of global and local threshold segmentation (GLTS) prevents extremely strong signals, such as noise and clutter, from influencing the target itself, which eliminates noise and detects the real target signal. In other words, HV-MSGD is combined with BF to improve the separability of the target and background gradients, thereby increasing the energy of the target while suppressing edge information in the image. Experiments show that this method is superior to other algorithms in detecting weak targets.

    1 Single frame detection of dim targets

    1.1 Target signal enhancement—HV-MSGD

    In an infrared image, the grayscale of the target pixels differs greatly from that of the surrounding pixels, producing a large discontinuity in luminance. This discontinuity can be quantified from the average grayscale of the pixels adjacent to the target [24]. This led researchers to use multi-scale grayscale methods to quantitatively analyze images and distinguish the target from its surrounding area [25-27]. These methods mainly rely on image entropy to obtain the multi-scale grayscale difference; the amount of calculation is large, and the effect on weak targets is not obvious. These attempts encouraged us to develop an improved multi-scale grayscale difference method that enhances weak signals and copes with situations in which the thermal intensity of the target is similar to that of the background.

    Small and weak targets are concentrated in a small, uniform, and compact area that is discontinuous with its surroundings. This paper therefore designs a method that measures this discontinuity to enhance weak targets. The standard deviation of the image reflects its clutter fluctuations, so an appropriate window size can be selected, by comparing the local grayscale standard deviations, to obtain the MSGD of the image.

    The image is traversed from top to bottom and left to right, and neighborhoods of several scales are placed around each pixel. As shown in Fig. 1, taking three scales as an example, N1, N2, and N3 are three differential regions surrounding the central pixel, each with its own window size. The grayscale average of the kth region is expressed as:

    v_{N_k} = \frac{1}{n_{N_k}} \sum_{(m,n)\in N_k} I(m,n)

    Figure 1.

    where k is the index of the differential region, with values k = 1, 2, 3, …, K; the set Nk represents the kth differential region; nNk is the number of pixels in region Nk; and I(m,n) is the grayscale of the pixel at (m,n) in region Nk. In practice, window sizes of 3, 5, 7, 9, and 11 are primarily used for the calculation and selection. The local standard deviation in the kth region is then:

    S_{local}^{k} = \sqrt{\frac{1}{n_{N_k}} \sum_{(m,n)\in N_k} \left(I(m,n) - v_{N_k}\right)^2}

    According to Eq. 2, for a heterogeneous region the local grayscale standard deviation is large when the window is small, whereas for a homogeneous region the local grayscale standard deviation is small in the same window. Figure 2 shows the response of different boundaries at different scales, with the peak response of the horizontal or vertical boundary appearing in a large window. In contrast, a strong peak response occurs at the point target in a small window, indicating that a small-scale window responds strongly to the point target but not to the boundary. Thus, selecting a window of the appropriate size can enhance the target signal. The window size is determined by comparing the standard deviations within the regions:

    W_{size} = \arg\min_{W_{size}} \left(S_{local}^{1}, S_{local}^{2}, \dots, S_{local}^{K}\right)
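As a concrete illustration, the window-selection rule above can be sketched as follows. This is a minimal sketch assuming a grayscale image stored as a NumPy array; the helper names are ours, not from the paper:

```python
import numpy as np

def local_mean_std(img, cy, cx, w):
    """Local mean (Eq. 1) and standard deviation (Eq. 2) of the w x w window at (cy, cx)."""
    r = w // 2
    patch = img[max(cy - r, 0):cy + r + 1, max(cx - r, 0):cx + r + 1]
    return float(patch.mean()), float(patch.std())

def select_window(img, cy, cx, sizes=(3, 5, 7, 9, 11)):
    """Eq. 3: choose the window size by comparing the local standard deviations."""
    stds = [local_mean_std(img, cy, cx, w)[1] for w in sizes]
    return sizes[int(np.argmin(stds))]
```

For an isolated bright pixel, the local standard deviation falls as the window grows, so the comparison behaves differently at point targets than in flat regions.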


    Figure 2. The response of different boundaries at different scales

    The selected window size is used to obtain the grayscale value of the predicted point with the HV-MSGD, and enhancement of the target area is achieved through the difference in response between the horizontal and vertical boundaries. The specific implementation is shown in Fig. 3. First, the vertical grayscale gradient difference is calculated by Eqs. 1-2; these two equations are then applied to the horizontal grayscale gradient difference, after which the result with the enhanced target is obtained. The specific calculation process is as follows:

    Given the selected n×n window with grayscale values f_{11}, …, f_{nn}, the vertical gradient difference of each column is first computed,

    VGD_j = \sum_{i=1}^{n}\left(f_{ij} - \frac{1}{n}\sum_{i=1}^{n} f_{ij}\right)^2, \quad j = 1, \dots, n,

    and the horizontal gradient difference is then computed over the resulting row vector [VGD_1, …, VGD_n] to give the final response:

    HVGD(x,y) = \sum_{j=1}^{n}\left(VGD_j - \frac{1}{n}\sum_{j=1}^{n} VGD_j\right)^2


    Figure 3. Calculation of the HV-MSGD weighting operator

    where f_{ij} is the grayscale value at position (i,j) of the region. The row of vertical gradient differences is obtained first, and the final target enhancement value HVGD(x,y) is then obtained by calculating the horizontal grayscale gradient difference over it. In summary, the specific processing of this algorithm is shown in the following module.
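The vertical-then-horizontal cascade described above can be sketched for a single window as follows (our illustrative reading of the equations; `hvgd` is a hypothetical helper name, not code from the paper):

```python
import numpy as np

def hvgd(window):
    """HV-MSGD response of one n x n window.

    First the vertical gradient difference of each column (sum of squared
    deviations from the column mean), then the same operation applied
    horizontally across the resulting row vector VGD_1..VGD_n.
    """
    w = np.asarray(window, dtype=float)
    vgd = ((w - w.mean(axis=0)) ** 2).sum(axis=0)   # one value per column
    return float(((vgd - vgd.mean()) ** 2).sum())   # scalar HVGD(x, y)
```

A flat window yields zero, while a window with an isolated bright center pixel yields a large response, which is the behavior the weighting operator exploits.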

    | Image   | SNRin | σin   | BF BSF | BF GSNR | TDLMS BSF | TDLMS GSNR | PM BSF | PM GSNR |
    |---------|-------|-------|--------|---------|-----------|------------|--------|---------|
    | Image 1 | 2.78  | 30.05 | 2.15   | 6.56    | 8.19      | 8.5        | 1.73   | 3.26    |
    | Image 2 | 1.77  | 42.87 | 4.08   | 13.12   | 18.72     | 7.98       | 1.33   | 3.12    |
    | Image 3 | 1.79  | 59.95 | 1.82   | 4.33    | 6.37      | 4.57       | 1.77   | 4.33    |
    | Image 4 | 1.13  | 36.73 | 1.38   | 8.19    | 11.77     | 4.92       | 0.69   | 1.5     |
    | Image 5 | 1.16  | 29.26 | 7.1    | 20.89   | 31.46     | 14.24      | 2.56   | 7.12    |

    | Image   | LCM BSF | LCM GSNR | NWIE BSF | NWIE GSNR | Our BSF | Our GSNR |
    |---------|---------|----------|----------|-----------|---------|----------|
    | Image 1 | 2.63    | 7.7      | 4.55     | 1.59      | 13.48   | 40.99    |
    | Image 2 | 2.63    | 8.89     | 4.43     | 18.94     | 21.33   | 71.37    |
    | Image 3 | 1.61    | 2.98     | 2.72     | 16.98     | 11.73   | 27.53    |
    | Image 4 | 1.09    | 1.92     | 8.46     | 7.82      | 20.63   | 12.65    |
    | Image 5 | 4.41    | 4.59     | 21.52    | 21.32     | 121.92  | 131      |

    Table 1. BSF and GSNR of five images processed by different methods

    1.2 Background estimation—BF

    Bilateral filtering can smooth the image while estimating its background. The filter employs two weights: a filter coefficient determined by the geometric spatial distance, and a filter coefficient determined by the grayscale similarity. Considering both the spatial distance and the grayscale difference between pixels, the two coefficients are expressed as follows:

    c(x,y) = \exp\left(-\frac{x^2 + y^2}{2\sigma_d^2}\right)

    s(x,y) = \exp\left(-\frac{\left(f(\xi,\eta) - f(x,y)\right)^2}{2\sigma_s^2}\right)

    The weight coefficient is composed of the two:

    w_{ds}(x,y) = c(x,y)\, s(x,y)

    where σd and σs are the bandwidth coefficients of the two filter kernels, and f(ξ,η) is the grayscale value of the neighboring pixel (ξ,η) in the N×N window centered at (x,y). The background estimation can then be obtained:

    h(x,y) = \left(\sum_{N\times N} w_{ds}(x,y)\right)^{-1} \sum_{N\times N} f(x,y)\, w_{ds}(x,y)

    Subtracting the result of the background estimation from the original image, the background suppression image is obtained as follows:

    out(x,y) = f(x,y) - h(x,y)

    where out(x,y) is the final background suppression image. It is worth noting that the image needs to be normalized during the calculation process. The candidate targets are then extracted by GLTS.
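The background estimation and subtraction steps can be sketched as a brute-force bilateral filter (a minimal sketch; the window size is a placeholder, and the image is assumed to be normalized to [0, 1] as the text requires):

```python
import numpy as np

def bilateral_background(img, n=5, sigma_d=0.8, sigma_s=0.3):
    """Bilateral background estimate h(x, y): spatial weight c times range weight s."""
    img = np.asarray(img, dtype=float)
    r = n // 2
    pad = np.pad(img, r, mode='edge')
    h = np.empty_like(img)
    # spatial weight c, fixed for every pixel of the image
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    c = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma_d ** 2))
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            patch = pad[y:y + n, x:x + n]
            s = np.exp(-(patch - img[y, x]) ** 2 / (2 * sigma_s ** 2))  # range weight
            w = c * s
            h[y, x] = (patch * w).sum() / w.sum()                       # normalized estimate
    return h

def suppress_background(img):
    """Residual image out(x, y) = f(x, y) - h(x, y) containing the candidate targets."""
    return np.asarray(img, dtype=float) - bilateral_background(img)
```

On a flat image the residual is zero; on an isolated bright pixel the residual stays positive, which is what the subsequent segmentation thresholds.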

    1.3 Threshold segmentation

    The target and its background are discontinuous[15-16]. The image is processed by HV-MSGD weighted BF, which enlarges the discontinuity between the target region and the background region. This method successfully enhances the target, and restrains the background clutter and noise effectively. The overall process to detect a dim small target is illustrated in Fig. 4.


    Figure 4. The overall detection process for a dim small target

    The threshold segmentation process adopts a method that combines global and local threshold segmentation to determine dim small targets. The global threshold segmentation is an adaptive threshold segmentation for the entire image and is obtained as follows:

    T_G = t \times \sigma + m

    Seg_G(i,j) = \begin{cases} 1, & Img_{pro}(i,j) > T_G \\ 0, & Img_{pro}(i,j) \le T_G \end{cases}

    where σ is the standard deviation of the image, m is the average of the image, and t is an odd number greater than 3. The global threshold segmentation image is then obtained by Eq. 11, where Imgpro(i,j) is the grayscale value at point (i,j) of the image processed by HV-MSGD weighted BF. Local threshold segmentation divides the image into regions and calculates the ith segmentation threshold for each region, using Eq. 12:

    T_L^i = t \times \sigma_i + m_i

    Seg_L(i,j) = \begin{cases} 1, & Img_{pro}(i,j) > T_L^i \\ 0, & Img_{pro}(i,j) \le T_L^i \end{cases}

    where σi is the standard deviation of the ith partition region, i = 1, 2, 3, …, and mi is the average of the ith partition region; the two equations use the same threshold t. The local threshold segmentation image is then obtained by Eq. 13, and the complete single-frame detection of a small target combines the two segmentation results:

    Seg = Seg_G \cap Seg_L
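A minimal sketch of the combined segmentation, reading the combination as a logical AND of the global and local maps (the block layout and parameter values are illustrative, not the paper's):

```python
import numpy as np

def threshold_map(img, t):
    """Binary map from the adaptive threshold T = t * sigma + m."""
    return img > t * img.std() + img.mean()

def glts(img, t, blocks=2):
    """Intersect the global map with per-block local maps."""
    img = np.asarray(img, dtype=float)
    seg = threshold_map(img, t)                  # global segmentation
    h, w = img.shape[0] // blocks, img.shape[1] // blocks
    local = np.zeros_like(seg)
    for by in range(blocks):
        for bx in range(blocks):
            ys = slice(by * h, (by + 1) * h)
            xs = slice(bx * w, (bx + 1) * w)
            local[ys, xs] = threshold_map(img[ys, xs], t)  # local segmentation
    return seg & local
```

A pixel survives only if it exceeds both the global threshold and the threshold of its own block, which is what suppresses isolated strong clutter.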

    | Sequence | Frames | avg σin | avg SNRin | BF BSF | BF GSNR | TDLMS BSF | TDLMS GSNR | PM BSF | PM GSNR |
    |----------|--------|---------|-----------|--------|---------|-----------|------------|--------|---------|
    | Seq 1    | 75     | 25.72   | 2.23      | 3.13   | 4.36    | 1.01      | 3.58       | 3.1    | 2.31    |
    | Seq 2    | 90     | 8.36    | 0.56      | 1.22   | 5.72    | 2.42      | 5.12       | 1.28   | 4.56    |
    | Seq 3    | 65     | 6       | 1.21      | 9.3    | 10.68   | 2.55      | 8.24       | 6.16   | 7.37    |
    | Seq 4    | 90     | 32.78   | 2.09      | 2.64   | 9.49    | 5.71      | 7.61       | 1.19   | 4.01    |
    | Seq 5    | 90     | 11.53   | 1.17      | 1.07   | 6.77    | 1.2       | 4.86       | 1.95   | 3.89    |
    | Average  | -      | 17.22   | 1.44      | 3.12   | 7.32    | 2.64      | 5.82       | 2.51   | 4.33    |

    | Sequence | LCM BSF | LCM GSNR | NWIE BSF | NWIE GSNR | Our BSF | Our GSNR |
    |----------|---------|----------|----------|-----------|---------|----------|
    | Seq 1    | 3.1     | 1.27     | 1.98     | 6.04      | 5.38    | 33.97    |
    | Seq 2    | 1.01    | 2.02     | 2.40     | 9.42      | 5.46    | 15.64    |
    | Seq 3    | 4.76    | 3.13     | 1.34     | 13.73     | 14.71   | 55.18    |
    | Seq 4    | 1.17    | 7.94     | 3.17     | 10.25     | 23.26   | 81.8     |
    | Seq 5    | 2.94    | 1.61     | 1.74     | 13.1      | 7.47    | 47.51    |
    | Average  | 2.45    | 3.27     | 2.13     | 10.51     | 11.26   | 36.47    |

    Table 2. Average BSF and GSNR of the five sequences

    2 Trajectory detection

    On the basis of the single-frame detection procedure described in Sec. 1, the effect of the proposed method is further verified by combining it with an improved unscented particle filter (UPF): the target position is predicted by probabilistic data association (PDA) to finally obtain the motion trajectory. According to the actual moving speed of the target, the frame rate, and other factors, the search range around the predicted position is set to 10 pixels to maintain the tracking accuracy. For the improved UPF, single-frame detection is performed as described in Sec. 1 to obtain the positions of suspected target points; these points are then evaluated probabilistically to obtain the predicted value, and the final target position is obtained according to Ref. [28]. For the PDA, the association probability is obtained as follows:

    \beta_i(k) = \frac{e_i(k)}{b(k) + \sum_{j=1}^{m_k} e_j(k)}, \quad i = 1, 2, \dots, m_k

    e_i(k) = \exp\left\{-\frac{1}{2}\left[Z(k) - \hat{Z}(k|k-1)\right]^T X^{-1}(k)\left[Z(k) - \hat{Z}(k|k-1)\right]\right\}

    b(k) = \lambda (2\pi)^{m/2} \left|X(k)\right|^{1/2} \left(1 - P_d P_G\right)/P_d

    where Z(k) represents the set of valid measurements falling within the target tracking gate at time k, mk is the number of effective measurements at time k, X(k) is the innovation covariance matrix, Pd is the detection probability with a value of 1, and PG is the gate probability with a value of 0.97. The steps of the process are shown in Fig. 5 and the specific algorithm is described in Algorithm 3.
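The association step can be sketched as follows (a minimal sketch; λ is an illustrative clutter density, and the innovations and covariance X(k) would come from the UPF prediction):

```python
import numpy as np

def pda_weights(innovations, X, lam=1e-4, Pd=1.0, Pg=0.97):
    """Association probabilities beta_i(k) for a set of candidate measurements.

    innovations: one residual Z_i(k) - Zhat(k|k-1) per candidate.
    X:           innovation covariance matrix X(k).
    """
    X = np.atleast_2d(np.asarray(X, dtype=float))
    Xinv = np.linalg.inv(X)
    m = X.shape[0]
    e = np.array([np.exp(-0.5 * np.asarray(v, float) @ Xinv @ np.asarray(v, float))
                  for v in innovations])                       # likelihood of each candidate
    b = lam * (2 * np.pi) ** (m / 2) * np.sqrt(np.linalg.det(X)) \
        * (1 - Pd * Pg) / Pd                                   # clutter/miss term
    return e / (b + e.sum())                                   # association probabilities
```

The candidate closest to the prediction receives the largest weight, and the weights sum to slightly less than one because b(k) reserves probability for the no-target hypothesis.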


    Figure 5. Flow chart of trajectory detection

    | Method     | Seq 1  | Seq 2  | Seq 3  | Seq 4  | Seq 5  | Average |
    |------------|--------|--------|--------|--------|--------|---------|
    | avg SNRin  | 2.23   | 0.56   | 1.21   | 2.09   | 1.17   | 1.44    |
    | BF         | 0.9200 | 0.9470 | 0.8704 | 0.9253 | 0.9525 | 0.9230  |
    | TDLMS      | 0.7202 | 0.7427 | 0.4342 | 0.4765 | 0.9334 | 0.6614  |
    | LCM        | 0.6374 | 0.8439 | 0.2871 | 0.9056 | 0.6738 | 0.6696  |
    | PM         | 0.5551 | 0.7645 | 0.3061 | 0.8577 | 0.6338 | 0.6234  |
    | NWIE       | 0.8477 | 0.9167 | 0.5578 | 0.8614 | 0.9507 | 0.8269  |
    | Our method | 0.9320 | 0.9577 | 0.9349 | 0.9847 | 0.9761 | 0.9571  |

    Table 3. AUC of the five sequences for different algorithms

    3 Analysis of experimental results

    3.1 Experimental environment and images

    The effectiveness of the infrared dim small target detection algorithm based on HV-MSGD weighted BF was verified by experimental comparison on five sets of real infrared image sequences. The operating environment was MATLAB 2014b on a Windows 10 (64-bit) system with an Intel Core i5 CPU at 2.5 GHz and 8 GB of memory. The parameters used in the experiment were σd = 0.8 and σs = 0.3, with the threshold multiplier t chosen per scene; all three parameters need to be determined according to the images of different scenes. σd controls the spatial distance: when its value is large, the effective spatial range is larger, edge points also obtain larger weights, and the denoising effect is obvious. σs controls the grayscale change in the image: the larger the grayscale difference, the higher the weight obtained, but small edges will be affected and the edge-preserving effect degrades. t mainly affects the result of threshold segmentation: when t is too small, the false alarm rate increases; when it is too large, the true target may be removed, resulting in a missed detection. Therefore, these values must be selected according to the actual conditions; in this manuscript they were obtained by experiment for each specific image. Figure 6 shows the results of processing five random images with our method, where (a) is the input image, (b) is the 3D view of the input image, (c) is the image processed by HV-MSGD weighted BF, (d) is the 3D view of (c), (e) is the threshold segmentation image, and (f) is its 3D view. The results in Fig. 6 show that the background is obviously suppressed by our method and the target is strongly enhanced; even against a background with strong edges, the target is well detected and the threshold segmentation is excellent. In addition, we compared the ability of BF, TDLMS, PM, LCM, NWIE, and our method to detect the target. The resulting images are shown in Fig. 7: the background signal remains strong after processing by the other algorithms, but is significantly suppressed by our method.


    Figure 6. Results of five images processed by our method: (a) input image, (b) 3D view of the input image, (c) image processed by HV-MSGD weighted BF, (d) 3D view of (c), (e) threshold segmentation image obtained from (c), (f) 3D view of (e)


    Figure 7. Background suppression results of different algorithms in different scenarios: (a) original image, (b) BF filtering result, (c) TDLMS filtering result of Ref. [15], (d) PM filtering result, (e) LCM filtering result of Ref. [24], (f) NWIE filtering result, (g) filtering result of our method

    3.2 Evaluation method

    The performance of these methods was compared by using three indicators, i.e., the gain of the signal-to-noise ratio (GSNR), background suppression factor (BSF), and receiver operating characteristic (ROC) curve for the quantitative evaluation of the background suppression and target detection performance. The indicators are defined as follows:

    GSNR = \frac{SNR_{out}}{SNR_{in}}

    BSF = \frac{\sigma_{in}}{\sigma_{out}}

    SNR = \frac{\left|\mu_{target} - \mu_{background}\right|}{\sigma_{noise}}

    where SNRin and SNRout are the local SNR of the target before and after background suppression, σin and σout are the standard deviations of the local background area of the target before and after background suppression, μtarget and μbackground are the grayscale peak values of the target and background areas, and σnoise is the standard deviation of the local background area.
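These indicators are straightforward to compute from target and background masks; a sketch with our own helper names:

```python
import numpy as np

def local_snr(img, target_mask, background_mask):
    """Local SNR: |peak(target) - peak(background)| / std(local background)."""
    t = img[target_mask]
    b = img[background_mask]
    return abs(float(t.max()) - float(b.max())) / float(b.std())

def gsnr(snr_in, snr_out):
    """Gain of the signal-to-noise ratio."""
    return snr_out / snr_in

def bsf(sigma_in, sigma_out):
    """Background suppression factor."""
    return sigma_in / sigma_out
```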

    The evaluation index (Eqs. 18-19) was used to compute BSF and GSNR of the image shown in Fig. 7, and the results are listed in Table 1. The BSF and GSNR results of the images processed by our method are superior to those obtained by other methods. The SNR values used as input were 2.78, 1.77, 1.79, 1.13 and 1.16, respectively. After image processing, the BSF values were 13.48, 21.33, 11.73, 20.63, and 121.92, and the GSNR values were 40.09, 71.37, 27.53, 12.65, and 131, respectively.


    The ROC curve describes the mutual constraint between the probability of detection (Pd) and the false alarm rate (FAR). The value of Pd for different values of FAR can be obtained by changing the detection threshold in Eqs. 10 and 12. Pd and the false alarm rate Pf are calculated as follows:

    P_d = \frac{N_{target}}{T_{target}}

    P_f = \frac{N_{pixel}}{T_{pixel}}

    where Ntarget is the number of detected targets, Ttarget is the number of actual targets, Npixel is the number of falsely detected pixels, and Tpixel is the total number of pixels in the detection images.
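One operating point of the ROC curve then follows directly:

```python
def roc_point(n_detected, t_targets, n_false_pixels, t_pixels):
    """Return (Pf, Pd) for one detection-threshold setting."""
    return n_false_pixels / t_pixels, n_detected / t_targets
```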

    Based on the description of the ROC, the ROC curves of the five sequences are plotted in Fig. 8, which clearly shows that the Pd values obtained using our method are higher than those obtained with the other algorithms. The five sets of real sequence images were processed at FARs of 5×10⁻⁴, 1×10⁻³, 1×10⁻³, 1×10⁻⁵, and 7×10⁻⁶, yielding Pd values of 94.4%, 92.2%, 91.3%, 95.6%, and 96.7%, respectively. In addition, we compared the average BSF and GSNR of these five sequences for the different algorithms; the results are presented in Table 2. The five sequences contain 75, 90, 65, 90, and 90 images, respectively. Sequence 1 has a thick cloud background, sequence 2 a bright cloud background, sequence 3 a bright edge background, sequence 4 a blocky cloud background, and sequence 5 a bright and thick cloud background. In the experiment, the average SNRs were 2.23, 0.56, 1.21, 2.09, and 1.17, respectively. After processing with our method, the average BSFs were 5.38, 5.46, 14.71, 23.26, and 7.47, and the average GSNRs were 33.97, 15.64, 55.18, 81.8, and 47.51, respectively. The area under the curve (AUC) is defined as the area enclosed by the ROC curve and the coordinate axis. Because the AUC intuitively characterizes the superiority or inferiority of each algorithm, we calculated the AUC of the five sequences for the different algorithms; these results are provided in Table 3. Moreover, we further verified the effectiveness of our method by using the trajectory detection algorithm described in Sec. 2 to compute the trajectory. Figures 9-10 show the trajectory images and the bias pixels, which are within two pixels. The conclusion that can be drawn from Figs. 9-10 is that the proposed method achieves high probabilities of detection as well as low false alarm rates for different target movements.


    Figure 8. ROC of the five sequences



    Figure 9. Track of five sequences


    Figure 10. Histograms of detected bias pixels obtained by using our method: (a) histograms of horizontal detected bias pixels of the five sequences, (b) histograms of vertical detected bias pixels of the five sequences

    4 Conclusions

    This study investigated the problem of detecting small targets that are weak infrared emitters. Our efforts mainly focused on increasing the discontinuity between the target region and the background region by using HV-MSGD, which improved the separability of the target from the background. In addition, background edge information was suppressed by BF, and the target was finally extracted by the GLTS algorithm. A comparison with other methods on five images showed that our method is superior in both GSNR and BSF, with an enhancement effect 6-30 times that of the other methods. Comparison of the average BSF and GSNR over five sequences comprising a total of 410 images also confirmed our method to be 5-12 times more effective than other methods. Counting the probability of detection over the five sequences likewise indicated our method to be significantly more accurate than the others, with an average detection probability of 95.71%. The proposed method works well on images with obvious grayscale fluctuations between adjacent regions; for images with small fluctuations, it needs further improvement. In future research, we plan to develop a more effective detection algorithm for multiple targets against complex backgrounds and occlusion, based on the single-frame detection method proposed in this paper.

    References

    [1] C Q Gao, D Y Meng, Y Yang. Infrared patch-image model for small target detection in a single image. IEEE Transactions on Image Processing, 22, 4996-5009(2013).

    [2] K P Luo. Space-based infrared sensor scheduling with high uncertainty: Issues and challenges. Syst. Eng., 18, 102-113(2015).

    [3] F Gao, H Li, T Li. Infrared small target detection in compressive domain. Electron. Lett., 50, 510-512(2014).

    [4] C Q Gao, T Q Zhang, Q Li. Small infrared target detection using sparse ring representation. IEEE Aerospace and Electronic Systems Magazine, 27, 21-30(2012).

    [5] X Yang, Y P Zhou, D K Zhou. A new infrared small and dim target detection algorithm based on multi-directional composite window. Infrared Phys. Technol., 71, 402-407(2015).

    [6] X P Shao, H Fan, G X Lu. An improved infrared dim and small target detection algorithm based on the contrast mechanism of human visual system. Infrared Phys. Technol., 55, 403-408(2012).

    [7] P Wang, J W Tian, C Q Gao. Infrared small target detection using directional high pass filters based on LS-SVM. Electron. Lett., 45, 156-158(2009).

    [10] Y Li, Y Song, Y F Zhao. An infrared target detection algorithm based on lateral inhibition and singular value decomposition. Infrared Physics & Technology, 85, 238-245(2017).

    [11] Z M Chen, M C Tian, Y M Bo. Improved infrared small target detection and tracking method based on new intelligence particle filter. Computational Intelligence, 34, 917-938(2018).

    [12] F Zhao, H Z Lu, Z Y Zhang. Complex background suppression based on fusion of morphological Open filter and nucleus similar pixels bilateral filter. Infrared Physics & Technology, 55, 454-461(2012).

    [13] Spatial and temporal bilateral filter for infrared small target enhancement. Infrared Physics & Technology, 63, 42-53(2014).

    [14] H B Tian. Infrared small target detection based on bilateral filter and Bhattacharyya distance. Nuclear Electronics & Detection Technology, 34, 1159-1163(2014).

    [15] H Deng, Y T Wei, M W Tong. Background suppression of small target image based on fast local reverse entropy operator. IET Computer Vision, 7, 405-413(2013).

    [16] H Deng, J G Liu, Z Chen. Infrared small target detection based on modified local entropy and EMD. Chinese Optical Letters, 8, 24-28(2010).

    [17] C J Li, Y Wei, Z L Shi. A small target detection algorithm based on multi-scale energy cross. IEEE Int. Conf. Robotics Intell. Syst. Signal Process, 2, 1191-1196(2003).

    [18] X Z Bai, Y G Bi. IEEE Transactions on Geoscience & Remote Sensing, 1-15(2018).

    [19] K Shang, X Sun, J W Tian. Infrared small target detection via line-based reconstruction and entropy-induced suppression. Infrared Physics & Technology, 76, 75-81(2016).

    [20] Z Chen, S Luo, T Xie. A novel infrared small target detection method based on BEMD and local inverse entropy. Infrared Physics & Technology, 66, 114-124(2014).

    [21] Y Mao, M Zheng, W Jia. Analysis of small target detection algorithm based on image gray entropy(2016).

    [22] G H Peng, H Chen, Q Wu. Infrared small target detection under complex background. Advanced Materials Research, 346, 615-619(2011).

    [23] X J Qu, H Chen, G H Peng. Novel detection method for infrared small targets using weighted information entropy. Journal of Systems Engineering and Electronics, 23, 838-842(2012).

    [24] C L Philip, H Li, Y T Wei. A local contrast method for small infrared target detection. IEEE Trans. on Geoscience & Remote Sensing, 52, 574-581(2013).

    [25] H Deng, X P Sun, M L Liu. Entropy-based window selection for detecting dim and small infrared targets. Pattern Recognition, 61, 66-77(2017).

    [26] H Deng, X P Sun, M L Liu. Infrared small-target detection using multiscale gray difference weighted image entropy. IEEE Transactions on Aerospace & Electronic Systems, 52, 60-72(2016).

    [27] G Y Wang. Efficient method for multiscale small target detection from a natural scene. Opt. Eng, 35, 761-768(1996).

    [28] X P Huang, Y Wang. Kalman filter principle and application: MATLAB simulation. Publishing House of Electronics Industry(2015).
