• Journal of Infrared and Millimeter Waves
  • Vol. 43, Issue 2, 254 (2024)
Zai-Ping LIN1, Yi-Hang LUO1, Bo-Yang LI1, Qiang LING1, Qing ZHENG2, Jing-Yi YANG3, Li LIU1, and Jing WU1,*
Author Affiliations
  • 1College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China
  • 2Department of Military Representative Bureau of Aerospace Systems, Beijing 100000, China
  • 3Shanghai Institute of Satellite Engineering, Shanghai 200000, China
    DOI: 10.11972/j.issn.1001-9014.2024.02.015

    Abstract

    Infrared small target denoising is widely used in military and civilian fields. Existing deep learning-based methods are specially designed for optical images and tend to over-smooth the informative image details, thus losing the response of small targets. To both denoise and maintain informative image details, this paper proposes a gradient-aware channel attention network (GCAN) for infrared small target image denoising before detection. Specifically, we use an encoder-decoder network to remove the additive noise of the infrared images. Then, a gradient-aware channel attention module is designed to adaptively enhance the informative high-gradient image channel. The informative target region with high-gradient can be maintained in this way. After that, we develop a large dataset with 3981 noisy infrared images. Experimental results show that our proposed GCAN can both effectively remove the additive noise and maintain the informative target region. Additional experiments of infrared small target detection further verify the effectiveness of our method.

    Introduction

    With the rapid development of infrared imaging technology, infrared imaging systems have been widely used in marine resource utilization, high-precision navigation, and ecological environment monitoring [1-6]. Since IR imaging devices are generally applied to long-range imaging, the imaging quality of an infrared imaging system is easily disturbed by harsh environments, including the internal imaging-device environment (e.g., thermal noise of amplifiers and detectors) and the external natural environment (e.g., clouds, low-light conditions, and atmospheric perturbations) [7]. Therefore, noises with different characteristics generally interact with each other and exhibit a complex distribution in IR images. To simplify the mixed noise, one common assumption is that the noise in IR images is additive white Gaussian noise (AWGN) with standard deviation σ [8]. As shown in Fig. 1(a1-a3), IR images of the same scene can be corrupted by different levels of noise caused by the varied conditions of the imaging device and the external environment. The detection results generated by DNANet [9] under different levels of noise are shown in Fig. 1(b1-b3). They demonstrate that the additive noise not only degrades image quality but also brings an obvious performance decrease for the subsequent detection task. In contrast, as shown in Fig. 1(c-d), our denoising method helps to recover a clean image from the noisy one and thus alleviates the performance decrease of the target detection task.

    Figure 1. (a1)-(a3) Visual results of noisy input images; (b1)-(b3) detected results without denoising; (c1)-(c3) denoised images by our method; (d1)-(d3) detected results with denoising

    To alleviate the negative effect caused by the additive noise, numerous traditional methods have been proposed, including filtering-based methods [10], sparse-representation-based methods [11-12], and low-rank-based methods [13]. Although the above works have achieved promising image denoising results, they are essentially manually designed methods that heavily rely on prior knowledge and hand-crafted features. When the characteristics of images (e.g., the signal-to-clutter ratio (SCR)) change dramatically, traditional methods can hardly handle such changeable scenarios with fixed hyper-parameters. More robust solutions should be introduced to tackle such challenges.

    Different from the previous model-driven traditional methods, convolutional neural networks (CNNs) can achieve high-performance image denoising in a data-driven manner and have yielded promising results in optical image denoising. Jain et al. [14] proposed the first CNN-based denoising method: a simple four-layer CNN structure that achieved significant improvements over traditional denoising methods. Due to the simple and shallow CNN structure, its denoising performance is limited. Then, Zhang et al. proposed the denoising convolutional neural network (DnCNN) [15]. DnCNN recovers the latent clean image from the noisy observation through a residual learning strategy. Thanks to the powerful representation ability introduced by much deeper CNN layers, DnCNN achieves better noise reduction than the best traditional method [10] and previous CNN-based methods [14]. After that, Liang et al. designed a strong baseline model, SwinIR [21], for image restoration based on the Swin Transformer, which achieves higher performance on real noisy scenes. However, this performance improvement relies on a huge number of optical images; the capacity of IR datasets is limited and can hardly drive transformer-based networks. Moreover, the IR imaging system is generally used for long-distance imaging to capture small and dim targets that are not easily perceived by optical devices. Therefore, directly transferring an existing optical denoising method may over-smooth the small targets and thus lose their response, which is unacceptable for subsequent high-level target detection and recognition tasks.

    To both denoise IR images and maintain the response of small targets, we propose a novel infrared image denoising method named gradient-aware channel attention network (GCAN). We design an encoder-decoder-based network with residual connections to remove the additive noise of infrared images. Then, a gradient-based channel attention module (GCAM) is designed and embedded into the residual connection to adaptively enhance the informative high-gradient image channels and thus preserve the informative details. In this way, informative target regions with a high gradient are preserved while the additive noise of IR images is removed.

    The contributions of this paper can be summarized as follows:

    1) An encoder-decoder denoising framework and a gradient-based channel attention module are proposed to remove the additive noise and adaptively enhance the informative image channels, respectively.

    2) We develop the NUDT-IRSTDn dataset with various SCR values based on our previous NUDT-SIRST dataset. Both IR image denoising performance and the corresponding influence on subsequent target detection tasks can be evaluated on it.

    3) The experimental results of both denoising and high-level object detection demonstrate that our GCAN not only achieves high denoising performance compared to other state-of-the-art methods, but also effectively keeps the performance of subsequent detection tasks stable under degraded imaging conditions.

    1 Methodology

    1.1 Denoising model

    Assuming that $X \in \mathbb{R}^{m \times n}$ is a noise-corrupted image and $Y \in \mathbb{R}^{m \times n}$ is the corresponding clean image, the relationship between them can be formulated as:

    $X = \delta(Y)$,

    where $\delta(\cdot)$ denotes the complex degradation process involving internal and external IR imaging conditions.

    The noise reduction process aims to recover the clean images from the degraded images. This process can be transformed into seeking a function $f$ that minimizes the mean square error (MSE) between $f(X)$ and $Y$, which can be described as:

    $\arg\min_{f} \left\| f(X) - Y \right\|^{2}$,

    where $f$ is regarded as the optimal approximation of $\delta^{-1}$, and $f(X)$ denotes the recovered clean image.
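
    As a minimal illustration of this objective, the following PyTorch sketch computes the MSE between the network output $f(X)$ and the clean reference $Y$; the `denoiser` module here is only a placeholder for the network described in Sect. 1.2.

```python
import torch
import torch.nn as nn

def denoising_objective(denoiser: nn.Module, x_noisy: torch.Tensor, y_clean: torch.Tensor) -> torch.Tensor:
    """Return the MSE between the recovered image f(X) and the clean image Y."""
    recovered = denoiser(x_noisy)                       # f(X): recovered clean image
    return nn.functional.mse_loss(recovered, y_clean)   # ||f(X) - Y||^2, mean-reduced
```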

    1.2 Infrared image denoising network

    1) Overall architecture: In this section, we introduce our infrared image denoising network (GCAN) in detail. First, we follow the encoder-decoder-based architecture and combine it with residual connections to remove the varied additive noise and initially pass image details to the top layers. It is worth noting that the pooling layers and the ReLU layers are removed before the summation with residuals to avoid losing details. Then, we propose a gradient-based channel attention module to maintain the potential target regions (e.g., high-gradient regions) while denoising images. The overall architecture of the GCAN is shown in Fig. 2.

    Figure 2. An illustration of the proposed gradient-aware channel attention network (GCAN) for infrared small target image denoising before detection

    2) Encoder-decoder structure: The encoder-decoder structure consists of several stacked Conv-Blocks and Deconv-Blocks. The encoder part is designed to suppress image noise from low level to high level step by step while preserving the informative details of the input images. As shown in Fig. 2(b), the preprocessed IR image $X$ is first fed into sequential convolutional blocks (Conv-Blocks $C_{th}$, $th = 1, 2, \ldots, N$). After the stacked Conv-Blocks, the image $X$ is transformed into a feature space, and the output of each Conv-Block is a feature map $F_{C_{th}} \in \mathbb{R}^{C_{th} \times H \times W}$ ($th \in \{1, 2, \ldots, N\}$). Then, the data flow through the Deconv-Blocks ($D_{th}$, $th = 1, 2, \ldots, N$) follows the rule of FILO (First In, Last Out). The feature from the last Conv-Block, $F_{C_N} \in \mathbb{R}^{C_N \times H \times W}$, is fed to the first Deconv-Block to generate $F_{D_1} \in \mathbb{R}^{C_1 \times H \times W}$. Finally, $F_{C_1} \in \mathbb{R}^{C_1 \times H \times W}$ and $F_{D_{N-1}} \in \mathbb{R}^{C_{N-1} \times H \times W}$ are fed into $D_N$ to generate the recovered image $f(X)$. The output of $C_{th}$ can be formulated as:

    $F_{C_{th}} = w_j * \mathrm{ReLU}(w_i * X_{th-1} + b_i) + b_j$,

    Each Deconv-Block is symmetric to the corresponding Conv-Block, and the output of $D_{th}$ can be formulated as:

    $F_{D_{th}} = w_j' \circledast \mathrm{ReLU}(w_i' \circledast X_{th-1} + b_i') + b_j'$,

    where $th$ ($th \in \{1, \ldots, N\}$) is the index of the block, and $w_i$ and $b_i$ denote the weights and biases of the $i$-th ($i \in \{1, \ldots, I\}$) convolutional layer, respectively. $*$ and $\circledast$ represent the convolution and deconvolution operators, respectively.

    $X_0$ is the input image, and $X_k$ ($k > 0$) is the feature extracted from the previous layers. $\mathrm{ReLU}(X) = \max(0, X)$ is the activation function.
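
    The two-convolution block above can be sketched in PyTorch as follows; the kernel size, padding, and the use of transposed convolutions as the "deconvolution" are our assumptions, since the text only fixes the Conv-ReLU-Conv pattern and its symmetric counterpart.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Encoder block: F = w_j * ReLU(w_i * X + b_i) + b_j (no trailing ReLU)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv2(torch.relu(self.conv1(x)))

class DeconvBlock(nn.Module):
    """Decoder block, symmetric to ConvBlock but built from transposed convolutions."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.deconv1 = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.deconv2 = nn.ConvTranspose2d(out_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.deconv2(torch.relu(self.deconv1(x)))
```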

    3) Residual connections: The residual connections are used to avoid vanishing gradients as the network goes deeper, and they also serve as a simple detail-recovery structure that connects matched Conv-Blocks and Deconv-Blocks to propagate the informative details from low-level to high-level features. As shown in Fig. 2(c), after the element-wise sum of the features $F_{D_1} \in \mathbb{R}^{C_1 \times H \times W}$ and $F_{C_{N-1}} \in \mathbb{R}^{C_{N-1} \times H \times W}$, the obtained map is fed into the next Deconv-Block $D_2$ to generate the same-scale feature map $F_{D_2} \in \mathbb{R}^{C_2 \times H \times W}$. A sketch of how encoder features can be paired with the decoder in FILO order is given below.
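
    Building on the ConvBlock/DeconvBlock sketch above, the following assembly illustrates the FILO pairing and the element-wise sums on the residual connections; the block count and channel widths are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """Encoder-decoder backbone with symmetric residual (skip) connections (sketch)."""
    def __init__(self, channels=(1, 32, 64, 128)):
        super().__init__()
        self.encoders = nn.ModuleList(
            [ConvBlock(channels[i], channels[i + 1]) for i in range(len(channels) - 1)]
        )
        self.decoders = nn.ModuleList(
            [DeconvBlock(channels[i + 1], channels[i]) for i in reversed(range(len(channels) - 1))]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = []
        for enc in self.encoders:
            x = enc(x)
            feats.append(x)          # push encoder features (First In ...)
        feats.pop()                  # the deepest feature already feeds the first decoder
        for dec in self.decoders:
            x = dec(x)
            if feats:
                x = x + feats.pop()  # ... Last Out: element-wise sum with matched encoder feature
        return x                     # recovered image f(X)
```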

    4) Gradient-based channel attention module (GCAM): To avoid over-smoothing the informative small target regions, we design a GCAM, as shown in Fig. 2(d), to adaptively enhance the informative image channels and thereby strengthen the target regions with high gradient. GCAM enhances details through a feature rescaling strategy. Inspired by no-reference image quality metrics, we use the average gray to represent the amount of information in a feature map and the average gradient to describe the amount of high-gradient detail information. GCAM takes the output of the first Conv-Block, $F_{C_1} \in \mathbb{R}^{C_1 \times H \times W}$, as input and computes $\mathrm{Gray}_K$ and $\mathrm{Grad}_K$ for the $K$-th channel $I_K$ of $F_{C_1}$. The Gray operation and the Grad operation are calculated as follows:

    $\mathrm{Gray}_K = \sum_{i=1}^{M}\sum_{j=1}^{N} I_K(i, j)$,

    $\mathrm{Grad}_K = \sum_{i=1}^{M}\sum_{j=1}^{N} \sqrt{\dfrac{\left(I_K(i+1, j) - I_K(i, j)\right)^2 + \left(I_K(i, j+1) - I_K(i, j)\right)^2}{2}}$,

    where $M$ and $N$ represent the length and width of the image, respectively. Then $\mathrm{Gray}_K$ and $\mathrm{Grad}_K$ are fed to a mean operation to generate the average gray $A\mathrm{Gray}_K$ and the average gradient $A\mathrm{Grad}_K$, respectively. After the element-wise multiplication $AA_K = A\mathrm{Gray}_K \odot A\mathrm{Grad}_K$, GCAM can adaptively enhance the input feature map along the channel dimension with the weights $AA_K$.
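
    A minimal PyTorch sketch of this channel rescaling, under our reading of the equations above, is given below; any additional normalization of the weights (e.g., a sigmoid) is not specified in the text and is omitted here.

```python
import torch
import torch.nn as nn

class GCAM(nn.Module):
    """Gradient-based channel attention (sketch): weight each channel by its
    average gray times its average gradient magnitude, then rescale the input."""
    def forward(self, f: torch.Tensor) -> torch.Tensor:   # f: (B, C, H, W)
        # Average gray per channel.
        a_gray = f.mean(dim=(2, 3))                        # (B, C)

        # Finite-difference gradients along height and width.
        dy = f[:, :, 1:, :-1] - f[:, :, :-1, :-1]          # I(i+1, j) - I(i, j)
        dx = f[:, :, :-1, 1:] - f[:, :, :-1, :-1]          # I(i, j+1) - I(i, j)
        grad = torch.sqrt((dy ** 2 + dx ** 2) / 2 + 1e-12) # per-pixel gradient magnitude
        a_grad = grad.mean(dim=(2, 3))                     # average gradient per channel (B, C)

        # Per-channel weights AA_K and feature rescaling.
        weights = a_gray * a_grad
        return f * weights.unsqueeze(-1).unsqueeze(-1)
```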

    2 The NUDT-IRSTDn dataset

    2.1 Motivation

    A high-quality dataset is essential for data-driven CNN-based methods. However, existing denoising methods are essentially data-driven and are evaluated on their in-house datasets [19]. Inspired by the single-frame infrared small target detection dataset NUDT-SIRST [9], we designed a large-scale infrared image dataset (namely, NUDT-IRSTDn) with different levels of noise to further explore the influence of different levels of noise on high-level tasks (e.g., target detection).

    These noisy images are manually synthesized by adding white Gaussian noise to clean long-wave-band IR images, whose wavelength lies between 8 μm and 14 μm. As shown in Table 1, three noise levels are chosen (i.e., σ = 0.05, 0.09, and 0.25 for Noise-v1, Noise-v2, and Noise-v3). The original clean images can be regarded as the ground truth. The Noise-v3 subset has the highest noise intensity among the three groups.

    | Metrics | NUDT-SIRST | NUDT-IRSTDn Noise.v1 | NUDT-IRSTDn Noise.v2 | NUDT-IRSTDn Noise.v3 |
    | --- | --- | --- | --- | --- |
    | LSCR | 0.402~19.05 | 0.402~5 | 0.402~3.5 | 0.402~2 |
    | LSCR' | 5.68 | 4.364 | 3.205 | 1.687 |
    | σ | - | 0~0.06 | 0~0.1 | 0~0.5 |
    | σ' | - | 0.013 | 0.04 | 0.154 |
    | PSNR | - | 21.5~40.2 | 20.9~34.1 | 9.9~24.4 |
    | PSNR' | - | 31.88 | 25.89 | 17.31 |
    | Number | 1327 | 1327 | 1327 | 1327 |

    Table 1. Main characteristics of NUDT-SIRST and NUDT-IRSTDn (primed metrics denote average values)

    2.2 Implementation details

    To simulate IR images subject to complex noise interference and to better compare the influence of different noise intensities on subsequent tasks, we did not directly add the same level of noise to each initial image. The synthesis process of our dataset is shown in Fig. 4. We first used the LSCR as a quantitative metric of detection complexity and set three detection thresholds $T_{dec}$ (i.e., 5, 3.5, and 2). Then, we adopted an adaptive noise-level function to adjust the noise level σ and make sure that the LSCR of the noise-added IR image is less than $T_{dec}$. The LSCR is defined as follows:

    $\mathrm{LSCR} = \dfrac{\left| \mu_b - \mu_t \right|}{\sigma_b}$,

    where $\mu_b$, $\mu_t$, and $\sigma_b$ are the local background gray mean, the target gray mean, and the local background gray standard deviation, respectively. We set the local background of a target as a rectangle centered at the target position with a fixed width and height of 20 pixels. To eliminate the influence of the target region, we exclude the target pixels inside the rectangle. Some examples of the developed dataset are shown in Fig. 3, and a sketch of the synthesis step is given below.
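
    The following NumPy sketch illustrates this adaptive synthesis under our reading of the text: the LSCR is computed over the 20×20 local background (target pixels excluded), and the AWGN level is increased until the LSCR falls below the chosen threshold. The step size, the [0, 1] intensity range, and the target-mask convention are assumptions.

```python
import numpy as np

def lscr(img: np.ndarray, target_mask: np.ndarray, cx: int, cy: int, half: int = 10) -> float:
    """LSCR = |mu_b - mu_t| / sigma_b over a 20x20 local background centered at (cx, cy)."""
    patch = img[cy - half:cy + half, cx - half:cx + half]
    mask = target_mask[cy - half:cy + half, cx - half:cx + half].astype(bool)
    mu_t = img[target_mask.astype(bool)].mean()          # target gray mean
    background = patch[~mask]                             # background pixels, target excluded
    return abs(background.mean() - mu_t) / (background.std() + 1e-8)

def add_adaptive_noise(img: np.ndarray, target_mask: np.ndarray, cx: int, cy: int,
                       t_dec: float, sigma_step: float = 0.01, rng=np.random.default_rng()):
    """Increase the AWGN level until the LSCR of the noisy image drops below t_dec."""
    sigma = 0.0
    noisy = img.copy()                                    # img assumed normalized to [0, 1]
    while lscr(noisy, target_mask, cx, cy) >= t_dec:
        sigma += sigma_step
        noisy = np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)
    return noisy, sigma
```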

    Figure 3. Examples of the developed dataset, including (a0)-(i0) clean images; (a1)-(i1) level-1 noisy images; (a2)-(i2) level-2 noisy images; (a3)-(i3) level-3 noisy images

    Figure 4. Synthesis process of our dataset

    As shown in Table 1, compared with the original noise-free NUDT-SIRST dataset, our developed NUDT-IRSTDn dataset provides a much larger number of images (i.e., 3981 vs. 1327) under varied LSCR values. The LSCR values of NUDT-IRSTDn lie in 0.402-5, 0.402-3.5, and 0.402-2 for Noise-v1, Noise-v2, and Noise-v3, respectively, which are much smaller than those of NUDT-SIRST. Moreover, the average LSCR values (i.e., LSCR') of NUDT-IRSTDn are 4.36, 3.20, and 1.68 for Noise-v1, Noise-v2, and Noise-v3, respectively. Such visually non-salient targets make precise detection considerably more difficult.

    3 Experiments

    3.1 Experiment setting

    1) Implementation details: We conducted extensive experiments on the NUDT-IRSTDn dataset. To be consistent with the NUDT-SIRST dataset, we divided each group of the dataset into a training set and a test set with a ratio of 1:1. We resized all input IR images to 256×256 pixels. The batch size and learning rate during network training were set to 8 and 1×10^-5, respectively. We used the mean square error (MSE) as the loss function of our network. All models were implemented in PyTorch on a computer with an Intel Xeon Gold 5117 CPU and an Nvidia Tesla V100 GPU.
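
    A minimal training-loop sketch matching the stated settings (256×256 inputs, batch size 8, learning rate 1×10^-5, MSE loss) is given below; the optimizer choice (Adam), the number of epochs, and the data-loading details are our assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train(model: nn.Module, train_set, epochs: int = 100, device: str = "cuda"):
    # Settings from the paper: batch size 8, learning rate 1e-5, MSE loss.
    loader = DataLoader(train_set, batch_size=8, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)   # optimizer choice assumed
    criterion = nn.MSELoss()
    model.to(device).train()
    for _ in range(epochs):
        for noisy, clean in loader:                 # (B, 1, 256, 256) noisy/clean pairs
            noisy, clean = noisy.to(device), clean.to(device)
            loss = criterion(model(noisy), clean)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```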

    2) Evaluation metrics: Following previous works [10, 15], we used PSNR and SSIM to evaluate the quality of the recovered images. We also adopted detection metrics (intersection over union (IoU), probability of detection (Pd), and false-alarm rate (Fa)) to evaluate the practical benefit of the denoising methods.
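
    For reference, a sketch of how these image-quality metrics can be computed with scikit-image is shown below, assuming images normalized to [0, 1]; the exact evaluation protocol (data range, window size) used in the paper is not specified.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(denoised: np.ndarray, clean: np.ndarray) -> tuple[float, float]:
    """PSNR (dB) and SSIM between a denoised image and its clean reference in [0, 1]."""
    psnr = peak_signal_noise_ratio(clean, denoised, data_range=1.0)
    ssim = structural_similarity(clean, denoised, data_range=1.0)
    return psnr, ssim
```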

    3.2 Experimental results and analysis

    1) Denoising results: To verify the superiority of our method, we compared our GCAN with state-of-the-art methods, including conventional model-based methods (BM3D [10], WNNM [13], and K-SVD [11]) and CNN-based methods (REDCNN [16] and DnCNN [15]), on the NUDT-IRSTDn dataset. The proposed method and the comparative methods are evaluated on the test sets of the three subsets (i.e., Noise-v1, Noise-v2, and Noise-v3) of NUDT-IRSTDn. The PSNR and SSIM results are presented in Table 2. We can observe that our GCAN achieves higher performance than the three model-based methods and the two learning-based methods in terms of PSNR. As shown in Table 2, our GCAN achieves a much higher PSNR than DnCNN (i.e., 45.5 vs. 44.3, 42.1 vs. 40.3, and 33.7 vs. 33.6 dB). It is worth noting that a 1 dB improvement in PSNR is substantial for the denoising task. This demonstrates the superiority of our method in recovering clean images. Meanwhile, the higher SSIM index also proves that our method has a stronger ability to recover accurate details and distinguish fine structural information from complex noise. The qualitative results are shown in Fig. 5, where the zoomed images clearly show the regions of interest. It can be observed that GCAN suppresses different levels of noise and better preserves the details of the target. Compared to GCAN w/o GCAM, as shown in Table 3, our GCAN achieves a 0.9 dB increase (45.5 vs. 44.6) in terms of PSNR on the Noise-v1 subset. That is because our GCAM can adaptively enhance the input feature map along the channel dimension: more informative channel-dimension feature maps are enhanced, leading to better denoising results.

    | Denoising Method | Noise.v1 PSNR (dB) | Noise.v1 SSIM | Noise.v2 PSNR (dB) | Noise.v2 SSIM | Noise.v3 PSNR (dB) | Noise.v3 SSIM |
    | --- | --- | --- | --- | --- | --- | --- |
    | BM3D | 36.7 | 0.75 | 31.0 | 0.52 | 19.4 | 0.23 |
    | WNNM | 34.6 | 0.38 | 33.1 | 0.36 | 30.3 | 0.28 |
    | K-SVD | 35.2 | 0.62 | 34.0 | 0.43 | 31.2 | 0.27 |
    | REDCNN | 36.8 | 0.87 | 35.6 | 0.82 | 29.5 | 0.74 |
    | DnCNN | 44.3 | 0.93 | 40.3 | 0.91 | 33.6 | 0.87 |
    | SWINIR | 44.9 | 0.92 | 41.7 | 0.97 | 34.3 | 0.87 |
    | GCAN | 45.5 | 0.96 | 42.1 | 0.96 | 33.7 | 0.88 |

    Table 2. PSNR and SSIM values achieved by different denoising methods under the varied noise-level datasets

    | Method | #Params (M) | FLOPs (G) | PSNR (dB) |
    | --- | --- | --- | --- |
    | GCAN w/o GCAM | 1.848 | 83.89 | 44.6 |
    | GCAN | 2.345 | 157.30 | 45.5 |

    Table 3. Ablation study on our proposed GCAM module (PSNR on the Noise-v1 subset)

    2) Effectiveness of denoising for detection: In this subsection, we evaluated the effectiveness of the denoising methods by examining whether they can help the subsequent detection task maintain its performance under varied noisy environments.

    Firstly, we evaluated the influence of additive noise on subsequent target detection. We selected five typical infrared small target detection methods (Top-Hat [17], RIPT [18], ACM [19], UNet [20], and DNANet [9]) to detect targets in the original image dataset and the corresponding three noise-level image datasets. The quantitative detection results on the four datasets are listed in Table 5. It can be observed that, with the increase of the noise intensity of the datasets (i.e., Oriset, Noise-v1, Noise-v2, and Noise-v3), the IoU values of the above five detection methods all gradually decrease. For example, with our denoising as pre-processing, the detection method DNANet achieves much better results than with DnCNN denoising (i.e., 1.6% higher IoU, 1.6% higher Pd, and an 8.1×10^-5 lower Fa on the Noise-v1 subset). This is important for the infrared small target detection task under varied conditions of the imaging device and the external environment.

    | Detection Method | Oriset | Noise.v1 | Noise.v2 | Noise.v3 |
    | --- | --- | --- | --- | --- |
    | Top-Hat [17] | 25.8 | 23.6 | 13.0 | 5.21 |
    | RIPT [18] | 35.2 | 26.3 | 14.9 | 7.75 |
    | ACM [19] | 44.1 | 39.1 | 20.7 | 1.19 |
    | UNet [20] | 79.5 | 64.7 | 38.4 | 19.0 |
    | DNANet [9] | 88.6 | 64.6 | 38.3 | 5.5 |

    Table 5. IoU (×10^-2) values achieved by different detection methods under the varied noise-level datasets

    Then, we compared the detection results on the denoised images to evaluate the performance of the denoising methods. We adopted Top-Hat [17] and DNANet [9] as representatives of traditional and deep learning SIRST detection methods, respectively. As shown in Table 4, the improvements achieved by our GCAN over the other denoising methods are obvious. This demonstrates that our GCAN performs better at removing noise and retaining important details at different noise levels. Note that the detection results on the images denoised by WNNM are even worse than those on the noisy images because of the over-smoothing of the target regions. Therefore, a denoising method for IR small target images needs to remove the noise while effectively retaining the details of the target region, thus alleviating the degradation of detection performance under complex noise conditions.

    | Denoising Method | Noise.v1 Top-Hat [17] | Noise.v1 DNANet [9] | Noise.v2 Top-Hat [17] | Noise.v2 DNANet [9] | Noise.v3 Top-Hat [17] | Noise.v3 DNANet [9] |
    | --- | --- | --- | --- | --- | --- | --- |
    | BM3D [10] | 23.6/37.5/1.9 | 61.1/72.1/17.7 | 13.2/27.4/3.04 | 39.4/49.3/32.9 | 5.42/21.3/128 | 5.25/30.8/18.0 |
    | WNNM [13] | 1.89/6.55/14.5 | 1.75/1.58/1.13 | 2.11/7.07/21.55 | 2.07/1.90/0.95 | 1.13/3.91/7.82 | 0.75/0.63/0.70 |
    | K-SVD [11] | 21.1/26.3/12.3 | 58.9/67.3/28.1 | 13.3/26.2/45.1 | 42.1/51.2/52.0 | 5.14/18.5/86.7 | 2.12/32.5/29.1 |
    | RED-CNN [16] | 13.2/26.9/39.4 | 44.5/58.1/1.91 | 5.33/14.8/3.25 | 28.1/28.8/3.92 | 1.67/6.61/3.76 | 3.57/10.2/10.0 |
    | DnCNN [15] | 23.9/39.4/2.05 | 72.9/95.1/1.21 | 21.1/35.4/1.96 | 60.4/86.2/1.30 | 6.29/18.3/2.75 | 15.2/26.2/5.43 |
    | GCAN (ours) | 24.1/41.7/1.48 | 74.5/96.7/0.40 | 22.0/38.4/1.70 | 61.6/87.9/1.00 | 8.38/20.2/2.61 | 17.5/29.2/1.07 |

    Table 4. IoU (×10^-2), Pd (×10^-2), and Fa (×10^-4) values (each cell: IoU/Pd/Fa) achieved by detection methods after pre-processing with noise reduction methods under the varied noise-level datasets

    3) Computational efficiency: As shown in Table 6, the GFLOPs, inference time, parameters, and PSNR of our GCAN are 157.30 G, 0.206 s, 2.345 M, and 45.5 dB, respectively. Compared to the three benchmark deep learning-based methods, our method achieves much better denoising performance in terms of PSNR but introduces a larger model size, a longer inference time, and extra computation cost (i.e., FLOPs). This may introduce inference delay in computation-resource-limited scenarios, but it is still affordable when a GPU is available.

    | Denoising Method | GFLOPs (G) | Inference Time (s) | Params (M) | PSNR (dB) |
    | --- | --- | --- | --- | --- |
    | RED-CNN [16] | 83.89 | 0.156 | 1.848 | 44.6 |
    | DnCNN [15] | 43.79 | 0.307 | 0.668 | 44.3 |
    | SWINIR [21] | 49.64 | 0.271 | 11.80 | 44.9 |
    | GCAN | 157.30 | 0.206 | 2.345 | 45.5 |

    Table 6. GFLOPs, inference time (s), parameters, and PSNR performance of different denoising methods

    4 Conclusion

    In this paper, we propose a simple yet effective gradient-aware channel attention network (GCAN) for infrared small target image denoising before detection. To support this data-driven learning manner, we develop an infrared image denoising dataset that contains three noise-level subsets. Then, we propose a novel infrared image denoising method (namely, GCAN) to achieve high-performance image denoising. Specifically, an encoder-decoder-based denoising network is used to initially remove the additive noise. Then, a residual connection structure and a gradient-based channel attention module (GCAM) are designed to preserve informative image details in IR images. Some conclusions can be summarized as follows:

    (1) Compared to four benchmark denoising methods, GCAN achieves better denoising performance in terms of PSNR and SSIM. Better visual denoising quality is also achieved.

    (2) The gradient-based channel attention module (GCAM) can avoid the over-smoothing of IR images and effectively maintain the response of small target regions. Extensive experiments on five benchmark detection methods verify the effectiveness of our method in terms of IoU, Pd, and Fa.

    (3) Although better performance is achieved, a larger model size and extra computation cost (i.e., FLOPs) are introduced. More light-weight computation operators and simpler networks will be explored in future work to increase the practicality on computation-resource-limited devices.

    References

    [1] Y Sun, J Yang, W An. Infrared dim and small target detection via multiple subspace learning and spatial-temporal patch-tensor model. IEEE Transactions on Geoscience and Remote Sensing, 59, 3737-3752(2020).

    [2] T Wu, B Li, Y Luo. MTU-Net: Multilevel TransUNet for Space-Based Infrared Tiny Ship Detection. IEEE Transactions on Geoscience and Remote Sensing, 61, 1-15(2023).

    [3] B Li, Y Wang, L Wang et al. Monte Carlo Linear Clustering with Single-Point Supervision is Enough for Infrared Small Target Detection, 455-468(2023).

    [4] T Liu, J Yang, B Li et al. Nonconvex tensor low-rank approximation for infrared small target detection. IEEE Transactions on Geoscience and Remote Sensing, 60, 1-18(2021).

    [5] B Li, Y Guo, J Yang et al. Gated recurrent multi-attention network for VHR remote sensing image classification. IEEE Transactions on Geoscience and Remote Sensing, 60, 1-13(2021).

    [6] T Liu, J Yang, B Li et al. Infrared small target detection via nonconvex tensor tucker decomposition with factor prior. IEEE Transactions on Geoscience and Remote Sensing, 62, 25-38(2023).

    [7] J Zhou, L Wang, B Liu. Analysis of the causes of non-uniformity in infrared images. Infrared and Laser Engineering, 26, 11-13(1997).

    [8] B Goyal, A Dogra, S Agrawal et al. Image denoising review: From classical to state-of-the-art approaches. Information Fusion, 55, 220-244(2020).

    [9] B Li, C Xiao, L Wang et al. Dense nested attention network for infrared small target detection. IEEE Transactions on Image Processing, 32, 1745-1758(2023).

    [10] K Dabov, A Foi, V Katkovnik et al. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Transactions on Image Processing, 16, 2080-2095(2007).

    [11] M Aharon, M Elad, A Bruckstein. K-svd: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on signal processing, 54, 4311-4322(2006).

    [12] P Bouboulis, K Slavakis, S Theodoridis. Adaptive kernel based image denoising employing semi-parametric regularization. IEEE Transactions on Image Processing, 19, 1465-1479(2010).

    [13] S Gu, Q Xie, D Meng et al. Weighted nuclear norm minimization and its applications to low level vision. International journal of computer vision, 121, 183-208(2017).

    [14] V Jain, H S Seung. Natural Image Denoising with Convolutional Networks, 455-468(2008).

    [15] K Zhang, W Zuo, Y Chen et al. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Transactions on Image Processing, 26, 3142-3155(2017).

    [16] H Chen, Y Zhang, W Zhang et al. Low-dose CT via convolutional neural network. Biomedical Optics Express, 8, 679-694(2017).

    [17] J.-F Rivest, R Fortin. Detection of dim targets in digital infrared imagery by morphological image processing. Optical Engineering, 35, 1886-1893(1996).

    [18] N Chirdchoo, W S Soh, K C Chua. Ript: A receiver-initiated reservation-based protocol for underwater acoustic networks. IEEE Journal on Selected Areas in Communications, 26, 1744-1753(2008).

    [19] Y Dai, Y Wu, F Zhou et al. Asymmetric contextual modulation for infrared small target detection, 950-959(2021).

    [20] O Ronneberger, P Fischer, T Brox. U-net: Convolutional networks for biomedical image segmentation, 234-241(2015).

    [21] J Liang, J Cao, G Sun et al. Swinir: Image restoration using swin transformer, 1833-1844(2021).
