
- Journal of Infrared and Millimeter Waves
- Vol. 43, Issue 4, 582 (2024)
Introduction
The application range of unmanned aerial vehicles (UAVs) is constantly expanding, encompassing areas such as military reconnaissance, outdoor photography, and power line inspection. At the same time, however, this growth has given rise to a series of social issues, including privacy infringement when UAVs are used for illicit filming and potential threats to national security posed by military applications of UAVs. Research on anti-UAV technology therefore has significant practical value. Infrared UAV target detection uses infrared imaging to continuously monitor UAVs; it enables detection based on the target's infrared radiation and offers clear advantages in low-light conditions.
However, infrared UAV target detection faces challenges. Because of the long distance between the UAV and the sensor, small infrared targets lack distinctive texture and shape features, which hampers detection. Additionally, background clutter and noise, such as clouds and buildings, can be confused with the target and cause false alarms.
Figure 1. Infrared small UAVs
In recent years, object detection methods based on deep learning have emerged continuously and achieved impressive detection performance. These methods can be categorized into two types based on how they process input images. The first type is the two-stage detection method, such as the region-based R-CNN and its variants, which first generate region proposals and then classify and refine them. The second type is the one-stage detection method, such as SSD [5] and the YOLO series [6], which predict object locations and classes in a single pass and are therefore faster.
This paper proposes a UAV detection method based on an improved YOLOv5s model to address the challenges of detecting small targets. The original YOLOv5 structure includes only three detection heads, which are not effective at extracting the features of small UAV targets captured by infrared cameras at long distances. To address this issue, this paper adds a detection head suited to small targets to YOLOv5. Additionally, the Intersection over Union (IoU) used in the original YOLOv5 model is a poor metric for small-target detection tasks. This paper therefore replaces it with a metric better suited to small targets, the Normalized Gaussian Wasserstein Distance (NWD) [8].
1 Method to improve YOLOv5
YOLOv5 can be categorized into five architectures based on the depth and width of the model: YOLOv5n, YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x. To balance detection speed and accuracy, we choose to improve YOLOv5s. The YOLOv5 network structure consists of three main components. CSP-Darknet53 serves as the backbone feature extraction network, extracting features from the input image; it is an improved version of Darknet53 from YOLOv3 that uses the Cross-Stage Partial (CSP) network strategy to reduce parameters and computation, thereby increasing inference speed. In the middle part, YOLOv5 combines two modules, SPPF and PANet [9], to fuse features across scales. Finally, the detection head predicts object classes and bounding boxes from the fused feature maps.
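To make the CSP idea concrete, the following is a minimal sketch of a CSP-style bottleneck in PyTorch. The module names, channel split, and block depth are illustrative assumptions, not the exact YOLOv5 implementation:

```python
import torch
import torch.nn as nn

class ConvBNSiLU(nn.Module):
    """Conv -> BatchNorm -> SiLU, the basic unit used throughout YOLOv5-style nets."""
    def __init__(self, c_in, c_out, k=1, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class CSPBlock(nn.Module):
    """Cross-Stage Partial block: split the channels, run only one branch
    through the bottlenecks, then concatenate - cutting computation versus
    passing all channels through the full stack."""
    def __init__(self, c_in, c_out, n=1):
        super().__init__()
        c_half = c_out // 2
        self.branch1 = ConvBNSiLU(c_in, c_half)
        self.branch2 = ConvBNSiLU(c_in, c_half)
        self.blocks = nn.Sequential(*[
            nn.Sequential(ConvBNSiLU(c_half, c_half), ConvBNSiLU(c_half, c_half, k=3))
            for _ in range(n)
        ])
        self.fuse = ConvBNSiLU(2 * c_half, c_out)

    def forward(self, x):
        y1 = self.blocks(self.branch1(x))  # transformed half
        y2 = self.branch2(x)               # shortcut half
        return self.fuse(torch.cat((y1, y2), dim=1))
```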
1.1 Small object detection head
The YOLOv5 backbone performs five downsampling stages, producing five feature maps (P1-P5) with resolutions of 1/2, 1/4, 1/8, 1/16, and 1/32 of the input image size, respectively. The neck combines multi-scale features in a top-down and bottom-up manner without changing the feature map sizes, and the detection head operates on the P3-P5 feature maps. This design reflects the relationship between feature layer size and receptive field in YOLOv5. The receptive field is the region of the input image that influences each output unit of a convolutional neural network. A larger receptive field captures more of an object's context, making it suitable for detecting larger objects; a smaller receptive field captures only a limited region, making it suitable for detecting smaller objects. A smaller receptive field means each pixel in the feature map is influenced by a smaller region of the original image, enabling more precise localization of object positions and boundaries without interference from irrelevant regions. It also corresponds to a larger feature map, which preserves more spatial information and avoids losing the fine details of small objects. A smaller receptive field is therefore better suited to detecting small objects.
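As a quick illustration of these scale relationships, the snippet below computes the stride and feature-map size of each level for an assumed 640×640 input; the numbers follow directly from the halving-per-stage downsampling described above:

```python
# Feature-map sizes for a 640x640 input: each level Pi halves the previous one.
input_size = 640
for i in range(1, 6):                # P1 .. P5
    stride = 2 ** i                  # cumulative downsampling factor
    fmap = input_size // stride      # spatial size of the Pi feature map
    print(f"P{i}: stride {stride:2d}, feature map {fmap}x{fmap}")
# P2 -> stride 4, 160x160, matching the small-object head described below.
```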
We add a new detection head for small objects after the P2 feature layer of the YOLOv5 model. This head operates on the P2 layer at a resolution of 160×160 pixels, corresponding to two downsampling operations in the backbone network. Each pixel of the P2 layer has a receptive field of 10×10 pixels, the smallest among the P2-P5 feature extraction layers. Additionally, we assign different object-loss weights to the P2-P5 feature layers according to target size: a weight of 4 to the P2 and P3 layers, a weight of 1 to the P4 layer, and a weight of 0.4 to the P5 layer. The purpose of this weighting scheme is to enhance the focus on small and tiny objects while reducing overfitting to large objects. The weighted object loss is

$$L_{obj} = 4\,L_{P2} + 4\,L_{P3} + 1\,L_{P4} + 0.4\,L_{P5},$$

where $L_{Pi}$ denotes the objectness loss computed on feature level $Pi$.
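A minimal sketch of this per-level weighting, assuming each level's objectness loss has already been computed as a scalar tensor; the helper name and calling convention are illustrative, not the YOLOv5 source:

```python
import torch

# Per-level weights from the paper: emphasize P2/P3 (small targets),
# down-weight P5 (large targets).
LEVEL_WEIGHTS = {"P2": 4.0, "P3": 4.0, "P4": 1.0, "P5": 0.4}

def weighted_obj_loss(level_losses: dict[str, torch.Tensor]) -> torch.Tensor:
    """Combine per-level objectness losses with the paper's weighting scheme."""
    return sum(LEVEL_WEIGHTS[name] * loss for name, loss in level_losses.items())

# Example with dummy scalar losses:
losses = {k: torch.rand(()) for k in LEVEL_WEIGHTS}
total = weighted_obj_loss(losses)
```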
Figure 2. Improved YOLOv5s network architecture
Figure 3. Small target detection head
1.2 Normalized Gaussian Wasserstein Distance
In YOLOv5, IoU is used to measure how well a predicted bounding box matches a ground-truth bounding box; it is computed as the ratio of the intersection area to the union area of the two boxes. However, in UAV images captured by infrared devices, the overlap between the bounding boxes of small targets is often very small, resulting in low IoU values. As shown in Fig. 4, for small targets even a deviation of a few pixels between the predicted and ground-truth boxes causes a drastic change in IoU, so IoU is overly sensitive to localization error at small scales.
Figure 4. The sensitivity analysis of IoU on infrared small UAV
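To make this sensitivity concrete, here is a small numeric check (not from the paper): shifting a box by the same number of pixels collapses the IoU of a tiny box far more than that of a large one.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A 6x6 target vs. a 60x60 target, both shifted by 3 pixels:
small, small_shift = (0, 0, 6, 6), (3, 0, 9, 6)
large, large_shift = (0, 0, 60, 60), (3, 0, 63, 60)
print(iou(small, small_shift))   # 0.33... - a drastic drop
print(iou(large, large_shift))   # 0.90... - barely affected
```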
We adopt a metric known as the Normalized Gaussian Wasserstein Distance (NWD) [8], which is well suited to small object detection. It is insensitive to target scale, allowing a better assessment of similarity between small objects. Specifically, the method models each object bounding box as a two-dimensional Gaussian distribution and computes the NWD between the predicted and ground-truth distributions:

$$NWD(\mathcal{N}_A, \mathcal{N}_B) = \exp\left(-\frac{\sqrt{W_2^2(\mathcal{N}_A, \mathcal{N}_B)}}{C}\right),$$

$$W_2^2(\mathcal{N}_A, \mathcal{N}_B) = \left\| \left[cx_A,\, cy_A,\, \tfrac{w_A}{2},\, \tfrac{h_A}{2}\right]^{\mathrm{T}} - \left[cx_B,\, cy_B,\, \tfrac{w_B}{2},\, \tfrac{h_B}{2}\right]^{\mathrm{T}} \right\|_2^2,$$

where $\mathcal{N}_A$ and $\mathcal{N}_B$ are the Gaussian distributions modeled from boxes $A=(cx_A, cy_A, w_A, h_A)$ and $B=(cx_B, cy_B, w_B, h_B)$, $W_2^2$ is the squared 2-Wasserstein distance between them, and $C$ is a constant related to the dataset.
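A minimal PyTorch sketch of this metric for boxes in (cx, cy, w, h) format; the constant C is a free hyperparameter here (the value used in the paper is not stated in this excerpt):

```python
import torch

def nwd(box_a: torch.Tensor, box_b: torch.Tensor, c: float = 12.8) -> torch.Tensor:
    """Normalized Gaussian Wasserstein Distance between boxes in (cx, cy, w, h).

    Each box is modeled as a 2-D Gaussian with mean (cx, cy) and half-extents
    (w/2, h/2); c is a dataset-dependent constant - an assumed placeholder
    value here, not the paper's.
    """
    ga = torch.stack([box_a[..., 0], box_a[..., 1],
                      box_a[..., 2] / 2, box_a[..., 3] / 2], dim=-1)
    gb = torch.stack([box_b[..., 0], box_b[..., 1],
                      box_b[..., 2] / 2, box_b[..., 3] / 2], dim=-1)
    w2 = ((ga - gb) ** 2).sum(dim=-1)        # squared 2-Wasserstein distance
    return torch.exp(-torch.sqrt(w2) / c)    # normalize to (0, 1]

# The corresponding regression loss is simply 1 - NWD:
pred = torch.tensor([50.0, 50.0, 6.0, 6.0])
gt   = torch.tensor([53.0, 50.0, 6.0, 6.0])
loss = 1.0 - nwd(pred, gt)
```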
2 Experiments and deployment
2.1 Dataset introduction
The experimental analysis in this article is based on selected subsets of the dataset provided by the 3rd Anti-UAV Workshop & Challenge [11]. The dataset's label category distribution, bounding-box size distribution, label center position distribution, and label size distribution are analyzed in Fig. 5.
Figure 5. Dataset analysis: (a) the label category distribution; (b) the bounding box size distribution; (c) the label center position distribution; (d) the label size distribution
2.2 Evaluation index
To verify model performance, this article uses Average Precision (AP), where AP@0.5 and AP@0.5:0.95 denote the detection accuracy under different IoU thresholds. AP@0.5 is the average precision of a category at an IoU threshold of 0.5, computed as the area under the Precision (P) versus Recall (R) curve across confidence levels. AP@0.5:0.95 is the average precision with the IoU threshold swept from 0.5 to 0.95 in steps of 0.05; this indicator demands a higher degree of overlap with the target box. The False Alarm Rate (FAR) and Miss Rate (MR) are defined as

$$FAR = \frac{FP}{TP + FP}, \qquad MR = \frac{FN}{TP + FN},$$
where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively.
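For concreteness, a small helper computing these rates from raw counts; the function name and the example counts are illustrative, not from the paper:

```python
def detection_rates(tp: int, fp: int, fn: int) -> dict:
    """Rates used in the paper's tables.

    FAR = FP / (TP + FP)  (fraction of detections that are false alarms)
    MR  = FN / (TP + FN)  (fraction of ground-truth targets that are missed)
    """
    far = fp / (tp + fp) if (tp + fp) else 0.0
    mr = fn / (tp + fn) if (tp + fn) else 0.0
    return {"precision": 1 - far, "recall": 1 - mr, "FAR": far, "MR": mr}

# e.g. 920 correct detections, 40 false alarms, 130 missed targets:
print(detection_rates(920, 40, 130))   # FAR ~ 4.2%, MR ~ 12.4%
```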
2.3 Experimental environment and parameter settings
In this work, all experiments were conducted on the Ubuntu 22.04 operating system with 128 GB of RAM and an Intel i9-13900K processor. The system was equipped with an NVIDIA RTX 3090 Ti graphics card with 24 GB of video memory; the deep learning framework was PyTorch 1.12.1, and the programming language was Python 3.10.
The optimization algorithm used for model training was Stochastic Gradient Descent (SGD), with an initial learning rate of 0.01, a momentum of 0.937, and a weight decay coefficient of 0.0005. The model was trained for 300 epochs with a batch size of 64.
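These hyperparameters map directly onto a standard PyTorch SGD configuration; the model below is a stand-in placeholder, not the actual network:

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 16, 3)  # stand-in for the improved YOLOv5s model

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.01,            # initial learning rate from the paper
    momentum=0.937,
    weight_decay=5e-4,  # weight decay coefficient 0.0005
)
```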
2.4 Experimental results
The experimental results for the different loss-function weight assignments across feature layers are shown in Table 1.
| Weighting coefficients | AP@0.5 (%) | AP@0.5:0.95 (%) | FAR (%) | MR (%) |
| --- | --- | --- | --- | --- |
| 1·L_P2 + 1·L_P3 + 1·L_P4 + 0.4·L_P5 | 81.2 | 44.9 | 6.4 | 28.4 |
| 1·L_P2 + 4·L_P3 + 1·L_P4 + 0.4·L_P5 | 86.8 | 47.6 | 4.7 | 19.4 |
| 4·L_P2 + 1·L_P3 + 1·L_P4 + 0.4·L_P5 | 84.2 | 46.0 | 7.8 | 22.8 |
| 4·L_P2 + 4·L_P3 + 1·L_P4 + 0.4·L_P5 | 88.4 | 48.6 | 4.0 | 17.4 |
Table 1. Comparison of the different weighting coefficient results
| Models | AP@0.5 (%) | AP@0.5:0.95 (%) | FAR (%) | MR (%) |
| --- | --- | --- | --- | --- |
| YOLOv5s | 84.7 | 46.1 | 3.1 | 23.6 |
| YOLOv5s+0.5NWD | 87.4 | 47.9 | 4.0 | 17.4 |
| YOLOv5s+NWD | 89.9 | 48.1 | 4.4 | 14.6 |
| YOLOv5s+P2 | 88.4 | 48.6 | 4.0 | 17.4 |
| YOLOv5s+NWD+P2 | 91.9 | 50.0 | 4.2 | 12.9 |
Table 2. Comparison of ablation experiments of improved methods
The AP performance comparison is shown in Fig. 6, and example detection results of the improved model are shown in Fig. 7.
Figure 6. Performance comparison of the AP: (a) AP@0.5; (b) AP@0.5:0.95
Figure 7. Some examples of the detection results of the improved model
| Models | AP@0.5 (%) | AP@0.5:0.95 (%) | FAR (%) | MR (%) | Parameters (M) | GFLOPs | Speed (FPS) | Weights (MB) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SSD-ResNet50 | 60.4 | 22.8 | 29.7 | 46.2 | 13.1 | 15.0 | 200 | 105.1 |
| Faster-RCNN-ResNet50 | 78.3 | 30.9 | 21.2 | 40.0 | 41.1 | 134.5 | 50 | 330.3 |
| RetinaNet-ResNet50 | 82.1 | 33.8 | 18.5 | 33.2 | 32.0 | 127.5 | 43 | 257.3 |
| YOLOv3 | 83.3 | 45.2 | 8.7 | 22.4 | 9.3 | 23.1 | 526 | 18.9 |
| YOLOv5s | 84.7 | 46.1 | 3.1 | 23.6 | 7.0 | 15.8 | 625 | 14.4 |
| YOLOv5m | 86.6 | 48.8 | 3.1 | 20.4 | 20.8 | 47.9 | 303 | 42.2 |
| YOLOv5l | 87.7 | 49.1 | 3.8 | 17.6 | 46.1 | 107.6 | 196 | 92.8 |
| YOLOv8s | 89.5 | 48.9 | 6.3 | 17.3 | 11.1 | 28.4 | 435 | 22.5 |
| YOLOv5s+NWD+P2 | 91.9 | 50.0 | 4.2 | 12.9 | 7.7 | 26.8 | 400 | 16.3 |
Table 3. Comparison of improved YOLOv5s with other methods
2.5 Deployment
Our algorithm is deployed on the BM1684X TPU; the deployment flow is shown in Fig. 8.
Figure 8. Framework of the deployment
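Deployment pipelines for the BM1684X typically begin by exporting the trained PyTorch model to an intermediate format before compiling it with the vendor toolchain. The sketch below shows only the ONNX export step, with the model and input size as placeholders, since the exact compilation commands are not given in this excerpt:

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 16, 3)          # placeholder for the trained YOLOv5s+NWD+P2 model
model.eval()

dummy = torch.zeros(1, 3, 640, 640)  # assumed 640x640 network input
torch.onnx.export(
    model, dummy, "yolov5s_nwd_p2.onnx",
    input_names=["images"], output_names=["pred"],
    opset_version=12,
)
# The ONNX file is then compiled into a BM1684X bmodel at FP32/FP16/INT8
# precision with Sophon's toolchain; Table 4 reports the resulting
# speed/accuracy trade-offs.
```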
| Precision (BM1684X) | FPS | AP@0.5 (%) | AP@0.5:0.95 (%) |
| --- | --- | --- | --- |
| FP32 | 12 | 91.9 | 50.0 |
| FP16 | 95 | 87.8 | 49.8 |
| INT8 | 163 | - | - |
Table 4. Result of the deployment
3 Conclusion
To address the challenge of small UAV detection with infrared devices, this paper proposes a lightweight detection model. The model introduces a small-object feature extraction layer at the P2 level of the backbone network and connects it to a high-resolution detection head, enhancing the network's ability to perceive small objects. Additionally, the paper adopts the NWD metric in place of the original IoU-based metric, as NWD provides a better measure for small object instances and improves detection accuracy. Several comparative experiments are conducted on the subsets provided by the 3rd Anti-UAV Workshop & Challenge. The results demonstrate that the proposed model outperforms other mainstream detection models on both the AP@0.5 and AP@0.5:0.95 metrics, validating the effectiveness of the approach, while maintaining a small parameter count, low computational complexity, and a compact weight file.
References
[5] W. Liu, D. Anguelov, D. Erhan, et al. SSD: Single Shot MultiBox Detector. In: Proceedings of the European Conference on Computer Vision (ECCV), 21-37 (2016).
[6] J. Redmon, S. Divvala, R. Girshick, et al. You Only Look Once: Unified, Real-Time Object Detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 779-788 (2016).
[8] J. Wang, C. Xu, W. Yang, et al. A Normalized Gaussian Wasserstein Distance for Tiny Object Detection. arXiv preprint (2021).
[9] S. Liu, L. Qi, H. Qin, et al. Path Aggregation Network for Instance Segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 8759-8768 (2018).
[11] J. Zhao, J. Li, L. Jin, et al. The 3rd Anti-UAV Workshop & Challenge: Methods and Results. arXiv preprint (2023).
