• Optics and Precision Engineering
  • Vol. 31, Issue 20, 3021 (2023)
Dandan HUANG1, Han GAO1, Zhi LIU1,2,*, Lintao YU1, and Huiji WANG1
Author Affiliations
  • 1School of Electronics and Information Engineering, Changchun University of Science and Technology, Changchun 130022, China
  • 2National and Local Joint Engineering Research Center of Space Photoelectric Technology, Changchun University of Science and Technology, Changchun 130022, China
    DOI: 10.37188/OPE.20233120.3021
    Dandan HUANG, Han GAO, Zhi LIU, Lintao YU, Huiji WANG. Lightweight target detection network for UAV platforms[J]. Optics and Precision Engineering, 2023, 31(20): 3021
    References

    [1] X WU, W LI, D F HONG et al. Deep learning for unmanned aerial vehicle-based object detection and tracking: a survey. IEEE Geoscience and Remote Sensing Magazine, 10, 91-124(2022).

    [2] S SRIVASTAVA, S NARAYAN, S MITTAL. A survey of deep learning techniques for vehicle detection from UAV images. Journal of Systems Architecture, 117, 102152(2021).

    [3] FAN L L, ZHAO H W, ZHAO H Y, et al. Survey of target detection based on deep convolutional neural networks[J]. Opt. Precision Eng., 2020, 28(5): 1152-1164. (in Chinese)

    [4] A KRIZHEVSKY, I SUTSKEVER, G E HINTON. ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60, 84-90(2017).

    [5] DU D K, SUN J F, DING Y X, et al. Small object detection based on GM-APD lidar data fusion[J]. Opt. Precision Eng., 2023, 31(3): 393-403. (in Chinese). doi: 10.37188/OPE.20233103.0393

    [6] JIANG B, QU R K, LI Y D, et al. Object detection in UAV imagery based on deep learning: a review[J]. Acta Aeronautica et Astronautica Sinica, 2021, 42(4): 131-145. (in Chinese)

    [7] ZHANG L L, CHEN Z, LIU Y X, et al. Yolo v3-SPP real-time target detection system based on ZYNQ[J]. Opt. Precision Eng., 2023, 31(4): 543-551. (in Chinese). doi: 10.37188/ope.20233104.0543

    [8] Y Q GONG, X H YU, Y DING et al. Effective fusion factor in FPN for tiny object detection, 1159-1167(2021).

    [9] T Y LIN, P DOLLÁR, R GIRSHICK et al. Feature pyramid networks for object detection, 936-944(2017).

    [10] H LI, P XIONG, J AN et al. Pyramid attention network for semantic segmentation. arXiv preprint(2018).

    [11] C Y CHEN, M Y LIU, O TUZEL et al. R-CNN for Small Object Detection. Computer Vision - ACCV 2016, 214-230(2017).

    [12] J N LI, X D LIANG, Y C WEI et al. Perceptual generative adversarial networks for small object detection, 1951-1959(2017).

    [13] L DENG, G Q LI, S HAN et al. Model compression and hardware acceleration for neural networks: a comprehensive survey. Proceedings of the IEEE, 108, 485-532(2020).

    [14] C YANG, Z H HUANG, N Y WANG. QueryDet: cascaded sparse query for accelerating high-resolution small object detection, 13658-13667(2022).

    [15] HUANG H S, RAO X F. Lightweight object detection for drone-captured scenarios[J]. Computer Systems & Applications, 2022, 31(12): 159-168. (in Chinese)

    [16] X K ZHU, S C LYU, X WANG et al. TPH-YOLOv5: improved YOLOv5 based on transformer prediction head for object detection on drone-captured scenarios, 2778-2788(2021).

    [17] J WANG, C XU, W YANG et al. A normalized Gaussian Wasserstein distance for tiny object detection. arXiv:2110.13389(2021). https://arxiv.org/abs/2110.13389

    [18] H REZATOFIGHI, N TSOI, J GWAK et al. Generalized intersection over union: a metric and a loss for bounding box regression, 658-666(2019).

    [19] Z H ZHENG, P WANG, W LIU et al. Distance-IoU loss: faster and better learning for bounding box regression. Proceedings of the AAAI Conference on Artificial Intelligence, 34, 12993-13000(2020).

    [20] S H BAE. Object detection based on region decomposition and assembly. Proceedings of the AAAI Conference on Artificial Intelligence, 33, 8094-8101(2019).

    [21] J CHEN, S H KAO, H HE et al. Run, don't walk: chasing higher FLOPS for faster neural networks. arXiv:2303.03667(2023). https://arxiv.org/abs/2303.03667

    [22] J REDMON, S DIVVALA, R GIRSHICK et al. You only look once: unified, real-time object detection, 779-788(2016).

    [23] J REDMON, A FARHADI. YOLOv3: an incremental improvement. arXiv:1804.02767(2018). https://arxiv.org/abs/1804.02767

    [24] MA L, GONG X T, OUYANG H K. Improvement of Tiny YOLOV3 target detection[J]. Opt. Precision Eng., 2020, 28(4): 988-995. (in Chinese)

    [25] A BOCHKOVSKIY, C Y WANG, H Y M LIAO. YOLOv4: optimal speed and accuracy of object detection. arXiv:2004.10934(2020). https://arxiv.org/abs/2004.10934

    [26] LI X, TE R G, YI F, et al. TCS-YOLO model for global oil storage tank inspection[J]. Opt. Precision Eng., 2023, 31(2): 246-262. (in Chinese). doi: 10.37188/OPE.20233102.0246

    [27] D W DU, P F ZHU, L Y WEN et al. VisDrone-DET2019: the vision meets drone object detection in image challenge results, 213-226(2019).

    [28] K HAN, Y H WANG, Q TIAN et al. GhostNet: more features from cheap operations, 1577-1586(2020).

    [29] Z LI, C PENG, G YU et al. Light-head R-CNN: in defense of two-stage object detector. arXiv:1711.07264(2017). https://arxiv.org/abs/1711.07264

    [30] Z W CAI, N VASCONCELOS. Cascade R-CNN: delving into high quality object detection, 6154-6162(2018).

    [31] T Y LIN, P GOYAL, R GIRSHICK et al. Focal loss for dense object detection, 2999-3007(2017).
