Laser Journal
Vol. 45, Issue 7, 91 (2024)
CHENG Huanxin, XU Haotian, and LUO Xiaoling*
Author Affiliations
  • Qingdao University of Science and Technology, Qingdao, Shandong 266061, China
    DOI: 10.14016/j.cnki.jgzz.2024.07.091
    CHENG Huanxin, XU Haotian, LUO Xiaoling. Automatic driving target detection method based on improved YOLOv7[J]. Laser Journal, 2024, 45(7): 91.
    References

    [1] Zhang T, Zhang X, Ke X, et al. HOG-ShipCLSNet: A novel deep learning network with hog feature fusion for SAR ship classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 60: 1-22.

    [2] Zha H, Miao Y, Wang T, et al. Improving unmanned aerial vehicle remote sensing-based rice nitrogen nutrition index prediction with machine learning[J]. Remote Sensing, 2020, 12(2): 215.

    [3] Girshick R, Donahue J, Darrell T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2014: 580-587.

    [4] Girshick R. Fast R-CNN[C]//Proceedings of the IEEE international conference on computer vision. 2015: 1440-1448.

    [5] Ren S, He K, Girshick R, et al. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2017, 39(6): 1137-1149.

    [6] Bharati P, Pramanik A. Deep learning techniques—R-CNN to Mask R-CNN: A survey[C]//Computational Intelligence in Pattern Recognition. 2020: 657-668.

    [7] Meng R, Rice S G, Wang J, et al. A fusion steganographic algorithm based on Faster R-CNN[J]. Computers, Materials & Continua, 2018(4): 16.

    [8] Li X, Xu Z, Shen X, et al. Detection of cervical cancer cells in whole slide images using deformable and global context aware Faster RCNN-FPN[J]. Current Oncology, 2021, 28(5): 3585-3601.

    [10] Liu W, Anguelov D, Erhan D, et al. SSD: Single shot multibox detector[C]//Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part I. Springer International Publishing, 2016: 21-37.

    [11] Redmon J, Divvala S, Girshick R, et al. You only look once: Unified, real-time object detection[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 779-788.

    [12] Redmon J, Farhadi A. YOLO9000: better, faster, stronger[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2017: 7263-7271.

    [13] Redmon J, Farhadi A. YOLOv3: An incremental improvement[J]. arXiv preprint arXiv:1804.02767, 2018.

    [14] Bochkovskiy A, Wang C Y, Liao H Y M. YOLOv4: Optimal speed and accuracy of object detection[J]. arXiv preprint arXiv:2004.10934, 2020.

    [15] Albayrak A, Özerdem M S. Gas cylinder detection using deep learning based YOLOv5 object detection method[C]//2022 7th International Conference on Computer Science and Engineering (UBMK). IEEE, 2022: 434-437.

    [18] Wang C Y, Bochkovskiy A, Liao H Y M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[J]. arXiv preprint arXiv:2207.02696, 2022.

    [19] Liu Z, Lin Y, Cao Y, et al. Swin transformer: Hierarchical vision transformer using shifted windows[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2021: 10012-10022.

    [20] Pan X, Ge C, Lu R, et al. On the integration of self-attention and convolution[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 815-825.

    [21] Bodla N, Singh B, Chellappa R, et al. Soft-NMS: Improving object detection with one line of code[C]//Proceedings of the IEEE international conference on computer vision. 2017: 5561-5569.

    [22] Geiger A, Lenz P, Stiller C, et al. Vision meets robotics: The KITTI dataset[J]. The International Journal of Robotics Research, 2013, 32(11): 1231-1237.
