• Optics and Precision Engineering
  • Vol. 32, Issue 10, 1595 (2024)
Renxiang CHEN1,*, Tianran QIU1, Lixia YANG2, Tenwei YU1, Fei JIA1 and Cai CHEN3
Author Affiliations
  • 1Chongqing Engineering Laboratory of Traffic Engineering Application Robot, Chongqing Jiaotong University, Chongqing 400074, China
  • 2School of Business Administration, Chongqing University of Science and Technology, Chongqing 401331, China
  • 3Chongqing Intelligent Robot Research Institute, Chongqing 4000714, China
    DOI: 10.37188/OPE.20243210.1595
    Renxiang CHEN, Tianran QIU, Lixia YANG, Tenwei YU, Fei JIA, Cai CHEN. A method for dense occlusion target recognition of service robots based on improved YOLOv7[J]. Optics and Precision Engineering, 2024, 32(10): 1595
    Fig. 1. Improved YOLOv7 structure
    Fig. 2. Depthwise over-parameterized module
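    For readers who want a concrete picture of the module in Fig. 2, the following is a minimal PyTorch-style sketch of the general depthwise over-parameterized convolution (DO-Conv) idea: an extra trainable depthwise kernel is composed with the conventional kernel, so the layer gains training-time capacity but folds back into a single k×k convolution. The class name, initialization, and composition details here are illustrative assumptions, not the paper's exact implementation.

    # Illustrative DO-Conv-style layer (assumed names and shapes, not the authors' code)
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DOConv2d(nn.Module):
        """Conventional kernel W composed with an extra depthwise kernel D.

        During training the two factors add capacity; at inference they fold
        into one k x k kernel, so the deployed cost matches a plain conv.
        """
        def __init__(self, in_ch, out_ch, k=3, stride=1, padding=1):
            super().__init__()
            self.stride, self.padding, self.k = stride, padding, k
            # Conventional kernel W: (out_ch, in_ch, k*k)
            self.W = nn.Parameter(torch.randn(out_ch, in_ch, k * k) * 0.02)
            # Depthwise factor D: one (k*k, k*k) matrix per input channel,
            # initialized to identity so the layer starts as a plain conv.
            self.D = nn.Parameter(torch.eye(k * k).repeat(in_ch, 1, 1))

        def forward(self, x):
            out_ch, in_ch, _ = self.W.shape
            # Fold D into W per input channel: (out_ch, k*k) x (k*k, k*k).
            W_eff = torch.einsum('oim,imn->oin', self.W, self.D)
            W_eff = W_eff.view(out_ch, in_ch, self.k, self.k)
            return F.conv2d(x, W_eff, stride=self.stride, padding=self.padding)

    # Example: drop-in replacement for a 3x3 conv inside a backbone block.
    x = torch.randn(1, 64, 80, 80)
    y = DOConv2d(64, 128)(x)   # -> torch.Size([1, 128, 80, 80])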
    Fig. 3. Coordinate attention module
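    As a rough illustration of the coordinate attention module in Fig. 3, the sketch below follows the published coordinate attention design (directional pooling along height and width, a shared 1×1 transform, then per-direction attention maps). The reduction ratio, activation choice, and names are assumptions rather than the authors' exact configuration.

    # Illustrative coordinate attention block (assumed hyperparameters)
    import torch
    import torch.nn as nn

    class CoordAttention(nn.Module):
        def __init__(self, channels, reduction=32):
            super().__init__()
            mid = max(8, channels // reduction)
            self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
            self.bn1 = nn.BatchNorm2d(mid)
            self.act = nn.Hardswish()
            self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
            self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

        def forward(self, x):
            n, c, h, w = x.shape
            # Encode position by pooling along each spatial direction separately.
            x_h = x.mean(dim=3, keepdim=True)                      # (n, c, h, 1)
            x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)  # (n, c, w, 1)
            y = torch.cat([x_h, x_w], dim=2)                       # (n, c, h+w, 1)
            y = self.act(self.bn1(self.conv1(y)))
            y_h, y_w = torch.split(y, [h, w], dim=2)
            a_h = torch.sigmoid(self.conv_h(y_h))                        # (n, c, h, 1)
            a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))    # (n, c, 1, w)
            return x * a_h * a_w

    # Example: re-weight a feature map before it enters a detection head.
    feat = torch.randn(1, 256, 40, 40)
    out = CoordAttention(256)(feat)   # same shape, attention-weighted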
    Fig. 4. Overall flowchart of recognition
    Fig. 5. Model recognition visualization results
    Fig. 6. Visualization of recognition results without dense occlusion
    Fig. 7. Service robot experimental platform
    Fig. 8. Comparison of actual recognition effects
    Fig. 9. Dense occlusion images from MessyTable dataset
    Split | Samples without dense occlusion | Samples with dense occlusion
    Training set | 1 188 | 2 264
    Validation set | 108 | 230
    Test set | 100 | 166
    Table 1. Dense-occlusion composition of the dataset
    Model | P/% | R/% | mAP/% | Model size/MB | FPS/(frame·s⁻¹)
    YOLOv7 | 87.9 | 82.8 | 88.8 | 72 | 31.3
    Improvement 1 | 86.5 | 82.8 | 91.1 | 74 | 29.5
    Improvement 2 | 86.5 | 88.3 | 91.4 | 72 | 30.2
    Improvement 3 | 82.1 | 85.3 | 89.2 | 48 | 38.0
    Improvements 1+2 | 93.0 | 86.0 | 92.5 | 74 | 28.6
    Improvements 1+3 | 93.9 | 86.4 | 91.6 | 50 | 36.4
    Improvements 2+3 | 90.0 | 84.0 | 91.5 | 48 | 37.0
    Im-YOLOv7 | 92.8 | 88.7 | 92.9 | 50 | 35.8
    Table 2. Results of the ablation experiments
    Test data | Method | P/% | R/% | mAP/%
    Non-occluded scenes | YOLOv7 | 95.7 | 92.4 | 95.6
    Non-occluded scenes | Im-YOLOv7 | 92.2 | 95.0 | 97.5
    Table 3. Test results on data without dense occlusion
    Model | P/% | R/% | mAP/% | Model size/MB | FPS/(frame·s⁻¹)
    DETR[5] | 80.1 | 75.4 | 85.6 | 473 | 8.9
    YOLOv5-s | 85.3 | 78.0 | 86.3 | 14 | 32.2
    Improved YOLOv5[21] | 90.2 | 76.0 | 86.7 | 36 | 28.6
    YOLOv5-l | 90.4 | 81.2 | 88.5 | 90 | 30.2
    YOLOv4-tiny-x[8] | 89.1 | 82.9 | 88.6 | 82 | 34.4
    YOLOv7 | 87.9 | 82.8 | 88.8 | 72 | 31.3
    YOLOv8-l | 92.3 | 78.4 | 89.6 | 84 | 34.1
    Im-YOLOv7 | 92.8 | 88.7 | 92.9 | 50 | 35.8
    Table 4. Comparison experiment results on the self-built dataset
    Model | P/% | R/% | mAP/% | Model size/MB
    DETR[5] | 83.4 | 74.9 | 80.6 | 473
    YOLOv5-s | 85.7 | 72.4 | 81.2 | 14
    Improved YOLOv5[21] | 86.2 | 76.6 | 82.2 | 36
    YOLOv5-l | 90.0 | 74.8 | 83.0 | 90
    YOLOv4-tiny-x[8] | 85.5 | 74.4 | 84.4 | 82
    YOLOv7 | 82.8 | 81.1 | 84.6 | 72
    YOLOv8-l | 89.6 | 71.9 | 84.1 | 84
    Im-YOLOv7 | 84.7 | 83.3 | 87.8 | 50
    Table 5. Comparison experiment results on the MessyTable dataset