• Laser & Optoelectronics Progress
  • Vol. 60, Issue 24, 2410006 (2023)
Xiaochang Fan, Yu Liang, and Wei Zhang*
Author Affiliations
  • School of Microelectronics, Tianjin University, Tianjin 300072
    DOI: 10.3788/LOP230713
    Citation: Xiaochang Fan, Yu Liang, Wei Zhang. Infrared Vehicle Detection Algorithm Based on Improved Shuffle-RetinaNet[J]. Laser & Optoelectronics Progress, 2023, 60(24): 2410006
    Fig. 1. Overall architecture of infrared vehicle detection algorithm based on improved Shuffle-RetinaNet
    Fig. 2. Structure of DBAM
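    The internal layout of DBAM is given in the paper's Fig. 2 and is not reproduced in this listing. Purely as a hypothetical sketch, a dual-branch attention module of the channel-plus-spatial kind (in the spirit of CBAM-style designs, not necessarily the paper's exact structure) can be written in PyTorch as follows:

    ```python
    import torch
    import torch.nn as nn

    class DualBranchAttention(nn.Module):
        """Hypothetical dual-branch attention: a channel branch followed by a
        spatial branch. Illustrative stand-in only, not the paper's DBAM."""

        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            # Channel branch: global average pool -> bottleneck -> sigmoid gate
            self.channel_gate = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, kernel_size=1),
                nn.Sigmoid(),
            )
            # Spatial branch: 7x7 conv over per-pixel mean/max channel statistics
            self.spatial_gate = nn.Sequential(
                nn.Conv2d(2, 1, kernel_size=7, padding=3),
                nn.Sigmoid(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = x * self.channel_gate(x)                     # re-weight channels
            mean_map = x.mean(dim=1, keepdim=True)           # spatial statistics
            max_map, _ = x.max(dim=1, keepdim=True)
            x = x * self.spatial_gate(torch.cat([mean_map, max_map], dim=1))
            return x
    ```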
    Fig. 3. Different feature network designs. (a) Conventional FPN; (b) PANet; (c) our network design
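    For reference, the conventional FPN of Fig. 3(a) is the standard top-down pathway with lateral connections; a minimal sketch over three backbone stages is shown below. The channel widths 116/232/464 assume ShuffleNetV2 1.0× stage outputs, and the paper's improved feature network of Fig. 3(c) is not reproduced here.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SimpleFPN(nn.Module):
        """Minimal conventional FPN (as in Fig. 3(a)): lateral 1x1 convs plus a
        top-down pathway. Not the paper's improved feature network."""

        def __init__(self, in_channels=(116, 232, 464), out_channels=256):
            super().__init__()
            self.laterals = nn.ModuleList(
                [nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels]
            )
            self.smooth = nn.ModuleList(
                [nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
                 for _ in in_channels]
            )

        def forward(self, feats):
            # feats: [C3, C4, C5], ordered from high to low spatial resolution
            p = [lat(f) for lat, f in zip(self.laterals, feats)]
            for i in range(len(p) - 1, 0, -1):
                # upsample the coarser map and fuse it into the finer level
                p[i - 1] = p[i - 1] + F.interpolate(
                    p[i], size=p[i - 1].shape[-2:], mode="nearest")
            return [s(level) for s, level in zip(self.smooth, p)]
    ```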
    Fig. 4. Inconsistency of classification and regression
    Fig. 5. Partial infrared vehicle images in the dataset
    Fig. 6. Comparison of detection results before and after introducing the calibration factor. (a) Before improvement; (b) after improvement
    Fig. 7. Comparison of detection performance of Shuffle-RetinaNet before and after improvement. (a) Before improvement; (b) after improvement
    | Platform and tools | Version |
    | --- | --- |
    | Operating system | Ubuntu 16.04.7 |
    | Graphics processing unit | NVIDIA Quadro RTX 8000 |
    | Graphics driver | 510.47.03 |
    | Programming language | Python 3.7 |
    | Framework | PyTorch 1.7.1 |

    Table 1. Experimental platform configuration
    | Backbone | Number of parameters /10⁶ | FLOPs /10⁹ | Speed /(frame·s⁻¹) | AP50 /% |
    | --- | --- | --- | --- | --- |
    | ResNet50 | 36.10 | 65.35 | 14.2 | 87.4 |
    | ShuffleNetV1 1.0× (g=3) | 11.91 | 27.32 | 29.8 | 84.9 |
    | ShuffleNetV2 1.0× (g=3) | 11.15 | 24.09 | 32.4 | 86.9 |
    | MobileNetV2 | 14.01 | 26.97 | 28.4 | 85.1 |
    | MobileNetV3 | 14.98 | 24.03 | 30.3 | 86.1 |

    Table 2. Comparison of lightweight backbones
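    As a rough cross-check of the backbone comparison, parameter counts of the candidate backbones can be reproduced with the torchvision reference implementations. This is a sketch only: Table 2 counts the full detector built on each (possibly modified) backbone, so absolute values differ, and ShuffleNetV1 is omitted because torchvision does not ship it.

    ```python
    from torchvision import models

    # Reference (unmodified) backbones; the detection neck and head are not included.
    backbones = {
        "ResNet50": models.resnet50(),
        "ShuffleNetV2 1.0x": models.shufflenet_v2_x1_0(),
        "MobileNetV2": models.mobilenet_v2(),
        "MobileNetV3-Large": models.mobilenet_v3_large(),
    }

    for name, net in backbones.items():
        n_params = sum(p.numel() for p in net.parameters())
        print(f"{name}: {n_params / 1e6:.2f} M parameters")
    ```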
    | Method | DBAM | Improved feature network | Calibration factor | AP50 /% | Number of parameters /10⁶ | FLOPs /10⁹ | Speed /(frame·s⁻¹) |
    | --- | --- | --- | --- | --- | --- | --- | --- |
    | RetinaNet | | | | 87.4 | 36.10 | 65.35 | 14.2 |
    | Shuffle-RetinaNet | | | | 86.9 | 11.15 | 24.09 | 32.4 |
    | Ours | ✓ | | | 89.1 | 11.44 | 24.21 | 31.7 |
    | Ours | ✓ | ✓ | | 91.1 | 11.74 | 24.35 | 31.1 |
    | Ours | ✓ | ✓ | ✓ | 92.9 | 11.74 | 24.35 | 30.9 |

    Table 3. Overall ablation experimental results
    | Method | DBAM | Improved feature network | Calibration factor | APs /% | APm /% | APl /% |
    | --- | --- | --- | --- | --- | --- | --- |
    | RetinaNet | | | | 45.2 | 59.5 | 66.5 |
    | Shuffle-RetinaNet | | | | 42.9 | 55.7 | 64.7 |
    | Ours | ✓ | | | 43.1 | 55.9 | 65.3 |
    | Ours | ✓ | ✓ | | 44.9 | 58.4 | 65.9 |
    | Ours | ✓ | ✓ | ✓ | 45.7 | 59.9 | 66.7 |

    Table 4. Ablation study results of multi-scale detection accuracy
    | Parameter value of calibration factor | AP /% | AP50 /% |
    | --- | --- | --- |
    | α=0.5, β=6 | 56.2 | 91.0 |
    | α=0.5, β=8 | 56.2 | 91.1 |
    | α=0.5, β=10 | 56.1 | 91.0 |
    | α=1.0, β=6 | 56.9 | 92.4 |
    | α=1.0, β=8 | 57.4 | 92.9 |
    | α=1.0, β=10 | 57.4 | 92.5 |
    | α=1.5, β=6 | 56.3 | 91.4 |
    | α=1.5, β=8 | 57.1 | 92.0 |
    | α=1.5, β=10 | 56.8 | 91.8 |

    Table 5. Results of calibration factor comparison
    | Parameter value of calibration factor | Self-built dataset AP50 /% | FLIR ADAS AP50 /% |
    | --- | --- | --- |
    | No calibration factor | 86.9 | 87.6 |
    | α=0.5, β=6 | 87.1 | 87.9 |
    | α=0.5, β=8 | 87.2 | 87.8 |
    | α=0.5, β=10 | 87.1 | 87.9 |
    | α=1.0, β=6 | 88.9 | 89.5 |
    | α=1.0, β=8 | 89.1 | 89.7 |
    | α=1.0, β=10 | 89.0 | 89.5 |
    | α=1.5, β=6 | 88.6 | 89.1 |
    | α=1.5, β=8 | 88.7 | 89.2 |
    | α=1.5, β=10 | 88.4 | 89.0 |

    Table 6. Results of calibration factor comparison on Shuffle-RetinaNet
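    Tables 5 and 6 sweep the calibration factor's two hyperparameters α and β; the calibration formula itself is defined in the paper body and is not reproduced in this listing. Purely as an illustrative placeholder for how such a factor is typically applied at inference time (re-weighting each detection's classification confidence with a localization-quality estimate), and explicitly not the paper's actual formula:

    ```python
    import torch

    def calibrated_score(cls_conf: torch.Tensor, loc_quality: torch.Tensor,
                         alpha: float = 1.0, beta: float = 8.0) -> torch.Tensor:
        """Illustrative calibration only (NOT the paper's formula): alpha weights
        the classification confidence, beta sharpens a localization-quality gate.
        Both inputs are per-detection scores in [0, 1]."""
        quality_gate = torch.sigmoid(beta * (loc_quality - 0.5))
        return cls_conf.pow(alpha) * quality_gate

    # Example: two detections with equal confidence but different localization quality
    print(calibrated_score(torch.tensor([0.90, 0.90]), torch.tensor([0.85, 0.40])))
    ```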
    | Method | AP50 /% | Number of parameters /10⁶ | FLOPs /10⁹ | Speed /(frame·s⁻¹) |
    | --- | --- | --- | --- | --- |
    | Faster RCNN | 88.6 | 41.12 | 67.75 | 12.4 |
    | RetinaNet | 87.4 | 36.10 | 65.35 | 14.2 |
    | YOLOv5s | 88.3 | 7.21 | 26.67 | 31.6 |
    | SSD512 | 86.2 | 36.04 | 60.85 | 18.1 |
    | Ours | 92.9 | 11.74 | 24.35 | 30.9 |

    Table 7. Comparison with classical object detection algorithms
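    The AP50 values compared above follow the standard COCO-style evaluation at an IoU threshold of 0.5; a minimal sketch with pycocotools is shown below (the annotation and result file names are placeholders, not from the paper).

    ```python
    from pycocotools.coco import COCO
    from pycocotools.cocoeval import COCOeval

    coco_gt = COCO("annotations/infrared_vehicle_val.json")   # placeholder ground-truth file
    coco_dt = coco_gt.loadRes("detections_val.json")          # placeholder detection results

    evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
    evaluator.evaluate()
    evaluator.accumulate()
    evaluator.summarize()

    # stats[0] = AP@[0.50:0.95] (the "AP" column), stats[1] = AP@0.50 (the "AP50" column)
    print(f"AP = {evaluator.stats[0]:.3f}, AP50 = {evaluator.stats[1]:.3f}")
    ```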
    | Method | AP /% | Number of parameters /10⁶ |
    | --- | --- | --- |
    | Algorithm 1 [20] | 76.57 | 20.60 |
    | Algorithm 2 [6] | 85.00 | 70.53 |
    | Algorithm 3 [21] | 91.30 | – |
    | Algorithm 4 [22] | 90.50 | 8.10 |
    | Ours | 91.70 | 11.74 |

    Table 8. Comparison with classical infrared vehicle detection algorithms