Laser & Optoelectronics Progress, Vol. 56, Issue 19, 191506 (2019)
Rui Bai, Youchun Xu, Yongle Li*, Jiong Li, and Feng Xie
Author Affiliations: Army Military Transportation University, Tianjin 300161, China
    DOI: 10.3788/LOP56.191506
    Rui Bai, Youchun Xu, Yongle Li, Jiong Li, Feng Xie. Digital Character Recognition Technique for Intelligent Vehicles in Road Scenes[J]. Laser & Optoelectronics Progress, 2019, 56(19): 191506
    Fig. 1. Flowchart of proposed algorithm
    Fig. 2. Comparison between proposed extraction method and traditional MSER extraction method. (a) Original diagram of a parking space number; (b) traditional MSER extraction effect; (c) MSER extraction effect for S channel
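Fig. 2 contrasts MSER extraction on the usual grayscale image with MSER run on the HSV saturation (S) channel. As a minimal sketch of the channel-extraction step only — assuming the common OpenCV convention S = (max − min)/max scaled to [0, 255], and leaving out the MSER detection itself, whose parameters the source does not list — the S channel can be computed as:

```python
import numpy as np

def saturation_channel(bgr):
    """Extract the HSV saturation (S) channel from a BGR uint8 image.

    Uses the common OpenCV convention S = (max - min) / max, scaled to
    [0, 255]; MSER would then be run on this single channel instead of
    the grayscale image.
    """
    img = bgr.astype(np.float32)
    cmax = img.max(axis=2)
    cmin = img.min(axis=2)
    s = np.where(cmax > 0, (cmax - cmin) / np.maximum(cmax, 1e-6), 0.0)
    return (s * 255).astype(np.uint8)

# Example: pure red is fully saturated, a mid-gray has zero saturation
patch = np.array([[[0, 0, 255], [128, 128, 128]]], dtype=np.uint8)
s = saturation_channel(patch)  # red pixel -> 255, gray pixel -> 0
```

On low-contrast painted markings, the S channel tends to separate colored digits from gray asphalt better than intensity alone, which is the motivation the figure illustrates.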
    Fig. 3. Character edge enhancement process. (a) Character candidate regions; (b) edge extraction graph of character candidate regions; (c) edge enhancement graph of characters
    Fig. 4. Edge extraction and stroke width maps of character “6”. (a) Edge extraction of character “6”; (b) stroke width of character “6”
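Fig. 4 shows the edge map and stroke width map of a character. The stroke width transform of Epshtein et al. traces gradient rays between opposing edge pixels; as a simpler approximation (not the authors' exact method), stroke width can be estimated from a distance transform of the binary character mask:

```python
from collections import deque

def stroke_width_estimate(mask):
    """Estimate the stroke width of a binary character mask.

    Multi-source BFS from background pixels assigns each foreground
    pixel its city-block distance to the background; the stroke width
    is then roughly twice the largest such distance minus one. This is
    a distance-transform approximation of the stroke width transform.
    """
    h, w = len(mask), len(mask[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for y in range(h):
        for x in range(w):
            if mask[y][x] == 0:          # background seeds at distance 0
                dist[y][x] = 0
                q.append((y, x))
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    dmax = max(dist[y][x] for y in range(h) for x in range(w) if mask[y][x])
    return 2 * dmax - 1

# A vertical bar three pixels wide: estimated stroke width 3
bar = [[0, 1, 1, 1, 0]] * 5
print(stroke_width_estimate(bar))  # → 3
```

Near-constant stroke width across a region is the cue used to keep character candidates and reject background blobs.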
    Fig. 5. Diagram of the LeNet-5 network
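The feature map sizes in a LeNet-5 style network follow from standard convolution and pooling shape arithmetic. The walk-through below uses the classic 32×32-input LeNet-5 layout as an assumption — the paper's variant may differ, and its input-scale experiments (Table 2) suggest a 28×28 input performs best:

```python
def conv_out(n, k, stride=1, pad=0):
    """Spatial size after a k×k convolution (valid padding by default)."""
    return (n + 2 * pad - k) // stride + 1

def pool_out(n, k=2, stride=2):
    """Spatial size after k×k subsampling."""
    return (n - k) // stride + 1

# Classic LeNet-5 layer walk-through (32×32 input assumed)
n = 32
n = conv_out(n, 5)   # C1: 6 maps of 28×28
n = pool_out(n)      # S2: 6 maps of 14×14
n = conv_out(n, 5)   # C3: 16 maps of 10×10
n = pool_out(n)      # S4: 16 maps of 5×5
n = conv_out(n, 5)   # C5: 120 maps of 1×1 (acts as a fully connected layer)
print(n)  # → 1
```

The 1×1 C5 output feeds the 84-unit F6 layer and a 10-way output, one class per digit, matching the ten character classes in Table 3.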
    Fig. 6. Diagrams of characters in the dataset. (a) Diagram before dataset segmentation; (b) diagram after rough dataset segmentation
    Fig. 7. Training loss and test accuracy versus number of iterations
    Fig. 8. Experimental platform
    Fig. 9. Diagrams of character positioning results. (a) Character connected regions; (b) character location results in road scenes; (c) character connected regions; (d) character location results in road scenes
    Fig. 10. Character recognition results. (a) Recognition result of character ‘1’; (b) recognition result of character ‘4’; (c) recognition result of character ‘6’
    Parameter    Area        Eccentricity    Solidity    Ratio
    Threshold    [75, 600]   [0.1, 0.995]    [0, 0.4]    [0.3, 7]
    Table 1. Geometric constraint filter parameters
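The Table 1 thresholds amount to an interval filter over candidate-region properties. In the sketch below, the dict layout and property names are assumptions for illustration; only the intervals come from the table (the solidity interval [0, 0.4] is transcribed as printed, although solidity values near 1 are more typical for solid glyphs):

```python
def passes_geometric_filter(region):
    """Keep a character candidate region only if every property of the
    region falls inside its Table 1 threshold interval.

    `region` maps property names (assumed here) to measured values.
    """
    bounds = {
        "area":         (75, 600),
        "eccentricity": (0.1, 0.995),
        "solidity":     (0, 0.4),
        "ratio":        (0.3, 7),
    }
    return all(lo <= region[k] <= hi for k, (lo, hi) in bounds.items())

# Hypothetical measurements: a digit-like region passes, a tiny speck fails
digit_like = {"area": 200, "eccentricity": 0.9, "solidity": 0.3, "ratio": 0.5}
speck      = {"area": 12,  "eccentricity": 0.5, "solidity": 0.2, "ratio": 1.0}
print(passes_geometric_filter(digit_like), passes_geometric_filter(speck))  # → True False
```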
    Input scale /(pixel×pixel)   18×18   24×24   28×28   36×36   48×48
    Drop value /%                8.3     1.8     0.6     2.4     2.9
    Table 2. Experimental results of recognition rate drop-out values under different input scales
    Character   0      1      2      3      4      5      6      7      8      9
    Number      1811   1786   1837   1826   1778   1839   1745   1805   1798   1781
    Table 3. Number of characters in the dataset
    Method         C     E    T    H      R      F
    Neumann[21]    399   70   83   0.85   0.82   0.83
    Epshtein[24]   398   66   84   0.86   0.82   0.84
    Lee[7]         404   63   78   0.86   0.83   0.84
    Zhang[16]      411   54   71   0.88   0.85   0.86
    Chen[13]       417   76   65   0.84   0.86   0.85
    Sung[5]        420   60   62   0.84   0.87   0.87
    Huang[9]       422   51   60   0.89   0.89   0.88
    Ours           438   50   44   0.89   0.90   0.89
    Table 4. Character location performance comparison of different algorithms
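If the Table 4 count columns are read as C = correct detections, E = false detections, and T = missed targets (an interpretation, not stated in the source), then H, R, and F follow as precision, recall, and F-measure. The sketch below reproduces values close to the "Ours" row (0.90/0.91/0.90 versus the printed 0.89/0.90/0.89), which supports but does not confirm this reading:

```python
def prf(c, e, t):
    """Precision, recall, F-measure from detection counts.

    Assumes (an interpretation of the table, not stated there) that
    C = correct detections, E = false detections, T = missed targets,
    so H = C/(C+E), R = C/(C+T), F = 2HR/(H+R).
    """
    h = c / (c + e)
    r = c / (c + t)
    f = 2 * h * r / (h + r)
    return round(h, 2), round(r, 2), round(f, 2)

# "Ours" row of Table 4: counts 438, 50, 44
print(prf(438, 50, 44))  # → (0.9, 0.91, 0.9)
```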
    Method              N     G /%
    KNN                 241   80.1
    HOG+SVM             237   79.4
    BP Neural Network   230   76.8
    Ours                265   88.6
    Table 5. Comparison of character recognition effects of different algorithms