• Laser & Optoelectronics Progress
  • Vol. 57, Issue 4, 041506 (2020)
Yuzhen Liu1, Jiarong Zhang1,*, and Sen Lin1,2,3
Author Affiliations
  • 1School of Electronic and Information Engineering, Liaoning Technical University, Huludao, Liaoning 125105, China
  • 2State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, Liaoning 110016, China
  • 3Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, Liaoning 110016, China
    DOI: 10.3788/LOP57.041506
    Yuzhen Liu, Jiarong Zhang, Sen Lin. Pose Estimation of Curved Objects Based on Binocular Vision and Vectors of the Tangent Plane[J]. Laser & Optoelectronics Progress, 2020, 57(4): 041506
    Fig. 1. Binocular vision measurement system model
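For reference alongside the measurement model in Fig. 1, the depth of a scene point observed by a rectified (parallel) binocular pair follows the textbook triangulation relation below, where f is the focal length, B the baseline, d = u_l - u_r the disparity, (u, v) the pixel coordinates, and (c_x, c_y) the principal point; this is the general stereo relation, not an equation quoted from the paper.

```latex
% General triangulation for a rectified stereo pair (not quoted from the paper)
Z = \frac{f B}{d}, \qquad
X = \frac{(u - c_x)\,Z}{f}, \qquad
Y = \frac{(v - c_y)\,Z}{f}, \qquad
d = u_l - u_r
```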
    Fig. 2. Conversion from depth map to point cloud. (a) Depth map; (b) point cloud
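Fig. 2 shows the depth-map-to-point-cloud conversion. A minimal sketch of this back-projection under a pinhole camera model is given below; the function name and the intrinsics fx, fy, cx, cy are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in meters) to an N x 3 point cloud using the
    pinhole model; pixels with non-positive depth are discarded."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    cloud = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return cloud[cloud[:, 2] > 0]

# Usage with hypothetical intrinsics for a 640 x 480 depth map
depth = np.full((480, 640), 1.2)
cloud = depth_to_point_cloud(depth, fx=615.0, fy=615.0, cx=320.0, cy=240.0)
```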
    Fig. 3. Schematic of rotation of 3D coordinate system. (a) Rotate around X axis; (b) rotate around Y axis; (c) rotate around Z axis
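The rotations sketched in Fig. 3 correspond to the standard elementary rotation matrices about the X, Y, and Z axes, written here with the angles ϕ, θ, ψ that appear in Tables 1-4 (right-handed convention; a general result rather than an equation copied from the paper):

```latex
R_x(\phi)=\begin{pmatrix}1&0&0\\0&\cos\phi&-\sin\phi\\0&\sin\phi&\cos\phi\end{pmatrix},\quad
R_y(\theta)=\begin{pmatrix}\cos\theta&0&\sin\theta\\0&1&0\\-\sin\theta&0&\cos\theta\end{pmatrix},\quad
R_z(\psi)=\begin{pmatrix}\cos\psi&-\sin\psi&0\\\sin\psi&\cos\psi&0\\0&0&1\end{pmatrix}
```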
    Fig. 4. Position representation of spatial point in world coordinate system
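Fig. 4 concerns expressing a point measured in the camera frame in the world coordinate system; in homogeneous coordinates this is the usual rigid-body transform, with R the 3×3 rotation matrix and t the translation vector (again a general relation, stated here for context):

```latex
\begin{pmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{pmatrix}
=
\begin{pmatrix} R & t \\ \mathbf{0}^{\mathrm{T}} & 1 \end{pmatrix}
\begin{pmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{pmatrix}
```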
    Fig. 5. Pose transformation of curved object
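The title refers to vectors of the tangent plane of the curved object; a common way to obtain a local tangent-plane basis and surface normal from a point-cloud patch is PCA of the patch covariance. The sketch below illustrates that general idea only and is an assumption, not the authors' exact procedure.

```python
import numpy as np

def tangent_plane_basis(patch):
    """Estimate two tangent-plane vectors and the surface normal of a local
    point-cloud patch (N x 3) via PCA of its covariance matrix.
    Generic sketch; not necessarily the procedure used in the paper."""
    centered = patch - patch.mean(axis=0)
    cov = centered.T @ centered / len(patch)
    eigvals, eigvecs = np.linalg.eigh(cov)               # eigenvalues ascending
    normal = eigvecs[:, 0]                               # least-variance direction
    tangent_u, tangent_v = eigvecs[:, 2], eigvecs[:, 1]  # span the tangent plane
    return tangent_u, tangent_v, normal
```

Given orthonormal bases B1 and B2 assembled from such vectors before and after the motion, one simple way to recover the rotation between the two poses is R = B2 B1^T, with the translation following from the corresponding patch centroids.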
    Fig. 6. Flow chart of the algorithm
    Fig. 7. Experimental environment
    Fig. 8. Experimental objects. (a) Object 1; (b) object 2; (c) object 3; (d) object 4
    Fig. 9. Selected target corner points
    Fig. 10. Comparison of measurement errors of our algorithm and three monocular visual pose algorithms. (a) X-axis translation error; (b) Y-axis translation error; (c) Z-axis translation error; (d) rotation error around the X-axis; (e) rotation error around the Y-axis; (f) rotation error around the Z-axis
    Fig. 11. Average error percentage of our algorithm and three monocular visual pose algorithms
    Fig. 12. Comparison of measurement errors of our algorithm and two binocular visual pose algorithms. (a) X-axis translation error; (b) Y-axis translation error; (c) Z-axis translation error; (d) rotation error around the X-axis; (e) rotation error around the Y-axis; (f) rotation error around the Z-axis
    Fig. 13. Average error percentage of our algorithm and two binocular visual pose algorithms
    Data category    | Translation /cm (tx, ty, tz) | Rotation angle /(°) (ϕ, θ, ψ)
    Ground truth     | 5.0000, 4.5000, 10.0000      | 0.0000, 10.0000, 10.0000
    Estimated value  | 6.6207, 4.0626, 11.3933      | 4.2442, 9.9221, 10.3074
    Estimation error | 1.6207, 0.4374, 1.3933       | 4.2442, 0.0779, 0.3074
    Average error    | 1.1505                       | 1.5432
    Table 1. Estimation results of object 1 pose change
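In Tables 1-4 each estimation error is the absolute difference between the estimated value and the ground truth, and the average error is the mean over the three translation components and over the three rotation components; for Table 1 this gives:

```latex
\bar{e}_t = \tfrac{1}{3}\,(1.6207 + 0.4374 + 1.3933) \approx 1.1505\ \mathrm{cm},\qquad
\bar{e}_r = \tfrac{1}{3}\,(4.2442 + 0.0779 + 0.3074) \approx 1.5432^{\circ}
```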
    Data category    | Translation /cm (tx, ty, tz) | Rotation angle /(°) (ϕ, θ, ψ)
    Ground truth     | 10.0000, 4.5000, 0.0000      | 5.0000, 0.0000, 0.0000
    Estimated value  | 11.3664, 4.8467, 0.7567      | 4.0535, 0.5914, 0.7830
    Estimation error | 1.3664, 0.3467, 0.7567       | 0.9465, 0.5914, 0.7830
    Average error    | 0.8233                       | 0.7736
    Table 2. Estimation results of object 2 pose change
    Data category    | Translation /cm (tx, ty, tz) | Rotation angle /(°) (ϕ, θ, ψ)
    Ground truth     | 5.0000, 0.0000, 10.0000      | 0.0000, 20.0000, 0.0000
    Estimated value  | 6.1803, 0.1977, 11.7300      | 4.7397, 19.5883, 1.8791
    Estimation error | 1.1803, 0.1977, 1.7300       | 4.7397, 0.4117, 1.8791
    Average error    | 1.0360                       | 2.3435
    Table 3. Estimation results of object 3 pose change
    Data category    | Translation /cm (tx, ty, tz) | Rotation angle /(°) (ϕ, θ, ψ)
    Ground truth     | 0.0000, 0.0000, 10.0000      | 5.0000, 0.0000, 0.0000
    Estimated value  | 0.4850, 0.0327, 10.0533      | 3.3628, 0.8692, 2.2322
    Estimation error | 0.4850, 0.0327, 0.0533       | 1.6372, 0.8692, 2.2322
    Average error    | 0.1903                       | 1.5795
    Table 4. Estimation results of object 4 pose change
    Object | Mean translation error /cm | Mean rotation error /(°)
    1      | 1.1505                     | 1.5432
    2      | 0.8233                     | 0.7736
    3      | 1.0360                     | 2.3435
    4      | 0.1903                     | 1.5795
    Table 5. Mean estimation error of each object
    Algorithm                      | Running time /ms (otx, oty, otz, oϕ, oθ, oψ)
    COPE                           | 11.80, 8.40, 8.20, 8.50, 8.50, 8.75
    ICP                            | 595.00, 382.20, 408.40, 536.00, 618.00, 621.00
    Increased percentage /%        | 98.02, 97.80, 97.99, 98.41, 98.62, 98.59
    Average improved efficiency /% | 98.24
    Table 6. Comparison of calculation efficiency between the COPE and ICP algorithms
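The "Increased percentage" rows in Tables 6 and 7 are the relative reduction of the per-component running time achieved by COPE with respect to the reference algorithm; for the otx column of Table 6, for example:

```latex
\frac{t_{\mathrm{ICP}} - t_{\mathrm{COPE}}}{t_{\mathrm{ICP}}} \times 100\%
= \frac{595.00 - 11.80}{595.00} \times 100\% \approx 98.02\%
```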
    Algorithm                      | Running time /ms (otx, oty, otz, oϕ, oθ, oψ)
    NDT                            | 210.00, 305.00, 429.20, 614.00, 570.00, 640.00
    Increased percentage /%        | 94.38, 97.25, 98.09, 98.62, 98.51, 98.63
    Average improved efficiency /% | 97.58
    Table 7. Comparison of calculation efficiency between the COPE and NDT algorithms