• Acta Optica Sinica
  • Vol. 41, Issue 17, 1712001 (2021)
Haihua Cui1,*, Tao Jiang1, Kunpeng Du2, Ronghui Guo1, and An′an Zhao2
Author Affiliations
  • 1College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu 210016, China
  • 2AVIC Xi′an Aircraft Industry Group Co., Ltd., Xi′an, Shaanxi 710089, China
    DOI: 10.3788/AOS202141.1712001
    Haihua Cui, Tao Jiang, Kunpeng Du, Ronghui Guo, An′an Zhao. 3D Imaging Method for Multi-View Structured Light Measurement Via Deep Learning Pose Estimation[J]. Acta Optica Sinica, 2021, 41(17): 1712001

    Abstract

    Multi-view structured light measurement uses a structured light system to capture the measured object from multiple angles and obtain a complete representation of its surface. The splicing of the measurement data from these viewpoints therefore determines the completeness of the reconstruction. This paper proposes a new method that estimates the measurement pose with deep learning and directly aligns the multi-view data. The structured light measurement model combines the four-step phase-shifting method with the multi-frequency heterodyne method to achieve high-precision single-view three-dimensional reconstruction. For pose estimation, the You Only Look Once (YOLO) network identifies the 3D bounding-box corners of the measured object, and the perspective-n-point (PnP) algorithm estimates the target pose. Because the coordinate systems of the measurement system and the pose estimation are both unified to the monocular camera, data from multiple viewpoints can be spliced directly using the estimated pose. Feature descriptors of adjacent point clouds are then established, and the iterative closest point (ICP) algorithm refines the stitching to high precision. Experimental results show that the proposed method effectively realizes multi-view structured light data splicing: the translation accuracy of pose estimation is better than 3 mm, the rotation accuracy is better than 1°, and the average deviation of the stitched point clouds is 0.02 mm, an accuracy comparable to that of the marker-point method. The proposed method is suitable for multi-view structured light measurement with single-shot pose estimation and improves the registration efficiency of multi-view measured data.
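Two computational steps named in the abstract can be sketched compactly: recovering the wrapped phase from four phase-shifted fringe images, and solving the least-squares rigid transform between two matched point sets, which is the core update inside each ICP iteration. The following is a minimal numpy sketch under standard assumptions (phase shifts of 0, π/2, π, 3π/2; point correspondences already given); it is an illustration, not the authors' implementation, which additionally uses multi-frequency heterodyne unwrapping and YOLO/PnP pose initialization.

```python
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    """Wrapped phase from four fringe images with phase shifts 0, pi/2, pi, 3pi/2.

    With I_k = A + B*cos(phi + delta_k):  I4 - I2 = 2B*sin(phi),
    I1 - I3 = 2B*cos(phi), so phi = atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(I4 - I2, I1 - I3)

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t,
    for matched (N, 3) point sets (Kabsch/SVD solution; the per-iteration
    alignment step inside point-to-point ICP)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)      # centroids
    H = (src - cs).T @ (dst - cd)                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                         # reject reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

A full ICP loop would alternate this solve with a nearest-neighbor correspondence search until the alignment error converges; the multi-frequency heterodyne step would then unwrap the phase map returned by `wrapped_phase` into absolute phase before triangulation.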