• Infrared and Laser Engineering
  • Vol. 52, Issue 3, 20220618 (2023)
Ronghua Li1, Meng Wang1, Wei Zhou2, and Jiaru Fu1
Author Affiliations
  • 1Institute of Mechanical Engineering, Dalian Jiaotong University, Dalian 116028, China
  • 2No.91550 Unit of the PLA, Dalian 116023, China
    DOI: 10.3788/IRLA20220618
    Ronghua Li, Meng Wang, Wei Zhou, Jiaru Fu. Pose estimation of flying target based on bi-modal information fusion[J]. Infrared and Laser Engineering, 2023, 52(3): 20220618

    Abstract

    Objective
    Flying-target pose estimation is a key technology for trajectory prediction and missile guidance control. Computing missile attitude in real time helps determine whether the missile hits the target, detect missile failure in time, and carry out early destruction. Advances in information and intelligent technology have raised the data-acquisition accuracy of color cameras, lidar, and other sensors, establishing a technical pipeline in which sensors acquire the data and algorithms estimate the target's position and attitude. Most existing methods can detect a target and estimate its pose effectively. For precise impact-point prediction and guidance control, however, they cannot quickly and accurately extract and estimate the position and attitude of a flying target against a complex background. Therefore, on the premise of guaranteeing real-time performance, a pose-estimation method for flying targets based on area-array lidar and bi-modal information fusion is proposed.

    Methods
    First, a coordinate-transformation model between the camera and the lidar is established to achieve pixel-level matching of the two sensors, fusing the image and the point cloud acquired at the same instant (Fig.2). Second, the ViBe (Visual Background Extractor) algorithm fused with depth information is used to extract the moving target in the image, and the corresponding point cloud is selected according to the target's bounding box in the image (Fig.5). Finally, the PnP (Perspective-n-Point) algorithm performs coarse registration on feature points (Fig.8) to obtain the initial rotation and translation matrices between point clouds, after which the ICP (Iterative Closest Point) algorithm, with nearest-neighbor search accelerated by an I-Kd Tree (Incremental K-dimensional Tree), performs fine registration to improve registration speed. Minimal, illustrative sketches of these stages are given after the abstract.

    Results and Discussions
    A simulation test and a semi-physical simulation test are used to verify the accuracy and stability of the method. The results show that the accuracy of the two-dimensional image target-detection algorithm is 97% (Tab.3) and the percentage of wrong classification is 0.0112% (Tab.3). Compared with the traditional ICP algorithm, the accuracy of the pose-estimation algorithm is improved by 53.2% (Tab.2), and the single-frame solution time is reduced from 261 ms to 132 ms (Tab.2). Compared with other algorithms, the proposed method also shows clear advantages.

    Conclusions
    An algorithm for estimating the pose of flying targets based on bi-modal information fusion is proposed; with appropriate parameters it can effectively estimate the pose of a flying target. The accuracy of the algorithm is verified by simulation tests: 50 frames of data are simulated and the average errors are computed with an initial object distance of 30 m between the target and the lidar. The simulation results show an X-axis error of 1.06 mm, a Y-axis error of 4.59 mm, a Z-axis error of 2.07 mm, a Y-axis rotation-angle error of 0.63°, a Z-axis rotation-angle error of 1.01°, and a solution time of 132 ms. The accuracy is further verified by semi-physical ground experiments: in the image target-extraction test, the precision (P) is 0.97, the recall (R) is 0.844, and the percentage of wrong classification (PWC) is 0.0112%.
The statistical average errors in the pose-estimation test are 4.9 mm on the X axis, 2.7 mm on the Y axis, 4.62 mm on the Z axis, 0.97° in Y-axis rotation angle, and 0.89° in Z-axis rotation angle. The proposed method remedies the defect that single-source data cannot describe a moving target comprehensively, and it provides an objective solution for estimating the position and attitude of flying targets. Applied to accurate impact-point prediction and guidance control for flying targets, the method has high military application value.
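
    The pixel-level matching in the first stage amounts to projecting each lidar point through the lidar-to-camera extrinsics and the camera intrinsics onto the image plane. Below is a minimal numpy sketch under a pinhole camera model; the names (K, R, t, project_lidar_to_image) are illustrative assumptions, and the paper's actual calibration model (Fig.2) is not reproduced here.

```python
import numpy as np

def project_lidar_to_image(points, K, R, t, width, height):
    """Project lidar points (N, 3) into the camera image.

    Returns pixel coordinates (M, 2) and depths (M,) for the points
    that land inside the image, giving a pixel-level match between
    the image and the point cloud.
    K: 3x3 camera intrinsics; R, t: lidar-to-camera extrinsics.
    """
    pts_cam = points @ R.T + t            # lidar frame -> camera frame
    in_front = pts_cam[:, 2] > 0          # keep points in front of the camera
    pts_cam = pts_cam[in_front]

    uvw = pts_cam @ K.T                   # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]         # perspective divide

    inside = ((uv[:, 0] >= 0) & (uv[:, 0] < width) &
              (uv[:, 1] >= 0) & (uv[:, 1] < height))
    return uv[inside], pts_cam[inside, 2]
```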
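    ViBe classifies each pixel against a per-pixel set of background samples and updates those samples conservatively. The class below is a simplified, grayscale-only reduction of ViBe for illustration; the paper's version additionally fuses lidar depth information when extracting the moving target (Fig.5), which is omitted here.

```python
import numpy as np

class SimpleViBe:
    """Simplified ViBe-style background subtractor (grayscale frames).

    A pixel is background if at least `min_matches` of its stored
    samples lie within `radius` of the current intensity.
    """
    def __init__(self, first_frame, n_samples=20, radius=20, min_matches=2):
        self.radius = radius
        self.min_matches = min_matches
        h, w = first_frame.shape
        # Initialize every sample from the first frame plus small noise.
        noise = np.random.randint(-10, 11, size=(n_samples, h, w))
        self.samples = np.clip(first_frame[None].astype(int) + noise, 0, 255)

    def apply(self, frame):
        dist = np.abs(self.samples - frame[None].astype(int))
        matches = (dist < self.radius).sum(axis=0)
        fg_mask = matches < self.min_matches     # True at moving-target pixels
        # Conservative update: refresh one random sample at background pixels.
        idx = np.random.randint(self.samples.shape[0])
        bg = ~fg_mask
        self.samples[idx][bg] = frame[bg]
        return fg_mask
```

    The bounding box of the foreground mask would then select the corresponding region of the fused point cloud for registration.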
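    Coarse registration from matched feature points can be done with OpenCV's standard PnP solver, which recovers rotation and translation from at least four 3D-2D correspondences. A sketch, assuming lens distortion has already been corrected; whether the paper uses this exact solver variant is not stated.

```python
import cv2
import numpy as np

def coarse_pose_pnp(object_pts, image_pts, K):
    """Coarse pose from matched 3D feature points and their 2D pixels.

    object_pts: (N, 3) feature points on the target model (N >= 4).
    image_pts:  (N, 2) corresponding pixel coordinates.
    Returns a 3x3 rotation matrix and (3,) translation used to
    initialize the fine (ICP) registration.
    """
    dist = np.zeros(5)  # assume distortion already corrected
    ok, rvec, tvec = cv2.solvePnP(
        object_pts.astype(np.float64), image_pts.astype(np.float64),
        K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> rotation matrix
    return R, tvec.ravel()
```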
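    Fine registration then refines the PnP pose with point-to-point ICP, whose dominant cost is the nearest-neighbor search. The sketch below uses scipy's static cKDTree as a stand-in for the paper's incremental I-Kd Tree: only the search-acceleration idea is kept, not the incremental rebalancing.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, R_init, t_init, max_iter=30, tol=1e-6):
    """Point-to-point ICP seeded with the PnP pose.

    source, target: (N, 3) and (M, 3) point clouds.
    R_init, t_init: coarse pose from PnP (3x3 and (3,)).
    """
    R, t = R_init, t_init
    tree = cKDTree(target)                 # build once: target is fixed
    prev_err = np.inf
    for _ in range(max_iter):
        moved = source @ R.T + t
        dist, idx = tree.query(moved)      # accelerated NN correspondence
        matched = target[idx]
        # Kabsch step: best rigid transform between matched point sets.
        mu_s, mu_t = moved.mean(0), matched.mean(0)
        H = (moved - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_t - R_step @ mu_s
        R, t = R_step @ R, R_step @ t + t_step   # compose with current pose
        err = dist.mean()
        if abs(prev_err - err) < tol:      # stop when error plateaus
            break
        prev_err = err
    return R, t
```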