• Acta Photonica Sinica
  • Vol. 50, Issue 11, 1112003 (2021)
Yongying DONG1, Gaopeng ZHANG2,*, Sansan CHANG2, Zhi ZHANG2, and Yanjie LI3
Author Affiliations
  • 1School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
  • 2Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, China
  • 3School of Mechatronic Engineering, Xi'an Technological University, Xi'an 710021, China
    DOI: 10.3788/gzxb20215011.1112003
    Yongying DONG, Gaopeng ZHANG, Sansan CHANG, Zhi ZHANG, Yanjie LI. A Pose Measurement Algorithm of Space Target Based on Monocular Vision and Accuracy Analysis[J]. Acta Photonica Sinica, 2021, 50(11): 1112003

    Abstract

    Aiming at the low efficiency of the traditional orthogonal iterative algorithm, an improved orthogonal iterative algorithm is proposed to measure the pose of space objects based on monocular vision. Firstly, building on the traditional orthogonal iterative algorithm, the translation vector is eliminated from the iterative process, and the initial value of the rotation matrix is solved by using the parallel perspective model instead of the weak perspective projection model, so as to accelerate the solution process of the orthogonal iterative algorithm. Secondly, simulation experiments are used to study the effects of the extraction accuracy of the imaging points, the accuracy of the three-dimensional coordinates of the space feature points, the calibration accuracy of the camera principal point, the calibration accuracy of the camera focal length and the number of space feature points on the accuracy and efficiency of the algorithm. Based on the results of the simulation experiments, the Taguchi method is used to quantitatively analyze the influence of each factor on the accuracy of the algorithm and to identify the factor with the greatest influence on the accuracy of the improved orthogonal iteration algorithm. Finally, the performance of the proposed improved orthogonal iteration algorithm is tested by physical experiments. The physical experiments prove that the proposed method is accurate and takes a shorter run time than the traditional orthogonal iteration algorithm. Based on the results of the orthogonal experiments, the accuracy of the improved orthogonal iteration algorithm can meet the demands of different space tasks by controlling the corresponding influence factors.

    Introduction

    Measuring the relative posture and position (pose) of close-range spacecraft is the key premise for various space tasks such as space rendezvous and docking and on-orbit maintenance. With several disadvantages, such as large size and mass and expensive, complex systems, the commonly used global positioning systems, radar detection systems and laser scanners are not suitable for some special space tasks. In contrast to the above methods, computer vision has been widely used in various important space missions owing to its excellent performance, namely low cost and small size and mass[1-2]. Compared with binocular and multi-camera vision pose measurement methods, monocular vision-based pose measurement methods have advantages such as a simple system, a concise camera calibration procedure, a wide measurement field of view, low cost and good real-time performance. By using the relationships of several space feature points, the pose of a space target can be estimated. This feature point-based pose estimation method, also known as the Perspective-n-Point (PnP) problem, was first proposed by Fischler[3] in 1981.

    Many scholars have done a great deal of work on the PnP problem, and some representative conclusions are as follows. When the number of space feature points n<3, the PnP problem has no definite solution. If n=3 and the plane determined by the three feature points does not pass through the optical center of the camera, there are at most four solutions. Under the condition that n=4 and the four feature points are coplanar, there is only one solution; if the four feature points are not coplanar, there are multiple solutions. For the P5P problem, there are at most two solutions. Once the number of space feature points n>5, the PnP problem can be transformed into the classical Direct Linear Transformation (DLT) problem, and the pose of the target can be estimated linearly[4-6].

    To sum up, the solution methods for the PnP problem can be divided into two categories: non-iterative methods and iterative methods. The DLT method is one of the most classical non-iterative methods, which obtains the pose of the target by using at least four coplanar feature points or six non-coplanar feature points. However, the solution process of the DLT method cannot guarantee the orthogonality of the rotation matrix, so the robustness of the DLT method is poor[7]. Iterative methods generally take the reprojection error on the image plane as the objective function and then iteratively solve the PnP problem by using nonlinear optimization algorithms such as the Gauss-Newton method and the Levenberg-Marquardt method. Iterative methods generally show good noise immunity and high accuracy. However, they show poor real-time performance, and their accuracy strongly depends on accurate initial values. To overcome these shortcomings, DEMENTHON D et al.[8] proposed the Pose from Orthography and Scaling with Iterations (POSIT) algorithm. The POSIT algorithm obtains an initial estimate of the pose by using the weak perspective projection model and then iteratively approximates the perspective projection model to obtain the final pose estimate. The POSIT algorithm is a two-step method with high accuracy; however, the rotation matrix it produces is not the optimal rotation matrix. To solve this problem, LU C et al.[9] proposed the Orthogonal Iteration (OI) algorithm. The OI algorithm directly obtains an orthogonal rotation matrix by using the target collinearity error as the objective function. The OI algorithm not only has high precision but also shows good robustness and excellent global convergence, making it one of the most widely used monocular vision pose estimation algorithms. However, since the OI algorithm is an iterative method, it still has low computational efficiency.

    Besides, several factors, such as the extraction accuracy of the imaging points, the calibration accuracy of the camera intrinsic parameters and the number of space feature points, affect the accuracy of the above monocular vision pose estimation algorithms. Moreover, the influence of these factors on the final pose measurement results is not a simple linear superposition. Most studies have only focused on the influence of a single factor on the accuracy of the pose estimation algorithm. In order to comprehensively and quantitatively analyze the accuracy of the pose estimation algorithm, the Taguchi method is introduced in this paper. The Taguchi method was founded by Dr. TAGUCHI G, a well-known quality management expert in Japan[10]. It quantitatively describes the influence of various factors on product quality by adjusting design parameters. The Taguchi method is based on orthogonal experiments; through statistical analysis of the experimental scheme, the optimal combination of levels for each parameter is found, so that the influence of each factor on the accuracy of the pose measurement can be studied quantitatively[11-12].

    In this paper, aiming to improve the computational efficiency of the OI algorithm, an improved orthogonal iteration (IOI) algorithm is proposed by eliminating the translation vector from the iterative process. Then, the accuracy of the IOI algorithm is quantitatively analyzed based on simulation experiments and the Taguchi method, and the primary factor affecting the accuracy of the IOI algorithm is clarified. Finally, the performance of the IOI algorithm is verified by physical experiments. Compared with the traditional OI algorithm, the computational efficiency of the IOI algorithm is significantly improved, and its accuracy is slightly higher. In addition, based on the quantitative accuracy analysis, the accuracy of the IOI algorithm can meet the requirements of different space tasks by controlling the primary factor affecting each pose parameter.

    Improved orthogonal iteration algorithm

    Principles of traditional OI algorithm

    The principle of the traditional OI algorithm is shown in Fig. 1. Suppose there are n space feature points in the target coordinate system, the coordinates of the i-th space feature point in the target coordinate system are P_i = [x_i, y_i, z_i]^T, and the homogeneous coordinates of the corresponding normalized imaging point are p_i = [u_i, v_i, 1]^T. The line-of-sight projection matrix of the image point is defined as follows


    Figure 1.Schematic diagram of the principle of orthogonal iterative algorithm

    W_i = (p_i p_i^T) / (p_i^T p_i)    (1)

    where W_i is the line-of-sight projection matrix, and the line of sight refers to the ray from the optical center to the imaging point. The basic principle of the traditional OI algorithm is that the space point should coincide with the projection of the corresponding image point onto the line of sight, and the collinearity equation is defined as follows

    R P_i + t = W_i (R P_i + t)    (2)

    where R and t stand for the camera pose in the target coordinate system: R is the rotation matrix and t is the translation vector. Based on Eq. (2), the collinearity error can be defined as follows

    Err(R, t) = Σ_{i=1}^{n} ||(I - W_i)(R P_i + t)||^2    (3)

    where I stands for the identity matrix. Theoretically, the space point should coincide with the projection of the corresponding image point onto the line of sight, and the collinearity error Err(R, t) should be zero. In reality, several inevitable errors (algorithm error, system error, computing error) make it impossible for the collinearity error Err(R, t) to be zero. The optimization goal of the traditional OI algorithm is therefore to find the global minimum of the collinearity error. The optimization function is defined as follows

    min_{R, t} Err(R, t) = Σ_{i=1}^{n} ||(I - W_i)(R P_i + t)||^2    (4)

    For Eq. (4), if the rotation matrix R is known, the optimal solution of the translation vector is as follows

    t(R) = (1/n) (I - (1/n) Σ_{i=1}^{n} W_i)^{-1} Σ_{i=1}^{n} (W_i - I) R P_i    (5)

    For such a univariate minimization problem, the optimal value of t can be obtained by the iterative method. Since the traditional OI algorithm is globally convergent, the initial value of the rotation matrix R_0 can be estimated arbitrarily. Then the Singular Value Decomposition (SVD) method[13-14] is used to update the rotation matrix and the translation vector iteratively. When the preset termination condition is met, the optimal solution of the rotation matrix and the translation vector is obtained.
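    The alternating scheme above can be sketched in a few lines. The following is an illustrative reimplementation under the paper's notation, not the authors' code; numpy, the helper names and the synthetic data are all assumptions:

```python
import numpy as np

def los_projection(p):
    """Line-of-sight projection matrix W_i = p p^T / (p^T p), Eq. (1)."""
    p = np.asarray(p, dtype=float).reshape(3, 1)
    return (p @ p.T) / float(p.T @ p)

def optimal_t(R, P, W):
    """Closed-form translation for a given rotation R, Eq. (5)."""
    n = len(P)
    mean_W = sum(W) / n
    S = sum((Wi - np.eye(3)) @ (R @ Pi) for Wi, Pi in zip(W, P))
    return np.linalg.inv(np.eye(3) - mean_W) @ S / n

def oi_pose(P, p_img, R0=np.eye(3), iters=100):
    """Traditional OI: alternate the closed-form t(R) with an SVD update of R."""
    W = [los_projection(pi) for pi in p_img]
    R = R0
    for _ in range(iters):
        t = optimal_t(R, P, W)
        # project each transformed point onto its line of sight
        q = [Wi @ (R @ Pi + t) for Wi, Pi in zip(W, P)]
        # absolute-orientation update of R via SVD of the cross-covariance
        Pc = sum(P) / len(P)
        qc = sum(q) / len(q)
        M = sum(np.outer(qi - qc, Pi - Pc) for qi, Pi in zip(q, P))
        U, _, Vt = np.linalg.svd(M)
        R = U @ np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))]) @ Vt
    return R, optimal_t(R, P, W)
```

    On noiseless synthetic data this recovers the true pose from an arbitrary initial rotation, which mirrors the global-convergence property described above.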

    IOI algorithm

    Since the traditional OI algorithm is globally convergent, the initial value of the rotation matrix R_0 can be given arbitrarily. However, the larger the deviation between the initial value and the optimal value of the rotation matrix, the more time the algorithm takes. In the traditional OI algorithm, the weak perspective projection model is used to initialize the rotation matrix. However, the space feature points used in the traditional OI algorithm are generally selected randomly, and their image points may concentrate in a small edge region of the image under certain poses. In this case, the deviation between the initial value and the optimal value of the rotation matrix is quite large, which leads to slow convergence of the algorithm or even to a wrong solution. Therefore, in order to improve the overall robustness and efficiency of the traditional OI algorithm, in this paper we use the parallel perspective model instead of the weak perspective model to initialize the rotation matrix.

    In the weak perspective projection model, a space feature point is projected orthogonally onto a plane that is parallel to the image plane and passes through the object center. In this process, the position information of the space feature point is lost. If the space feature point is far from the optical axis, the error introduced by the weak perspective model is significant[15].

    Assume that the homogeneous coordinates of the space feature point P_i are P_i = (X_i, Y_i, Z_i, 1)^T, and the homogeneous coordinates of the corresponding image point p_i are p_i = (x_i, y_i, 1)^T. The homogeneous coordinates of the centroid of the space feature point set are P̄ = (X_0, Y_0, Z_0, 1)^T, and the homogeneous coordinates of the image point of the centroid are p̄ = (x_0, y_0, 1)^T. In the parallel perspective model, a space feature point is also projected onto a plane that is parallel to the image plane and passes through the object center. However, unlike in the weak perspective projection model, the projection line is not parallel to the optical axis but parallel to the line between the centroid of the space feature points and the camera optical center. The parallel perspective model can be expressed as follows

    x = (1/Z_0)(X - (X_0/Z_0) Z + X_0),    y = (1/Z_0)(Y - (Y_0/Z_0) Z + Y_0)    (6)

    The initial value of the rotation matrix R_0 is as follows

    R_0 = [(Z_0 a + x_0 r_3)^T; (Z_0 b + y_0 r_3)^T; r_3^T],    r_3 = [I - Z_0 y_0 (a)_anti + Z_0 x_0 (b)_anti]^{-1} Z_0^2 (a × b)    (7)

    where (a)_anti and (b)_anti denote the antisymmetric matrices corresponding to the vectors a and b, respectively. a and b are three-dimensional column vectors, which can be determined by the following equations

    (a^T, x_0)^T = (Σ_{i=1}^{n} P_i P_i^T)^{-1} Σ_{i=1}^{n} x_i P_i,    (b^T, y_0)^T = (Σ_{i=1}^{n} P_i P_i^T)^{-1} Σ_{i=1}^{n} y_i P_i    (8)

    Based on Eqs. (7) and (8), the initial value of the rotation matrix under the parallel perspective model can be obtained. Then, following the traditional OI algorithm, the optimal solution of the rotation matrix and the translation vector can be obtained.
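    Eq. (8) is simply the normal-equation solution of a linear fit of each image coordinate against the homogeneous space coordinates, so the vectors a and b can be recovered with a standard linear solve. A minimal sketch (the function name and numpy usage are assumptions, not the authors' code):

```python
import numpy as np

def fit_parallel_perspective_row(P, coords):
    """Solve (a^T, c0)^T = (sum P_i P_i^T)^{-1} sum coord_i P_i, i.e. Eq. (8).

    P: (n, 3) space points; coords: (n,) image x- or y-coordinates.
    Returns the 3-vector a (or b) and the scalar x_0 (or y_0).
    """
    Ph = np.hstack([P, np.ones((len(P), 1))])   # homogeneous 4-vectors P_i
    M = Ph.T @ Ph                               # sum P_i P_i^T
    rhs = Ph.T @ np.asarray(coords, float)      # sum coord_i P_i
    sol = np.linalg.solve(M, rhs)
    return sol[:3], sol[3]
```

    In other words, (a, x_0) is the least-squares affine model mapping space points to the x image coordinate, and (b, y_0) the corresponding model for y.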

    In addition, the traditional OI algorithm computes R and t separately in each iteration. In fact, the translation vector t can be computed linearly after each update of the rotation matrix R. Essentially, each iteration is an iteration of the rotation matrix: the translation vector is only an intermediate value in every iteration except the last one, which needs to output the final result. Therefore, the solution of the translation vector t can be eliminated from the intermediate iterations, and the computational load of the OI algorithm can be further reduced.
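    One way to see why this is cheap: in Eq. (5) the factor (1/n)(I - (1/n)ΣW_i)^{-1} depends only on the image points, not on R, so it can be precomputed once and t recovered in a single pass after R has converged. A hedged sketch of this idea (helper names are hypothetical):

```python
import numpy as np

def make_t_solver(W):
    """Precompute the R-independent part of Eq. (5).

    Since the W_i depend only on the image points, the matrix
    F = (1/n) * inv(I - (1/n) * sum W_i) and the cached (W_i - I)
    are constant over the iterations; t only needs one evaluation
    once R has converged.
    """
    n = len(W)
    F = np.linalg.inv(np.eye(3) - sum(W) / n) / n
    C = [Wi - np.eye(3) for Wi in W]
    def t_of(R, P):
        return F @ sum(Ci @ (R @ Pi) for Ci, Pi in zip(C, P))
    return t_of
```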

    Accuracy analysis based on simulation experiment

    In general, the accuracy of the IOI algorithm is mainly affected by the extraction accuracy of the imaging points, the accuracy of the three-dimensional (3D) coordinates of the space feature points, the calibration accuracy of the camera principal point, the calibration accuracy of the camera focal length and the number of space feature points. In this Section, the influence of the above factors on the accuracy of the IOI algorithm is investigated by simulation experiments.

    The simulation experiments use a virtual camera with a resolution of 2 000×1 500. The pixel coordinates of the camera principal point are assumed to be [u_0, v_0] = [1 000, 750] and the normalized focal length is f_x = f_y = 1 500. The four vertices of a 400 mm×200 mm rectangle are selected as four virtual space feature points. The translation vector between the target coordinate system and the camera coordinate system is t = [200 mm, 200 mm, 1 000 mm], and the four virtual feature points are imaged at randomly generated poses. The results of the simulation experiments are the mean square values of the results of 100 experiments.
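    For concreteness, the virtual-camera setup above can be reproduced as follows. This sketch assumes an identity rotation purely for illustration, and the function name is hypothetical:

```python
import numpy as np

# virtual camera: 2000x1500 pixels, principal point (1000, 750), f = 1500
K = np.array([[1500.0,    0.0, 1000.0],
              [   0.0, 1500.0,  750.0],
              [   0.0,    0.0,    1.0]])

# four vertices of a 400 mm x 200 mm rectangle, centered at the target origin
P = np.array([[-200.0, -100.0, 0.0],
              [ 200.0, -100.0, 0.0],
              [ 200.0,  100.0, 0.0],
              [-200.0,  100.0, 0.0]])

t = np.array([200.0, 200.0, 1000.0])   # translation, in mm

def project(P, R, t, K):
    """Pinhole projection of target points into pixel coordinates."""
    X = (R @ P.T).T + t                # camera-frame coordinates
    uv = (K @ (X / X[:, 2:3]).T).T     # perspective divide, then intrinsics
    return uv[:, :2]

pix = project(P, np.eye(3), t, K)
```

    With the identity rotation the four vertices land well inside the 2 000×1 500 image, so the configuration is a valid starting pose for the noise experiments that follow.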

    Influence of the extraction accuracy of imaging point

    The extraction accuracy of the imaging points is highly related to the accuracy of the IOI algorithm. For current feature point extraction algorithms, the extraction accuracy is ideally 0.01 pixels. In general, the extraction accuracy of lines, cross wires and ring marks is better than 0.1 pixels. In real close-range space tasks, the accuracy of feature point extraction algorithms is within 1 pixel[16-18]. Therefore, in order to test the influence of the extraction accuracy of the imaging points on the accuracy of the IOI algorithm, random white noise with an amplitude of 0~1 pixel is added to the coordinates of the imaging points in this Section. Fig. 2 and Table 1 show the results of the simulation experiments. ΔA_x, ΔA_y and ΔA_z denote the absolute values of the posture error between the posture calculated by the IOI algorithm and the real posture, and ΔP_x, ΔP_y and ΔP_z denote the absolute values of the position error between the position calculated by the IOI algorithm and the real position, respectively. The measurement error of the IOI algorithm increases with the added noise. When the added noise is 1 pixel, the maximum posture error is less than 80″, and the maximum position error is less than 0.52 mm.


    Figure 2.Influence of the extraction accuracy of imaging point on the accuracy of the IOI algorithm

    Table 1. (table content not available)

    In addition, since only four virtual space feature points are used in the above simulation experiments, different numbers of feature points are randomly selected within a 400 mm×400 mm×400 mm region of the target coordinate system in order to study the influence of the number of feature points on the performance of the proposed IOI algorithm. The results are compared with those of the traditional OI algorithm[9], with the extraction accuracy of the imaging points set to 0.1 pixels.

    Fig. 3 shows that the errors of both the IOI algorithm and the OI algorithm decrease as the number of feature points increases. When the number of feature points increases from four to eight, the error of the IOI algorithm decreases significantly. When the number of feature points is more than eight, the error decreases only slightly, and when it is more than ten, the error is almost unchanged. Since the IOI algorithm uses the parallel perspective model instead of the weak perspective model to initialize the rotation matrix, its error is slightly smaller than that of the OI algorithm when the number of feature points is less than eight. However, since the principle of the IOI algorithm is almost the same as that of the OI algorithm, the errors of the two algorithms are almost the same when the number of feature points is more than eight.


    Figure 3.Influence of the number of feature points on the accuracy of the IOI algorithm

    Increasing the number of feature points leads to higher accuracy, especially when the number of feature points is less than eight. However, more feature points generally lead to lower computational efficiency and longer running time. Therefore, the number of feature points and the accuracy and efficiency of the IOI algorithm must be considered together. Fig. 4 shows the running efficiency of the IOI algorithm and the OI algorithm. The running times of both algorithms increase with the number of feature points. However, the computational efficiency of the IOI algorithm is greatly improved because it does not need to calculate the translation vector in each iteration; specifically, the IOI algorithm needs fewer iterations and a shorter single-run time than the OI algorithm. For measuring the relative pose of close-range spacecraft, on the one hand, it is impossible to provide abundant feature points in space, and on the other hand, the accuracy of pose measurement based on four feature points is high enough. Thus, four feature points are suggested as sufficient for the proposed IOI algorithm.


    Figure 4.Comparison of the algorithm operation efficiency

    Influence of the accuracy of 3D coordinate of the feature point

    Since the depth information of the target cannot be recovered directly by a single camera, several space feature points are taken as prior knowledge for the monocular vision pose measurement algorithm. In fact, there are inevitably errors between the 3D coordinates of the feature points and their real values, which eventually affect the pose measurement results. In this Section, an error with an amplitude of 0.01~1.0 mm is added to the 3D coordinates of the feature points, and the influence of the accuracy of the 3D coordinates on the IOI algorithm is studied. As shown in Fig. 5, the error of the IOI algorithm increases with the added error. When the added error is 1.0 mm, the maximum posture error is about 130″, and the maximum position error is about 0.9 mm.


    Figure 5.Influence of the accuracy of 3D coordinate of the feature point on the accuracy of the IOI algorithm

    It should be noted that the results in Fig. 5 are based on simulations with four space feature points, and it is theoretically possible to reduce the error of the IOI algorithm by increasing the number of space feature points. The influence of the number of feature points on the performance of the IOI algorithm is therefore studied by simulation experiments, in which the accuracy of the 3D coordinates of the feature points is set to 0.1 mm. The results of the simulation experiments are shown in Fig. 6 and Fig. 7.


    Figure 6.Influence of the number of feature points on the accuracy of the IOI algorithm


    Figure 7.Comparison of the algorithm operation efficiency

    Since the parallel perspective model is used to calculate the initial value of the rotation matrix, the error of the proposed IOI algorithm is slightly smaller than that of the OI algorithm when the number of feature points is less than eight. Since the principle of the IOI algorithm is otherwise the same as that of the OI algorithm, the errors of the two algorithms are almost the same when the number of feature points is more than eight. The running efficiency of the IOI algorithm is significantly better than that of the OI algorithm. However, the operational efficiency of the IOI algorithm inevitably decreases as the number of feature points increases. Considering the accuracy and efficiency of the algorithm together, four feature points are suggested as sufficient for the proposed IOI algorithm.

    Influence of the calibration accuracy of camera focal length

    The result of the IOI algorithm is influenced by the intrinsic parameters of the camera (focal length, coordinates of the principal point and distortion coefficients), and the calibration accuracy of the camera focal length and of the principal point coordinates has a significant influence on the performance of the IOI algorithm. In this Section, simulation experiments are conducted to study the influence of the calibration accuracy of the camera focal length on the IOI algorithm. With the commonly used focal length calibration algorithms, the relative calibration error of the focal length is generally within 5%[19-22]. Therefore, in the simulation experiments, the relative error of the focal length was varied from 0.1%~5% to study its influence on the IOI algorithm.

    It can be seen from Fig. 8 that when the relative error of the camera focal length is 0.1%, the maximum posture error is about 20″, which indicates that the error of the focal length has little effect on the posture error of the IOI algorithm. The position error in the direction along the optical axis (z-axis) is about 13 mm, and the position errors in the other two directions are less than 1 mm, which means that the calibration error of the camera focal length mainly affects the position error in the direction along the optical axis.


    Figure 8.Influence of the calibration accuracy of the camera focal length on the accuracy of the IOI algorithm

    The above simulation experiments in this Section were conducted with four space feature points. In addition, the influence of the calibration accuracy of the camera focal length on the accuracy and run time of the IOI algorithm was tested with different numbers of space feature points. The results show that increasing the number of space feature points improves the accuracy of the IOI algorithm but reduces its computational efficiency. Since the results are similar to those in Section 2.2 and Section 2.3, they are not repeated in this Section.

    Influence of calibration accuracy of the camera principal point

    In this Section, simulation experiments are conducted to study the influence of the calibration accuracy of the camera principal point on the IOI algorithm. With the commonly used camera calibration algorithms, the relative calibration error of the principal point coordinates is generally within 5%[23-25]. Therefore, in the simulation experiments, the relative error of the principal point coordinates was varied from 0.1%~5% to study its influence on the IOI algorithm.

    As shown in Fig. 9, when the relative calibration error of the principal point coordinates is 5%, the maximum posture error is about 190″, and the maximum position error is about 12 mm. It should be noted that the position error in the direction along the optical axis (z-axis) is about 1 mm, while the position errors in the other two directions are almost ten times larger, which means that the calibration error of the principal point mainly affects the position errors in the x-axis and y-axis directions. In addition, the above simulation results are based on four space feature points. If the number of space feature points increases, the accuracy of the IOI algorithm improves at the cost of lower computational efficiency.


    Figure 9.Influence of the calibration accuracy of the camera principal point on the accuracy of the IOI algorithm

    Quantitative accuracy analysis of the IOI algorithm based on Taguchi method

    As mentioned in Section 2, the accuracy of the IOI algorithm is mainly affected by the extraction accuracy of the imaging points, the accuracy of the 3D coordinates of the feature points, the calibration accuracy of the camera principal point, the calibration accuracy of the camera focal length and the number of space feature points. Since the influence of these factors on the accuracy of the IOI algorithm is not a simple linear superposition, it is necessary to quantitatively analyze the influence of each factor and to identify the factors with the greatest influence on the results of the IOI algorithm. Based on this quantitative analysis, the IOI algorithm can be further optimized to meet the requirements of different space tasks.

    In this Section, the influences of the extraction accuracy of the imaging points, the accuracy of the 3D coordinates of the feature points, the calibration accuracy of the camera focal length, the calibration accuracy of the camera principal point and the number of space feature points on the accuracy of the IOI algorithm are quantitatively analyzed based on the Taguchi method and the results of the simulation experiments.

    Factors,levels,and orthogonal array

    For pose measurement tasks, a smaller pose measurement error is always preferred. Therefore, in the quantitative analysis based on the Taguchi method, the errors of the IOI algorithm (ΔA_x, ΔA_y, ΔA_z, ΔP_x, ΔP_y, ΔP_z) are used as the objective functions. Based on the results of the simulation experiments, the influences of the extraction accuracy of the imaging points, the accuracy of the 3D coordinates of the feature points, the calibration accuracy of the camera focal length, the calibration accuracy of the camera principal point and the number of space feature points on the accuracy of the IOI algorithm are quantitatively analyzed. Considering the variation ranges of the above five factors, the levels of each factor are shown in Table 2.

    Table 2. (table content not available)

    As shown in Table 2, three typical levels were selected for each factor. If these factors were tested exhaustively, there would be up to 3^5 = 243 different combinations. The Taguchi method provides a simple and effective way to organize the undetermined factors. As shown in Table 3, an orthogonal array L18(3^5) was established based on the Taguchi method, and only 18 simulation experiments are needed for the quantitative analysis.

    Table 3. (table content not available)
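    The saving over a full factorial design is easy to check: five three-level factors give 3^5 = 243 combinations to enumerate, while the orthogonal array keeps only 18 rows. A trivial sketch:

```python
from itertools import product

factors = 5   # factors A..E from Table 2
levels = 3    # three levels each
full_factorial = list(product(range(levels), repeat=factors))
print(len(full_factorial))   # 243 combinations, versus 18 rows in the L18 array
```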

    Signal-to-noise ratio analysis

    Taguchi described a method that utilizes the Signal-to-Noise Ratio (SNR) in orthogonal experiment design for quality engineering[26]. The SNR helps researchers determine which levels of the control factors are more efficient. In this paper, 18 orthogonal experiments were established based on the results of the simulation experiments, and the results of the 18 simulation experiments are shown in Table 4.

    Table 4. (table content not available)

    Based on the results of the 18 simulation experiments, the SNR of the error of each pose parameter in each direction was calculated. The following is an example of the process for calculating the SNR of ΔP_z. The SNR of ΔP_z (SNR_ΔPz) can be derived from the following equation

    SNR_ΔPz = -10 log10[(1/n) Σ_{t=1}^{n} (1/ΔP_z^2)]    (9)

    where n is the number of repetitions of each simulation experiment. Since the results of the simulation experiments are the mean square values of the results of 100 experiments, n is set to one in this study. Based on Eq. (9), the SNR_ΔPz of the 18 simulation experiments was obtained, and the results are shown in the last column of Table 5. The SNRs of ΔA_x, ΔA_y, ΔA_z, ΔP_x and ΔP_y can be obtained similarly, and the results are also shown in Table 5.
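    Eq. (9) can be implemented directly; note that with n = 1 the expression collapses to 20·log10(ΔP_z). A minimal sketch (the function name and the sample values are illustrative assumptions):

```python
import math

def taguchi_snr(errors):
    """Eq. (9): SNR = -10 * log10( (1/n) * sum over t of 1/y_t^2 )."""
    n = len(errors)
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in errors) / n)

# with a single repetition (n = 1) this reduces to 20 * log10(y)
print(taguchi_snr([10.0]))   # 20.0
```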

    Table 5. (table content not available)

    Then, the range of SNR_ΔPz was calculated. As shown in Table 6, the first number in row T1 (69.34) is the sum of the six SNR values corresponding to the six simulation experiments in which factor A is at level one. All the numbers in rows T1 to T3 are calculated in the same way. The rank of the SNR refers to the difference between the maximum and minimum values in T1-T3. The larger the rank, the greater the influence of the factor on the accuracy of the IOI algorithm. The rank (R) and contribution ratio (σ) in Table 6 are respectively defined by the following equations[26-27]

    R_i = max(T_j) - min(T_j), j = 1, 2, 3    (10)
    σ_i = R_i / Σ_k R_k × 100%    (11)
    Table 6. (table content not available)

    The contribution ratio stands for the influence of each factor on ΔP_z (the position error in the direction along the optical axis). Table 6 shows that factor C (the calibration accuracy of the camera focal length) has the greatest influence on ΔP_z. Therefore, for pose measurement tasks that require high position accuracy in the direction along the optical axis, the calibration accuracy of the camera focal length should be high.
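    The range analysis can be sketched as follows. The six-run two-factor table and the normalization of each rank by the sum of ranks are illustrative assumptions, not the paper's data:

```python
def level_sums(levels, snr):
    """Sum the SNR values of the runs at each level of one factor (rows T1..T3)."""
    sums = {}
    for lv, s in zip(levels, snr):
        sums[lv] = sums.get(lv, 0.0) + s
    return sums

def rank(levels, snr):
    """Rank R = max(T_j) - min(T_j) over the level sums, as in the range analysis."""
    T = level_sums(levels, snr).values()
    return max(T) - min(T)

# hypothetical 6-run design with two factors
snr = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
factor_A = [0, 0, 1, 1, 2, 2]
factor_B = [0, 1, 0, 1, 0, 1]
ranks = [rank(factor_A, snr), rank(factor_B, snr)]
contrib = [r / sum(ranks) for r in ranks]   # contribution ratio of each factor
```

    In this toy example factor A has level sums (3, 7, 11), so its rank is 8, while factor B has level sums (9, 12) and rank 3; factor A therefore dominates the contribution ratios.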

    Using the same method, the SNRs of ΔA_x, ΔA_y, ΔA_z, ΔP_x and ΔP_y, and the influence of the five factors on them, were quantitatively analyzed based on the Taguchi method and the results of the simulation experiments. Since the process of the quantitative analysis is the same, it is not repeated here, and only the results are shown in Table 7.


    As shown in Table 7, the calibration accuracy of the camera principal point has the greatest influence on the attitude errors, and the number of feature points and the extraction accuracy of the imaging points also influence the attitude errors significantly. Therefore, for pose measurement tasks that require high attitude accuracy, high calibration accuracy of the camera principal point, more feature points, and high extraction accuracy of the imaging points are crucially important. The position error along the optical axis (ΔPz) is mainly affected by the calibration accuracy of the camera focal length; the position errors in the other two directions are mainly influenced by the calibration accuracy of the camera principal point.

    Physical experiments

    Verification experiment for the accuracy of the IOI algorithm

    In this section, the accuracy of the IOI algorithm is studied by physical experiments. The experimental system is shown in Fig. 10. The space camera is mounted on a three-axis turntable, which simulates the relative rotation between the camera and the target around three axes. The Tiangong Ⅱ satellite model is mounted on a one-dimensional displacement table. The real pose between the camera and the satellite model is obtained by the Inertial Navigation System (INS). Since the accuracy of the INS is much higher than that of the monocular vision measurement system, the relative pose between the camera and the satellite model obtained by the INS is taken as the real pose. The four corners of the solar panel are used as the feature points, and the distance between the camera and the satellite model is about 1 m. The variation of the relative pose between the satellite model and the camera is used to evaluate the accuracy of the IOI algorithm proposed in this study.


    Figure 10.Experimental system diagram

    The process of the experiment is as follows. Firstly, the turntable rotates around each of the three axes by preset angles; similarly, the one-dimensional displacement table moves by preset translations, and images of the satellite model at the different poses are obtained. Then, the relative pose between the satellite model and the camera at each position is calculated by the IOI algorithm. Finally, the variation of the relative pose is obtained and compared with its real value obtained by the INS.

    Firstly, an image of the satellite model was taken at a random pose and recorded as the 0th image. Then, the camera was rotated around the x-axis by the preset angles, and 20 images of the satellite model were taken. Fig. 11 shows some of the images taken by the camera. The real relative pose between the satellite model and the camera at each shooting time was obtained by the INS. The increment of the rotation angle between adjacent poses was set to 0.25° for the first 10 images and to 0.5° for the last 10 images. Similarly, the camera was rotated around the other two axes. The relative pose between the satellite model and the camera at each attitude was calculated by the IOI algorithm, and the variation of the relative attitude was obtained. The result is shown in Fig. 12(a), in which the horizontal coordinate stands for the image number and the vertical coordinate stands for the variation of the relative pose between adjacent images.


    Figure 11.Part of the images


    Figure 12.Experiment results

    Next, the accuracy of the position calculated by the IOI algorithm was studied in a similar way. An image of the satellite model was taken at an arbitrary position and recorded as the 0th image. The satellite model was then moved along the one-dimensional displacement table (the x-axis of the experimental system) while 20 images of the satellite model were taken, and the real relative position between the satellite model and the camera at each shooting time was obtained by the INS. The increment of the translation between adjacent positions was 5 mm for the first 10 images and 10 mm for the last 10 images, and the relative position between the satellite model and the camera at each position was calculated by the IOI algorithm. By adjusting the orientation of the satellite model on the displacement table, displacement of the satellite model in the other two directions was simulated, and the variation of the relative position in those directions was obtained in the same way. The result is shown in Fig. 12(b), in which the horizontal coordinate stands for the image number and the vertical coordinate stands for the variation of the relative position between adjacent images.

    Table 8 shows the detailed experimental results and the root mean square errors of the pose calculated by the IOI algorithm. The maximum root mean square error of the attitude angle is 18.96″ (5.266 7×10⁻³ °), and the maximum root mean square error of the position is 0.059 mm, the root mean square error of the position along the optical axis being the largest. In general, the accuracy of the IOI algorithm can meet the requirements of most space pose measurement tasks.
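    The root-mean-square error and the arcsecond-to-degree conversion used above are straightforward to check; the following is a minimal sketch (the sample errors passed to `rmse` are hypothetical, not the experimental data).

    ```python
    import math

    def rmse(errors):
        """Root mean square error over a sequence of pose errors."""
        return math.sqrt(sum(e * e for e in errors) / len(errors))

    # Check of the unit conversion reported above: 18.96 arcseconds in degrees.
    deg = 18.96 / 3600.0        # 1 degree = 3600 arcseconds, about 5.2667e-3
    ```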


    Verification experiment for the run time of the IOI algorithm

    In this section, the run time of the IOI algorithm is investigated by a physical experiment. The four corner points of the solar panel of the satellite model are selected as the space feature points. The red dots in Fig. 13 are the extracted image points of the feature points, and the blue crosses are the reprojections of the feature points, computed with the pose estimated by the IOI algorithm. As shown in Fig. 13, the reprojected points overlap well with the extracted image points.
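    A reprojection check of this kind can be sketched with a standard pinhole camera model. The pose, intrinsics, and panel-corner coordinates below are hypothetical illustration values, not the paper's calibration data; the "extracted" points are simulated by perturbing the projections with noise.

    ```python
    import numpy as np

    def reproject(points_3d, R, t, fx, fy, cx, cy):
        """Project target-frame 3D points into the image with a pinhole model."""
        cam = (R @ points_3d.T).T + t            # target frame -> camera frame
        u = fx * cam[:, 0] / cam[:, 2] + cx
        v = fy * cam[:, 1] / cam[:, 2] + cy
        return np.stack([u, v], axis=1)

    # Hypothetical pose, intrinsics and panel corners (illustration only):
    R = np.eye(3)
    t = np.array([0.0, 0.0, 1.0])                # about 1 m along the optical axis
    corners = np.array([[-0.1, -0.1, 0.0], [0.1, -0.1, 0.0],
                        [0.1, 0.1, 0.0], [-0.1, 0.1, 0.0]])   # metres
    proj = reproject(corners, R, t, fx=2000.0, fy=2000.0, cx=512.0, cy=512.0)

    # Residuals against the extracted image points (simulated here with noise):
    extracted = proj + np.random.default_rng(1).normal(0.0, 0.1, proj.shape)
    residuals = np.linalg.norm(proj - extracted, axis=1)
    ```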


    Figure 13.Feature points and the reprojection result

    Table 9 compares the different algorithms. The reprojection residuals refer to the standard deviations of the reprojection residuals over all four feature points. For the IOI algorithm, the parallel perspective model is used instead of the weak perspective model to initialize the rotation matrix, so the reprojection residuals of the IOI algorithm are smaller than those of the OI algorithm; moreover, the run time of the IOI algorithm is much shorter than that of the traditional OI algorithm. In addition, the performance of the IOI algorithm is compared with ZHOU Run's improved OI algorithm [28]. In ZHOU Run's algorithm, the weighted collinearity errors are taken as the objective function: in each iteration, the weight coefficients are determined according to the reprojection errors in the image, and the pose estimate is optimized with these coefficients. As shown in Table 9, the reprojection residuals of ZHOU Run's algorithm are slightly smaller than those of the IOI algorithm, but its run time is much longer than that of the IOI algorithm.


    It should be noted that the run time in Table 9 refers only to the time spent solving the pose parameters, excluding the time spent obtaining the coordinates of the space feature points in the target coordinate system, image transmission, image processing, extraction of the image points, and so on. In real space tasks, it generally takes longer to obtain the pose information of the target.
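    A run-time comparison of this kind can be measured by averaging wall-clock time over repeated calls to the solver alone, excluding image I/O and feature extraction as described above. The sketch below uses a trivial stand-in for a pose solver; the function and repeat count are illustrative assumptions.

    ```python
    import time

    def average_run_time(fn, repeats=100):
        """Average wall-clock time of one call to fn over `repeats` calls."""
        start = time.perf_counter()
        for _ in range(repeats):
            fn()
        return (time.perf_counter() - start) / repeats

    # Example with a stand-in for a pose solver:
    avg = average_run_time(lambda: sum(range(1000)))
    ```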

    Conclusion

    Based on the traditional OI algorithm, an IOI algorithm is proposed. Firstly, the parallel perspective model is used instead of the weak perspective model to initialize the rotation matrix, and the traditional OI algorithm is accelerated by eliminating the translation vector in the iterative process. Then, simulation experiments are conducted to study the influence of the extraction accuracy of the imaging points, the accuracy of the 3D coordinates of the feature points, the calibration accuracy of the camera principal point, the calibration accuracy of the camera focal length and the number of feature points on the accuracy of the IOI algorithm. Based on the results of the simulation experiments, the Taguchi method is used to quantitatively analyze the influence of each factor on the accuracy of the IOI algorithm and to find the factors that have the greatest influence on its results. Finally, the performance of the proposed IOI algorithm is tested by physical experiments. The results of the simulation and physical experiments show that the IOI algorithm has high accuracy and a much shorter run time than the traditional OI algorithm. Since the proposed IOI algorithm has the advantages of a simple system, low cost, high accuracy and short run time, it is of practical significance for various space tasks such as space rendezvous and docking and on-orbit maintenance.

    References

    [1] R W BEARD, J LAWTON, F Y HADAEGH. A coordination architecture for spacecraft formation control. IEEE Transactions on Control Systems Technology, 9, 777-790(2001).

    [2] N PHILIP, M ANANTHASAYANAM. Relative position and attitude estimation and control schemes for the final phase of an autonomous docking mission of spacecraft. Acta Astronautica, 52, 511-522(2003).

    [3] M FISCHLER, R BOLLES. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24, 381-395(1981).

    [4] Dabao LAO, Huijuan ZHANG, Zhi XIONG et al. Automatic measurement method of attitude based on monocular vision. Acta Photonica Sinica, 48, 0315001(2019).

    [5] Limin ZHANG, Feng ZHU, Yingming HAO et al. Pose measurement based on a circle and a non-coplanar feature point. Acta Photonica Sinica, 44, 1112002(2019).

    [6] Ju HU, Jiashan CUI, Weixing WANG. Error analysis of monocular visual position measurement based on coplanar feature points. Acta Photonica Sinica, 43, 0512003(2014).

    [7] Y ABDEL, H KARARA. Direct linear transformation into object space coordinates in close-range photogrammetry, 1-18(1971).

    [8] D DEMENTHON, L DAVIS. Model-based object pose in 25 lines of code. International Journal of Computer Vision, 15, 123-141(1995).

    [9] C LU, G HAGER, E MJOLSNESS. Fast and globally convergent pose estimation from video images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22, 610-622(2000).

    [10] G TAGUCHI, A ELSAYED, CH THOMAS. Quality engineering in production systems(1989).

    [11] Wenbing WANG, Zhongning GUO, Zhaoqin YU et al. Sodium alginate solution transferred by laser induced forward transfer. Acta Photonica Sinica, 48, 0114002(2019).

    [12] G ZHANG, H ZHAO, Y CHEN et al. Optimization thermal design method for space cameras based on thermo-optical analysis and Taguchi method. Optical Engineering, 59, 075101(2020).

    [13] T HUANG, R ZHAO, L BI et al. Neural embedding singular value decomposition for collaborative filtering. IEEE Transactions on Neural Networks and Learning Systems.

    [14] H LI, T LIU, X WU et al. A bearing fault diagnosis method based on enhanced singular value decomposition. IEEE Transactions on Industrial Informatics, 17, 3220-3230(2021).

    [15] Guangjun ZHANG. Machine vision(2005).

    [16] A OLIVEIRA, B FERREIRA, N CRUZ. A performance analysis of feature extraction algorithms for acoustic image-based underwater navigation. Journal of Marine Science and Engineering, 9, 361(2021).

    [17] J HAFEEZ, J LEE, S KWON et al. Evaluating feature extraction methods with synthetic noise patterns for image-based modelling of texture-less objects. Remote Sensing, 12, 3886(2020).

    [18] Z HAO, X WANG, S ZHENG. Recognition of basketball players' action detection based on visual image and harris corner extraction algorithm. Journal of Intelligent & Fuzzy Systems, 40, 7589-7599(2021).

    [19] G XU, F CHEN, X LI et al. Closed-loop solution method of active vision reconstruction via a 3D reference and an external camera. Applied Optics, 58, 8092-8100(2019).

    [20] G ZHANG, H ZHAO, H YANG et al. Robust and flexible method for calibrating the focal length of on-orbit space zoom camera. Applied Optics, 58, 1467-1474(2019).

    [21] G ZHANG, H ZHAO, G ZHANG et al. Improved genetic algorithm for intrinsic parameters estimation of on-orbit space cameras. Optics Communications, 475, 126235(2020).

    [22] Chao ZHANG, Huamin YANG, Chen HAN et al. Multi-camera calibration based on vanishing point constraint. Acta Photonica Sinica, 45, 0521004(2016).

    [23] Guohui WANG, Kemao QIAN. Review on line-scan camera calibration methods. Acta Optica Sinica, 40, 0111011(2020).

    [24] Pengpeng ZOU, Zili ZHANG, Ping WANG et al. Binocular camera calibration based on collinear vector and plane homography. Acta Optica Sinica, 37, 1115006(2017).

    [25] Junpeng XUE, Xianyu SU. Camera calibration with single image based on two orthogonal one-dimensional objects. Acta Optica Sinica, 32, 0115001(2012).

    [26] Qiang XIAO, Gang CHEN. Effect of cluster magnetorheological finishing parameters on subsurface damage depth. Acta Photonica Sinica, 47, 0124001(2018).

    [27] Xinyi ZHENG, Zhenyuan JIA, Xiaotao REN et al. Analysis on effect of discharge parameters on cylindricity of small holes by orthogonal experiments. Optics and Precision Engineering, 18, 426-433(2010).

    [28] Run ZHOU, Zhengyu ZHANG, Xuhui HUANG. Weighted orthogonal iteration algorithm for camera pose estimation. Acta Optica Sinica, 38, 0515002(2018).
