Chinese Optics Letters, Vol. 13, Issue 12, 121501 (2015)
DOI: 10.3788/COL201513.121501

Two-dimensional vision measurement approach based on local sub-plane mapping

Fuqiang Zhou*, Xinghua Chai, Tao Ye, and Xin Chen
Key Laboratory of Precision Opto-mechatronics Technology, Ministry of Education, Beihang University, Beijing 100191, China

    Abstract

    The existing two-dimensional vision measurement methods ignore lens distortion, require the measurement plane to be perpendicular to the optical axis, and demand a complex operation. To address these issues, a new approach based on local sub-plane mapping is presented. The plane calibration is performed by dividing the calibration plane into sub-planes, and there exists an approximate affine invariance between each small sub-plane and the corresponding image sub-plane. Thus, the coordinate transformation can be performed precisely, without lens distortion correction. Real comparative experiments show that the proposed approach is robust and yields a higher accuracy than the traditional methods.

    Vision measurement technology is a new technology based on machine vision[1–3], which focuses on measuring an object’s geometric size, shape, space position, attitude, etc.[4–9]. There are a variety of classifications within vision measurement, including monocular two-dimensional (2D) vision measurement[6], binocular stereo vision, and multi-vision measurement[2,4]. Monocular vision measurement employs a single camera for video measurement or photogrammetry in a 2D plane. Because this method requires only one vision sensor, it has the advantages of a simple structure and calibration, and it also avoids the small field of view and the stereo-matching problem of three-dimensional vision. Research into monocular vision measurement has been quite active in recent years[10–14].

    Plane calibration and coordinate calculation are essential in 2D vision measurement. The sub-pixel method is employed to determine the coordinates of specimen mark points or other geometric features in the calibration plane, and the coordinates can be calculated in the longitudinal and transverse directions simultaneously[15]. According to the changes of the mark point coordinates and the pinhole model[14], the measured geometric parameters can be calculated, such as the displacement, deformation, speed, etc. The view range is mainly determined by the focal length of the lens. By equipping lenses with different focal lengths, various view ranges can be obtained. The video extensometer is a typical application of 2D vision measurement for detecting the tensile deformation of materials, as shown in Fig. 1.


    Figure 1.Video extensometer principle based on 2D vision measurement.

    Usually, obvious round marks (although these are sometimes linear marks or random speckles) are made at both ends of the specimen. The marks project onto the CCD; when the specimen is deformed, the imaging on the CCD changes accordingly. The real-time image data are input to the computer. The deformation can be calculated according to the imaging model through image processing and the location of the central coordinates. The conventional geometric similarity model is expressed as follows:

        w·cosθ/q = h/p = l/f,  (1)

    where l and f are the measurement distance and lens focal length, θ is the angle between the CCD image plane and the measurement plane, and h, w and p, q are the measured intervals and their image sizes, respectively.
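    The similarity model of Eq. (1) reduces to a pair of scalings. As a minimal sketch (not part of the original paper), assuming a known pixel pitch to convert pixel intervals into metric image sizes (all names and parameter values here are illustrative):

```python
import math

def object_size(p_pix, q_pix, pixel_pitch, f, l, theta=0.0):
    """Recover object-plane intervals (h, w) from image intervals (p, q)
    via the similarity model w*cos(theta)/q = h/p = l/f of Eq. (1).
    p_pix, q_pix: image intervals in pixels; pixel_pitch: mm per pixel;
    f: focal length (mm); l: measurement distance (mm); theta: tilt (rad)."""
    p = p_pix * pixel_pitch          # metric image interval along h
    q = q_pix * pixel_pitch          # metric image interval along w
    h = p * l / f                    # from h/p = l/f
    w = q * l / f / math.cos(theta)  # from w*cos(theta)/q = l/f
    return h, w
```

    For example, with a 12 mm lens at 450 mm and a 5 μm pixel pitch, a 100-pixel image interval maps to 18.75 mm on a plane perpendicular to the axis.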

    The traditional 2D vision measurement method is based on the geometric similarity of the measurement plane and the image plane[10,15]. The specimen surface is assumed to be perpendicular to the optical axis and parallel to the image plane. According to the pinhole model[14], the object and the image satisfy a similarity relation, and the actual geometric parameters of the specimen can be calculated by multiplying the parameters extracted from the image by the actual magnification. However, because of lens distortion and the difficulty of ensuring the perpendicularity of the measurement plane to the optical axis[16,17], the image and object planes are not strictly consistent with the pinhole model, which leads to low accuracy. In cases requiring high accuracy, the distorted image must first be corrected[16,18], and then the image coordinates are extracted to calculate the actual parameters. However, this method can only measure objects in the plane perpendicular to the optical axis, which is difficult to achieve in practical operation, and it requires a time-consuming calculation, which is not suitable for real-time measurements. The high-precision standard grid method requires specimens with specific surfaces onto which a chessboard grid is printed[19]. When the specimen undergoes tension, mobility, or other changes, the grid is affected accordingly. From the grid deformation information, the geometric parameters can be calculated[20,21]. However, this requires a complicated operation, as each measurement demands that the specimen be printed with a standard grid, and it cannot measure filaments and other small materials.

    We propose a measurement approach based on local sub-plane mapping to establish a precise projection relationship between the image coordinates and real-world coordinates, which has the advantages of high robustness, the ability to measure a plane that is not perpendicular to the optical axis, almost no restrictions regarding the specimen material, and no need for distortion correction. A diagram of this method is shown in Fig. 2.


    Figure 2. Process diagram of sub-planes: (a) calibration target with 6 × 22 points fixed to coincide with the specimen surface, (b) extract and save sub-pixel coordinates of every reference point from the target image, and (c) establish the mapping model between each image sub-plane and measurement plane.

    As shown in Fig. 2(a), a calibration target is needed to define the sub-plane size and number accurately[22]; it is fixed to coincide with the specimen’s surface in the measurement plane. Then, the image is captured and processed to extract the sub-pixel coordinates of every reference point, as shown in Fig. 2(b). The grid array composed of these coordinates is saved in the computer for subsequent localization. Thus, the image plane is divided into a series of sub-planes, and the mapping model is established between each image sub-plane and the measurement plane, as shown in Fig. 2(c). The details of this mapping process are shown in Fig. 3.


    Figure 3.Schematic of coordinate location method in sub-plane.

    Thus, the image grid array can locate the measurement point according to the surrounding reference points extracted from the calibration plane. Figure 3 shows that the point P on the specimen is surrounded by ABCD on the calibration target, while its image P′ is surrounded by A′B′C′D′ in the image grid array. In a small area, there exists an approximate affine invariance, as presented in

        SΔAPC/SABCD = (SΔA′P′C′/SA′B′C′D′)/cosθ,
        SΔAPB/SABCD = SΔA′P′B′/SA′B′C′D′,  (2)

    where SΔ and S represent the areas of the triangle and the quadrilateral, respectively, calculated by Heron’s formula. We obtain the normalized distance along each axis:

        dx = 2SΔA′P′C′/SA′B′C′D′,
        dy = 2SΔA′P′B′/SA′B′C′D′.  (3)

    As long as the interval d is known, the position of the tested point P(x, y) can be determined. The normalized coordinates in the standard grid coordinate system are expressed as

        P((m + dx)d/cosθ, (n + dy)d).  (4)

    Assuming that the image coordinates of A′B′C′D′ and P′ are (xA′, yA′), (xB′, yB′), (xC′, yC′), (xD′, yD′), and (x′, y′), respectively, the transformation relationship between the image coordinates and the practical geometric coordinates P(x, y) is calculated using the following expression:

        x ≈ d(m + 2|detAA′P′C′|/(|detAA′B′C′| + |detAD′B′C′|))/cosθ,
        y ≈ d(n + 2|detAA′P′B′|/(|detAA′B′C′| + |detAD′B′C′|)),  (5)

    where m and n are the numbers of intervals on the chessboard from the origin, d is the calibration target gauge length, θ is the angle between the measurement plane and the image plane, as shown in Fig. 1, and |detA| represents the absolute value of the determinant of the area matrix A. This method avoids complex modeling, and the impact of lens distortion is suppressed. Its measurement accuracy and range depend largely on the calibration target accuracy and the sub-plane area, respectively[23,24].
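    The coordinate location of Eqs. (2)–(5) needs only a few area evaluations per point. The following Python sketch (our own illustration, not the authors’ code; it computes triangle areas from determinants, which is equivalent to Heron’s formula but avoids square roots) locates a point inside one sub-plane cell:

```python
import math

def tri_area(a, b, c):
    # |det A| / 2: area of the triangle with vertices a, b, c
    return abs((b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1])) / 2.0

def locate(A, B, C, D, P, m, n, d, theta=0.0):
    """Map image point P, surrounded by grid cell A'B'C'D' (image
    coordinates A, B, C, D), to measurement-plane coordinates per Eq. (5).
    A: cell origin, B: neighbor along x, C: neighbor along y, D: opposite
    corner; m, n: cell indices from the origin; d: target gauge length;
    theta: angle between measurement plane and image plane."""
    S_cell = tri_area(A, B, C) + tri_area(D, B, C)   # quadrilateral area
    dx = 2.0 * tri_area(A, P, C) / S_cell            # normalized x offset
    dy = 2.0 * tri_area(A, P, B) / S_cell            # normalized y offset
    x = d * (m + dx) / math.cos(theta)
    y = d * (n + dy)
    return x, y
```

    On an ideal unit cell A = (0, 0), B = (1, 0), C = (0, 1), D = (1, 1), the ratios dx and dy reduce exactly to the point’s relative position within the cell.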

    As shown in Fig. 4, because of lens distortion, the measurement plane and image plane are not strictly consistent with the pinhole model[16–18]. In order to demonstrate that the lens distortion effect is suppressed, without loss of generality, we establish the image coordinate system with its origin at the center of distortion. The undistorted coordinates of the four sub-plane vertices surrounding point P(x, y) are A(xA, yA), B(xA + L, yA), C(xA, yA + L), and D(xA + L, yA + L) for a measurement angle of θ = 0 and a sub-plane side length L. The distorted point A′(xA′, yA′) and the corresponding undistorted point A(xA, yA) are related as follows:

        xA′ = xA·RA + [2p1xAyA + p2(rA² + 2xA²)],
        yA′ = yA·RA + [2p2xAyA + p1(rA² + 2yA²)],  (6)

    where RA = 1 + k1rA² + k2rA⁴ + k3rA⁶ and rA = (xA² + yA²)^1/2; k1, k2, and k3 are the coefficients of the radial distortion, and p1 and p2 are the coefficients of the tangential distortion[25]. The other points also satisfy this distortion relationship. Thus, the areas SΔ and S shown in Fig. 4 can be calculated using the coordinates of ABCD, A′B′C′D′, P, and P′. The real transverse and longitudinal deviations between P(x, y) and P′(x′, y′) are then given by the following expressions:

        Δx = x′ − x = (2SΔA′P′C′/SA′B′C′D′ − (x − xA)/(xB − xA))d,
        Δy = y′ − y = (2SΔA′P′B′/SA′B′C′D′ − (y − yA)/(yC − yA))d.  (7)
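    A minimal sketch of the distortion model of Eq. (6), applying the radial terms (k1, k2, k3) and tangential terms (p1, p2) to an undistorted point; the coefficient values passed in are placeholders, not measured lens data:

```python
def distort(x, y, k1=0.0, k2=0.0, k3=0.0, p1=0.0, p2=0.0):
    """Map an undistorted image point (x, y), expressed in a frame
    centered on the distortion center, to its distorted position per
    Eq. (6). All coefficients default to zero (ideal pinhole)."""
    r2 = x*x + y*y                                  # rA^2
    R = 1.0 + k1*r2 + k2*r2**2 + k3*r2**3           # radial factor RA
    xd = x*R + (2.0*p1*x*y + p2*(r2 + 2.0*x*x))     # radial + tangential
    yd = y*R + (2.0*p2*x*y + p1*(r2 + 2.0*y*y))
    return xd, yd
```

    Applying this map to the cell corners and the test point yields the distorted coordinates from which the area ratios of Eq. (7) are computed in the simulation.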


    Figure 4.Effects of the lens distortion: (a) real chessboard image and image with lens distortion and (b) pixel deviations before and after image distortion.

    It is reasonable to adopt this analysis to assess the performance of the measurement method rather than that of the feature-extraction algorithm used to localize the distorted image points. The distortion coefficients of the lens are known, and the simulated measurement view range is 400 mm × 400 mm, which is larger than that of the real measurement. The computer-simulated deviation data between the undistorted and measured coordinates are presented in Fig. 5.


    Figure 5. Relationship between sub-plane interval and mapping accuracy, obtained using a lens with first distortion coefficient k1 = 4.1×10−8: (a) fixed-interval deviation distribution over the whole 400 mm × 400 mm view range and (b) relationship between the maximum deviation and the sub-plane interval.

    Figure 5(a) displays the deviation distribution over the entire view range when the sub-plane interval was fixed at 6 mm (equal to the calibration target gauge length in the real experiment). We see that the maximum deviation obtained by the local sub-plane mapping of the 2D vision measurement was 2.4 μm when the view range was 400 mm × 400 mm. Figure 5(b) shows the relationship between the maximum deviation and the sub-plane interval in the same view range. Thus, when designing the calibration target, an appropriate gauge length can be chosen according to the accuracy requirements.

    In the measurement process, the three fundamental planes are the calibration plane, the measurement plane, and the image plane. The specimen is loaded almost coincident with the calibration plane, in accordance with the mechanical constraints, to reduce the out-of-plane effects. However, since out-of-plane effects still exist, it is necessary to analyze the out-of-plane deviation, as shown in Fig. 6.


    Figure 6. Principle of the out-of-plane effects.

    According to the pinhole model, the relationship between the measurement plane and the image plane can be expressed by the following equation:

        d/f = h/l,  (8)

    where d and f are the measurement distance and the lens focal length, respectively, h is the measured distance on the object, and l is its image size. Thus, the deviation Δh caused by the out-of-plane displacement Δd can be expressed as

        Δh = h′ − h = (l × Δd)/f,  (9)

    where h′ and h are the measured and actual distances, respectively. The relative measurement accuracy can be expressed by the following equation:

        δ = Δh/h = Δd/(d − Δd).  (10)
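    Equations (8)–(10) can be checked with a few lines of arithmetic. The sketch below (an illustration under the stated pinhole assumptions; the function name is ours) evaluates the actual distance, the out-of-plane deviation, and the relative error:

```python
def out_of_plane_deviation(l, f, d, delta_d):
    """Eqs. (8)-(10): the actual interval h for an object displaced
    out of plane by delta_d, the deviation delta_h of the measured
    value, and the relative error delta_d/(d - delta_d)."""
    h = l * (d - delta_d) / f    # actual interval, Eq. (8) at d - delta_d
    delta_h = l * delta_d / f    # measurement deviation, Eq. (9)
    rel = delta_h / h            # relative error, equals Eq. (10)
    return h, delta_h, rel
```

    For instance, at d = 450 mm, an out-of-plane displacement of 4.5 mm (1% of the distance) produces a relative error of about 1.01%, independent of the image size l.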

    In actual applications, the required relative accuracy of 2D vision measurement is usually no more than 5%, which can be satisfied by designing an appropriate measurement distance, ensuring the precision of the mechanical constraints, and compensating for the specimen’s thickness.

    The proposed method was tested in real experiments and compared with conventional 2D vision measurement methods, namely the geometric similarity method and digital image correlation (DIC). As a classic microscopic measurement method for in-plane displacement and deformation, DIC has important applications in 2D vision measurement[26]. Although the local sub-plane mapping method has a large measurement range, the DIC method is too time-consuming to cover it, so the comparison between the two methods was performed over a range of approximately 1 mm. In the experiments, the calibration target was a chessboard pattern with 6 × 22 circular points distributed evenly, where the distance between adjacent points was 6 mm. The lens was a Japanese-made Pentax H1214-M with a horizontal view angle of 28.91° and a 12 mm focal length, and a Canadian-made Point Grey Flea3 camera with a 1/2.8″ CCD was used. The image resolution was 1920 pixels × 2416 pixels, of which a 640 pixel × 2416 pixel portion (one-third) was used for the experiments. The measurement distance was 45 cm, and the accuracy of sub-pixel location can usually reach 0.02 pixels[27]; thus, the accuracy of the sub-pixel location was 1.25 μm. The circular mark was approximately 6 mm in diameter, and the size of the DIC subset was 81 pixels × 81 pixels. The images for calibration and measurement were captured from a random orientation, and the position of the CCD camera was fixed throughout.

    First, the plane calibration was performed using two images of the calibration target. One image of the calibration target was captured and stored. The coordinate array of the reference grid could be extracted through image processing methods such as image filtering, edge detection, and feature extraction. Then, another image was captured and processed in the same way, and its grid coordinates could be normalized according to the reference grid array by Eq. (4). Thus, the intervals between adjacent points of the second image were calculated, and the calibration accuracy was expressed by the average, minimum, and maximum errors between the actual intervals and the gauge length of the calibration target. In the measurement process, a series of images of the specimen with circular marks were captured and processed in real time. If the marks were in the calibrated area, the real-time normalized coordinates of the specimen marks could be calculated and the position of each mark was located. The entire measurement process is shown in Fig. 7.
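    The calibration-accuracy metric described above can be sketched as follows (a hypothetical helper, not the authors’ code; it assumes the normalized grid coordinates from Eq. (4) are already available as rows of (x, y) tuples):

```python
def calibration_errors(points, d):
    """Given normalized grid coordinates `points` (list of rows, each a
    list of (x, y) tuples in the measurement plane) and the target gauge
    length d, return (min, max, average) of the absolute errors between
    adjacent horizontal intervals and d."""
    errors = []
    for row in points:
        for c in range(len(row) - 1):
            (x1, y1), (x2, y2) = row[c], row[c + 1]
            interval = ((x2 - x1)**2 + (y2 - y1)**2) ** 0.5
            errors.append(abs(interval - d))
    return min(errors), max(errors), sum(errors) / len(errors)
```

    The same statistics computed over vertical neighbors would give the Y-axis rows of Table 1.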


    Figure 7.Process of measurement experiment: (a) calibration target fixed on stretching machine, (b) image for calibration, (c) matrix of circular center, (d) displacement measurement using two methods, (e) 81 pixels × 81 pixels correlation subset, and (f) gauge length measurement.

    To practically determine the measurement accuracy, we performed two comparison tests. The first was a round-mark point-displacement test, which used a high-precision stretching machine to load the displacement; the measurement data were compared with the results measured by the DIC method and with the displacement data from the stretching machine. The second was a gauge length measurement experiment, where the interval between two adjacent round mark points was known, and the measured distance was compared with the standard length data.

    The calibration results obtained using different methods are compared in Fig. 8 and Table 1. The measurement results are compared in Tables 2 and 3. Clearly, the calibration methods greatly influence the measurement accuracy, and the local sub-plane mapping method has better accuracy and stability. Tables 2 and 3 present comparative results for the real-time measurement of the random displacement and gauge length. The real data for the random displacement were captured by the stretching machine, and those for the gauge length were obtained using the calibration target. These two comparisons indicate that the accuracy of this mapping method is better than an absolute accuracy of 3 μm over a small measurement range and a relative accuracy of 1% over a large measurement range, which is better than that of the conventional geometric similarity method and is comparable to the DIC method.

    Comparative results for calibration accuracy using the two methods.

    Figure 8.Comparative results for calibration accuracy using the two methods.

    Method                       Axis  Number  Maximum (μm)  Minimum (μm)  Average (μm)  Maximum (%)  Minimum (%)
    Local Sub-Plane Mapping      X     136     1.94          0.00          1.00          0.32         0.00
    Local Sub-Plane Mapping      Y     136     2.16          0.00          1.00          0.36         0.00
    Geometric Similarity         X     136     28.08         0.12          12.22         4.68         0.02
    Geometric Similarity         Y     136     27.62         0.15          13.07         4.60         0.03

    Table 1. Comparison of Calibration Accuracy

    Method           Displacement measurement (μm)
    Proposed Method  103    217    354    496    615    749    905    1103
    DIC Method       102    217    359    501    616    750    909    1104
    Real Data        103.7  218.2  356.1  497.8  614.3  747.7  907.1  1101.5

    Table 2. Comparison of Displacement Measurement Accuracy

    Method           Gauge length measurement (μm)
    Proposed Method  5998   11995  18008  24014  35991  59984  84017
    Real Data        6000   12000  18000  24000  36000  60000  84000

    Table 3. Comparison of Gauge Length Measurement Accuracy

    In conclusion, we propose a precise measurement approach for 2D vision measurement. Until now, the conventional geometric similarity methods based on whole-plane mapping have led to decreased accuracy because of lens distortion. Moreover, the condition that the measured object surface is vertical to the optical axis of the camera system and parallel to the image plane is unrealistic. To overcome these shortcomings, the local sub-plane mapping method is presented for the first time, with the aim of establishing a precise transformation between the image point and the real-world point in a 2D plane. Additionally, the local sub-plane mapping model and simple algorithm for coordinate location are combined to improve the measurement accuracy and efficiency. To validate the performance of the proposed method, we performed experiments using synthetic and real data and compared the results with those for the conventional method. The results from the computer simulation and real data demonstrate the advantage of the proposed approach.

    References

    [1] S. Shirmohammadi, A. Ferrero. IEEE Instrum. Meas. Mag., 17, 41(2014).

    [2] F. Zhou, Y. Wang, B. Peng, Y. Cui. Meas. J. Int. Meas. Confed., 46, 1147(2013).

    [3] Z. Y. Zhang. IEEE Trans. Pattern Anal. Mach. Intell., 22, 1330(2000).

    [4] Z. Ren, J. Liao, L. Cai. Appl. Opt., 49, 1789(2010).

    [5] R. Jachyma, K. Kwiecińskia. Weld. Int., 28, 39(2014).

    [6] Y. Yin, D. Xu, Z. Zhang, X. Wang, W. Qu. J. Elec. Meas. Instrum., 27, 347(2013).

    [7] F. Zhou, E. Zappa, L. C. Chen. Adv. Mech. Eng., 947610(2014).

    [8] B. Wu, Y. Zhang. Adv. Mech. Eng., 5, 587904(2013).

    [9] H. Jiang, H. Zhao, X. Li. Opt. Lasers Eng., 50, 1484(2012).

    [10] L. Xu, X. Li, X. Lv. Mech. Eng. Autom., 1, 73(2006).

    [11] W. Zhang, H. Bai. Phys. Test. Chem. Anal., 48, 174(2012).

    [12] J. Zhang, G. C. Jin, L. B. Meng, L. H. Jian, A. Y. Wang, S. B. Lu. J. Biomed. Opt., 10, 034021(2005).

    [13] S. Yoneyama, Y. Morimoto. JSME Int. J., 46, 178(2003).

    [14] R. Hartley, A. Zisserman. Multiple View Geometry in Computer Vision(2003).

    [15] Q. Tian, Z. Sun, Z. Le, Y. Liu, L. Zhang, S. Xie. Opt. Eng., 53, 122412(2014).

    [16] F. Zhou, Y. Cui, H. Gao, Y. Wang. Opt. Lasers Eng., 51, 1332(2013).

    [17] T. Chen, Z. Ma, P. Li, J. Nie. Control Decis., 7, 243(2012).

    [18] J. Wang, F. Shi, J. Zhang, Y. Liu. Pattern Recognit., 41, 607(2008).

    [19] Y. Men, X. Lv, X. Li. Forg. Stamp. Tec., 32, 89(2007).

    [20] G. Tzimiropoulos, V. Argyriou, T. Stathaki. IEEE Trans. Image Process., 20, 1761(2011).

    [21] R. Roncella, E. Romeo, L. Barazzetti, M. Gianinetto, M. Scaioni, 721(2012).

    [22] J. Heikkila. IEEE Trans. Pattern Anal. Mach. Intell., 22, 1066(2000).

    [23] R. R. Boye, C. L. Nelson. Proc. SPIE, 7246, 72460X(2009).

    [24] D. Gao, Y. Wang, C. Zhou, Z. Xu, 2, 238(2012).

    [25] B. Prescott, G. F. Mclean. Graph. Models, 59, 39(1997).

    [26] H. Schreier, J. J. Orteu, M. A. Sutton. Image Correlation for Shape, Motion and Deformation Measurements, 565-600(2011).

    [27] Z. Z. Wei, M. Gao, G. J. Zhang, Z. Liu. Opto-Elec. Eng., 36, 7(2009).
