• Chinese Optics Letters
  • Vol. 20, Issue 6, 061101 (2022)

Single-event-camera-based 3D trajectory measurement method for high-speed moving targets

Zeren Gao, Yong Su, and Qingchuan Zhang*

Author Affiliations
  • Key Laboratory of Mechanical Behavior and Design of Materials of Chinese Academy of Sciences, University of Science and Technology of China, Hefei 230027, China

    DOI: 10.3788/COL202220.061101

    Citation: Zeren Gao, Yong Su, Qingchuan Zhang. Single-event-camera-based 3D trajectory measurement method for high-speed moving targets[J]. Chinese Optics Letters, 2022, 20(6): 061101

    Abstract

    High-speed target three-dimensional (3D) trajectory and velocity measurement methods have important uses in many fields, including the trajectory tracking of explosive debris and rotating specimens. The conventional approach uses a binocular system with two high-speed cameras to capture the target’s 3D motion information; its hardware cost is high, and accurately triggering several high-speed cameras is difficult. Event-based cameras have recently received considerable attention due to their advantages in dynamic range, temporal resolution, and power consumption. To address the problems of camera synchronization difficulties, data redundancy, and motion blur in high-speed target 3D trajectory measurement, this Letter proposes a 3D trajectory measurement method based on a single event camera and a four-mirror adaptor. The 3D trajectories and velocities of a particle in flight and of a marker on a rotating disc were measured with the proposed method, and the results show that the proposed method can monitor the operational state of high-speed flying and rotating objects at a very low hardware cost.

    1. Introduction

    Vision-based three-dimensional (3D) trajectory and velocity measurements of high-speed moving targets play an important role in structural health monitoring[1], robotics[2,3], object tracking[4,5], experimental mechanics[6,7], and other fields. Since trajectory and velocity calculations require finding corresponding features between adjacent frames of a continuous image sequence, their accuracy depends heavily on the frame rate of the camera[8]. High-speed cameras have disadvantages such as high transmission-bandwidth requirements, high cost, and short recording times. Because frame-based cameras record the movement of an object over an exposure interval, an exposure time that is too long relative to the target’s speed produces motion blur, which must be reduced by using a very short exposure time and a flash synchronized with the camera[1,6,7]. For the measurement of high-speed rotating objects, researchers have relied on optical de-rotators to eliminate the motion blur caused by high-speed rotation[9], but this approach imposes stringent hardware requirements: the rotational axis of the optical de-rotator must be precisely coaxial with that of the rotating object, and their rotational speeds must be precisely matched[10,11]. When 3D information about the object needs to be measured, multiple high-speed cameras are required to form stereo vision[12,13], and time synchronization between them is a challenging task. Wang et al. used a galvanometer scanning system to generate a reference signal to correct the camera start-up time difference[13]. Yu et al. used a single high-speed camera combined with reflectors or prisms to form stereo vision[14,15], which avoids the problem of temporal synchronization between multiple cameras.

    Conventional cameras use frame-based imaging sensors such as complementary metal oxide semiconductor (CMOS) and charge-coupled device (CCD) sensors. A frame-based sensor exposes all pixels either at the same moment (global shutter) or over a short interval (rolling shutter) to produce a frame of the entire field of view. However, frame-based imaging data are very inefficient and redundant in target tracking tasks, where we are only interested in the pixels where the moving target is located. The event camera is an emerging vision sensor that generates ‘events’ by asynchronously detecting the change of illumination intensity at each pixel: whenever the intensity change at a pixel exceeds a set threshold, the camera outputs that pixel’s coordinates and a time stamp. The output has low redundancy because data are generated only at the pixel locations where the illumination intensity changes (a toy sketch of this event-generation principle follows Fig. 1). The advent of event cameras dates back to 1992, when Mahowald[16] proposed a new type of vision sensor called the ‘Silicon Retina.’ In the following decades, more and more commercial event cameras have been developed[17]. Among them, three types are widely accepted and used: the dynamic vision sensor (DVS), the asynchronous time-based image sensor (ATIS), and the dynamic and active pixel vision sensor (DAVIS). The DVS is an event-only camera, whereas the ATIS and DAVIS can output events as well as gray-scale information. Figure 1 compares the imaging of a moving object by an event camera and by a frame-based camera. Compared with the data recorded by the frame-based camera, the data recorded by the event camera have better continuity along the time axis and less redundancy. These characteristics make event cameras highly advantageous for target tracking, and they have gained considerable attention in recent years. Saner et al. used one event camera for two-dimensional (2D) tracking of high-speed targets[18], and Zhou et al. utilized two event cameras to form stereo vision for 3D reconstruction of objects[19], although the multi-event-camera approach still suffers from the temporal synchronization problem.

    Figure 1. Free fall of a small ball photographed by two different types of cameras.
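
    To make the event-generation principle above concrete, the following minimal Python sketch emits an event whenever the log-intensity change at a pixel exceeds a contrast threshold, in the spirit of Fig. 1. It is a simplified toy model of a DVS-style pixel driven by synthetic frames, not the CeleX5-MP’s actual circuit; the function name, threshold value, and event tuple layout (t, x, y, polarity) are illustrative assumptions.

```python
import numpy as np

def generate_events(frames, timestamps, threshold=0.2):
    """Toy DVS pixel model: compare each pixel's log intensity against a
    per-pixel reference and emit (t, x, y, polarity) whenever the change
    exceeds the contrast threshold. Simplified for illustration only."""
    events = []
    ref = np.log(frames[0].astype(np.float64) + 1.0)  # reference log image
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_i = np.log(frame.astype(np.float64) + 1.0)
        diff = log_i - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for x, y in zip(xs, ys):
            events.append((t, int(x), int(y), 1 if diff[y, x] > 0 else -1))
            ref[y, x] = log_i[y, x]  # reset reference only where events fired
    return events
```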

    To address the problems of camera synchronization difficulties, data redundancy, and motion blur in high-speed target 3D trajectory measurement, in this work we propose a single-event-camera stereo vision method using a four-mirror adaptor. We used the proposed method to measure the 3D trajectory and velocity of a particle in flight, and the results show that the method is well suited to this task. In addition, we applied the proposed method to the measurement of rotating objects, and the results show that it can monitor the operational state of high-speed rotating objects at a very low hardware cost. The proposed method provides a new approach for vibration measurement of rotating objects and external monitoring of rotational angles[3].

    2. Principle of Single-Event-Camera Stereo Vision

    The event camera used in this Letter is the CeleX5-MP (1280 × 800 pixels) from CelePixel Inc., which offers several modes, including gray-scale, event, and optical-flow modes. The gray-scale mode behaves like a conventional camera, which allows us to calibrate the imaging parameters using conventional calibration methods. A single-view image provides only 2D information about the object’s motion and requires the object’s trajectory to be perpendicular to the camera’s optical axis, which greatly limits the application range of event cameras. To obtain the 3D trajectory of an object, multi-view measurements are required. Accurate temporal synchronization between multiple event cameras is very difficult, so we used a four-mirror-based monocular stereo vision approach[20]. As shown in Fig. 2, four mirrors are mounted in front of the lens to divide the image into two parts; the left and right halves of the sensor observe the scene from different viewing angles and thus form stereo vision (a sketch of this view splitting follows Fig. 2).

    Figure 2. Four-mirror-based monocular stereo vision. (a) Event camera and four-mirror adaptor. (b) Light paths for four-mirror-based monocular stereo vision.
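
    Because the two virtual views share one sensor, the recorded event stream can be separated into the two views by pixel column. Below is a minimal sketch assuming a vertical split at the sensor midline; the exact split position depends on the adaptor alignment, and the event tuple layout is an assumption carried over from the sketch above.

```python
def split_virtual_views(events, sensor_width=1280):
    """Split (t, x, y, p) events from the four-mirror setup into left/right
    virtual camera streams; right-view x is re-referenced to its own origin."""
    half = sensor_width // 2
    left = [(t, x, y, p) for (t, x, y, p) in events if x < half]
    right = [(t, x - half, y, p) for (t, x, y, p) in events if x >= half]
    return left, right
```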

    Before the 3D reconstruction[21–23], the stereo vision system needs to be calibrated. The process of obtaining the imaging parameters and the relative position relationship between the two views of stereo vision is called stereo calibration. We use the pinhole imaging model to describe the projection of a point in the world coordinate system onto the camera target surface, expressed in homogeneous coordinates as

$$\alpha \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & f_s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R_{11} & R_{12} & R_{13} & T_x \\ R_{21} & R_{22} & R_{23} & T_y \\ R_{31} & R_{32} & R_{33} & T_z \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}, \tag{1}$$

where $(c_x, c_y)$ is the principal point, usually the image center; $(f_x, f_y)$ represents the image distance expressed in pixels; $f_s$ accounts for the non-perpendicularity of the camera sensor array; $\alpha$ is a nonzero scale factor; $\mathbf{R}|\mathbf{T}$ is the transformation between the world and principal point coordinate systems; $(X_w, Y_w, Z_w)^T$ are the coordinates of a point in the world coordinate system, and $(x_i, y_i)^T$ are the pixel coordinates of its projection. We define the transformation from the left to the right virtual camera principal point coordinate system as $\mathbf{R}^h|\mathbf{T}^h$ [Fig. 3(a)]. We used the plane model calibration method to obtain $\mathbf{R}^h|\mathbf{T}^h$, which requires the monocular stereo vision system to take a set of calibration board images [Fig. 3(b)] (a sketch of this calibration step follows Fig. 3). The world coordinate system $(X_w, Y_w, Z_w)^T$ was defined to coincide with the principal point coordinate system of the left view of the monocular stereo vision system. Therefore, $\mathbf{R}$ becomes an identity matrix and $\mathbf{T}$ a zero vector for the left virtual camera in Eq. (1), and $\mathbf{R}|\mathbf{T}$ for the right virtual camera is the transformation $\mathbf{R}^h|\mathbf{T}^h$ from the principal point coordinate system of the left virtual camera to that of the right virtual camera. Thus, Eq. (1) can be simplified to

$$\begin{cases} x_i^l = c_x^l + f_x^l \dfrac{X_w}{Z_w} + f_s^l \dfrac{Y_w}{Z_w} \\[6pt] y_i^l = c_y^l + f_y^l \dfrac{Y_w}{Z_w} \\[6pt] x_i^r = c_x^r + \dfrac{f_x^r \left(R_{11}^h X_w + R_{12}^h Y_w + R_{13}^h Z_w + t_x^h\right)}{R_{31}^h X_w + R_{32}^h Y_w + R_{33}^h Z_w + t_z^h} + \dfrac{f_s^r \left(R_{21}^h X_w + R_{22}^h Y_w + R_{23}^h Z_w + t_y^h\right)}{R_{31}^h X_w + R_{32}^h Y_w + R_{33}^h Z_w + t_z^h} \\[6pt] y_i^r = c_y^r + \dfrac{f_y^r \left(R_{21}^h X_w + R_{22}^h Y_w + R_{23}^h Z_w + t_y^h\right)}{R_{31}^h X_w + R_{32}^h Y_w + R_{33}^h Z_w + t_z^h} \end{cases}. \tag{2}$$

    Figure 3. Principle of stereo vision 3D reconstruction. (a) Geometric relationships in stereo visual 3D reconstruction. P is a point in the world coordinate system, and Pl and Pr represent the pixel coordinates of the point P projected to the left and right views, respectively. (b) Image of a calibration board taken by the monocular stereo vision system in gray-scale mode.
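
    The plane-model calibration itself can follow the standard chessboard workflow. As a hedged sketch (not necessarily the authors’ exact procedure), OpenCV’s calibration routines can estimate each virtual camera’s intrinsics from the gray-scale images and then solve for the left-to-right transform Rh|Th; the board geometry, file paths, and midline split below are assumptions for illustration.

```python
import glob
import cv2
import numpy as np

# Hypothetical chessboard geometry: 9 x 6 inner corners, 10 mm squares.
pattern, square = (9, 6), 10.0
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, left_pts, right_pts = [], [], []
for path in glob.glob("calib/*.png"):  # gray-scale frames (assumed layout)
    frame = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    h, w = frame.shape
    left, right = frame[:, :w // 2], frame[:, w // 2:]  # two virtual views
    okl, cl = cv2.findChessboardCorners(left, pattern)
    okr, cr = cv2.findChessboardCorners(right, pattern)
    if okl and okr:
        obj_pts.append(objp)
        left_pts.append(cl)
        right_pts.append(cr)

size = (w // 2, h)
# Calibrate each virtual camera, then solve for the left-to-right Rh|Th.
_, Kl, dl, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, Kr, dr, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
_, Kl, dl, Kr, dr, Rh, Th, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, Kl, dl, Kr, dr, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
```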

    Solving Eq. (2) yields the world coordinates of a measured point from its pixel coordinates in the left and right views of the monocular stereo vision system.
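
    In practice, Eq. (2) is an overdetermined linear system in (Xw, Yw, Zw) and can be solved by least squares. Below is a minimal sketch using the standard direct linear transform (homogeneous SVD) formulation, which is one common solver rather than necessarily the authors’ own; the function name and argument layout are assumptions.

```python
import numpy as np

def triangulate(pl, pr, Kl, Kr, Rh, Th):
    """Recover (Xw, Yw, Zw) from matched pixels pl, pr in the left/right
    virtual views. Kl, Kr are the 3x3 intrinsic matrices (including skew fs);
    Rh|Th maps the left camera frame to the right, as in Eq. (2)."""
    Pl = Kl @ np.hstack([np.eye(3), np.zeros((3, 1))])  # world == left frame
    Pr = Kr @ np.hstack([Rh, np.reshape(Th, (3, 1))])
    A = np.vstack([
        pl[0] * Pl[2] - Pl[0],
        pl[1] * Pl[2] - Pl[1],
        pr[0] * Pr[2] - Pr[0],
        pr[1] * Pr[2] - Pr[1],
    ])
    _, _, Vt = np.linalg.svd(A)          # least-squares null vector of A
    X = Vt[-1]
    return X[:3] / X[3]                  # dehomogenize
```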

    Since we can obtain the 3D coordinates of the measured object at each moment, we can obtain the displacement vector by differencing the coordinates at adjacent moments. Dividing the displacement vector by the time interval gives the velocity vector of the measured object at that moment, and the modulus of the velocity vector is the speed. The proposed measurement system is, in principle, a conventional stereo vision method with the camera replaced by an event camera, so the measurement accuracy of in-plane and out-of-plane displacements can be estimated with the methods used for conventional stereo vision.
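
    A minimal sketch of this finite-difference velocity estimate (the function name and array layout are assumptions):

```python
import numpy as np

def velocity_from_trajectory(points, times):
    """Finite-difference velocity: difference the reconstructed 3D positions
    at adjacent time stamps and divide by the time step; the Euclidean norm
    of each velocity vector is the instantaneous speed."""
    points = np.asarray(points, dtype=np.float64)   # shape (N, 3)
    times = np.asarray(times, dtype=np.float64)     # shape (N,)
    vel = np.diff(points, axis=0) / np.diff(times)[:, None]
    speed = np.linalg.norm(vel, axis=1)
    return vel, speed
```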

    3. Experimental Results

    The event camera used in this Letter costs about 16,000 renminbi (RMB) and has a temporal resolution of $10^{-5}$ s. Conventional high-speed cameras with the same temporal resolution usually cost more than 500,000 RMB. We utilized the 3D measurement capability of the proposed method to measure the 3D trajectory and velocity of a small steel ball hitting a wall and rebounding (Fig. 4). Since there is no guarantee that the object’s trajectory is perpendicular to the camera’s optical axis, a single-view measurement cannot handle this situation. The deceleration of the ball due to the impact and its acceleration after the rebound can be observed in Fig. 4(b).

    Figure 4. Trajectory and velocity in 3D of a small steel ball hitting the wall and rebounding. (a) Experimental setup and size of the steel ball (8 mm; the steel ball was painted white because the background plate is black). (b) The 3D trajectory and velocity.

    Vibration measurement of rotating components has long been of great interest in the engineering field[24]. In recent years, thanks to the development of the laser Doppler vibrometer (LDV), researchers can use an optical de-rotator combined with a scanning LDV to obtain the surface vibrations of rotating components[10], but both optical de-rotators and scanning LDVs are very expensive. For in-plane deformation measurements, some researchers have used an optical de-rotator combined with a camera to measure the in-plane deformation of rotating components[11], but this method cannot measure the out-of-plane deformation, which is the most important component of rotating-component deformation.

    We applied the proposed method to the vibration measurement of a rotating disc. The method measures the 3D trajectory of a marker on the disc, and the vibration information is obtained by analyzing this 3D trajectory. The experimental setup is illustrated in Fig. 5. A brushless motor for an unmanned aerial vehicle was used to drive an acrylic disc (thickness: 2 mm, diameter: 20 cm). The rotating disc is black with a marker dot painted in white, about 38 mm from the center of rotation. When the disc rotates, only the movement of the marker generates events, so the data redundancy is expected to be very low. Figure 6 shows the event data due to rotation after filtering, from which it can be observed that the speed of the disc is about 75 r/s (a sketch of this rate estimate follows Fig. 6).

    Figure 5. Experimental setup for vibration measurement of rotating discs. (a) Experimental setup. (b) The marker dot.

    Figure 6. Events obtained in rotating disc vibration measurement experiments. (a) Events acquired by the left virtual camera. (b) Events acquired by the right virtual camera.
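
    One simple way to read the rotation rate off the filtered event stream is to track the marker’s angular position about the rotation center and fit angle against time. This is a hedged sketch; the event tuple layout (t, x, y, p) and a known center (cx, cy) are assumptions, and the unwrapping requires reasonably dense event sampling.

```python
import numpy as np

def rotation_rate(events, cx, cy):
    """Estimate disc speed (rev/s) by fitting the marker's unwrapped angular
    position about the rotation center (cx, cy) against time."""
    ev = sorted(events)                               # sort by time stamp
    t = np.array([e[0] for e in ev], dtype=np.float64)
    x = np.array([e[1] for e in ev], dtype=np.float64)
    y = np.array([e[2] for e in ev], dtype=np.float64)
    ang = np.unwrap(np.arctan2(y - cy, x - cx))       # continuous angle (rad)
    omega = np.polyfit(t, ang, 1)[0]                  # fitted slope: rad/s
    return omega / (2.0 * np.pi)                      # revolutions per second
```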

    Fitting the event data yields an expression for the pixel coordinates of the marker as a function of time. Combined with the stereo calibration results, the 3D coordinates of the marker point are obtained by 3D reconstruction. From the 3D coordinates we obtain the trajectory and velocity of the marker during rotation [Fig. 7(a)]. Extracting the out-of-plane component of the 3D trajectory [Fig. 7(b)] and performing a Fourier analysis on it yields the spectral distribution [Fig. 7(c)], which shows that the vibration of the rotating disc consists mainly of the rotational frequency and its harmonics (a sketch of this spectral analysis follows Fig. 7).

    Figure 7. Results of the rotating disc vibration measurement experiments. (a) 3D trajectory and velocity of the marker point during rotation. (b) Out-of-plane displacement of the marker point. (c) Spectral analysis of the out-of-plane displacement of the marker point.
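
    For the spectral step, here is a minimal sketch of the Fourier analysis, assuming the out-of-plane samples have been resampled to a uniform rate fs before the FFT; the function name is illustrative.

```python
import numpy as np

def vibration_spectrum(z, fs):
    """One-sided amplitude spectrum of out-of-plane displacement samples z,
    assumed uniformly sampled at fs (Hz); the mean is removed so the
    rotational frequency and its harmonics stand out."""
    z = np.asarray(z, dtype=np.float64)
    Z = np.fft.rfft(z - z.mean())
    freqs = np.fft.rfftfreq(z.size, d=1.0 / fs)
    amp = 2.0 * np.abs(Z) / z.size
    return freqs, amp
```

    Peaks at the rotational frequency (about 75 Hz here) and its integer multiples would correspond to the spectrum shown in Fig. 7(c).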

    4. Conclusion

    In summary, we proposed a single-event-camera-based 3D trajectory measurement method for high-speed moving targets. We used the proposed method to measure the 3D trajectory and velocity of a particle in flight, and the results show that the method is well suited to this task. In addition, we applied the proposed method to the measurement of rotating objects, and the results demonstrate that it can monitor the operational state of high-speed rotating objects at a very low hardware cost. The method proposed in this Letter is intended as an alternative to expensive high-speed cameras or scanning LDVs in some application scenarios. As demand for event cameras increases in fields such as autonomous driving and robotics, their performance in terms of resolution, signal-to-noise ratio, and bandwidth can be expected to advance rapidly. It is worthwhile to investigate new event-camera-based measurement methods that overcome the shortcomings of existing methods. The application of event cameras in the field of experimental mechanics will expand greatly as their performance improves.

    References

    [1] Z. Su, J. Pan, S. Zhang, S. Wu, Q. Yu, D. Zhang. Characterizing dynamic deformation of marine propeller blades with stroboscopic stereo digital image correlation. Mech. Syst. Signal Proc., 162, 108072(2022).

    [2] J. M. Sebastián, A. Traslosheros, L. Ángel, F. Roberti, R. Carelli. Parallel robot high speed object tracking. International Conference on Image Analysis and Recognition, 295(2007).

    [3] H. Kim, Y. Yamakawa, T. Senoo, M. Ishikawa. Visual encoder: robust and precise measurement method of rotation angle via high-speed RGB vision. Opt. Express, 24, 13375(2016).

    [4] S. Wang, Y. Xu, Y. Zheng, M. Zhu, H. Yao, Z. Xiao. Tracking a golf ball with high-speed stereo vision system. IEEE Trans. Instrum. Meas., 68, 2742(2018).

    [5] Z. Liu, J. Yang. A novel video object tracking approach using bidirectional projection. Chin. Opt. Lett., 2, 390(2004).

    [6] M. Ye, J. Liang, L. Li, B. Qian, M. Ren, M. Zhang, W. Lu, Y. Zong. Full-field motion and deformation measurement of high speed rotation based on temporal phase-locking and 3D-DIC. Opt. Lasers Eng., 146, 106697(2021).

    [7] Z. Sheng, B. Chen, W. Hu, K. Yan, H. Miao, Q. Zhang, Q. Yu, Y. Fu. LDV-induced stroboscopic digital image correlation for high spatial resolution vibration measurement. Opt. Express, 29, 28134(2021).

    [8] J. Li, X. Liu, F. Liu, D. Xu, Q. Gu, I. Ishii. A hardware-oriented algorithm for ultra-high-speed object detection. IEEE Sens. J., 19, 3818(2019).

    [9] B. Altmann, C. Pape, E. Reithmeier. Temperature measurements on fast-rotating objects using a thermographic camera with an optomechanical image derotator. Proc. SPIE, 10404, 104040P(2017).

    [10] B. Altmann, T. Betker, C. Pape, E. Reithmeier. Alignment strategy for an optomechanical image derotator using a laser Doppler vibrometer. Appl. Opt., 58, 6555(2019).

    [11] Y. Yin, B. Altmann, C. Pape, E. Reithmeier. Machine-vision-guided rotation axis alignment for an optomechanical derotator. Opt. Lasers Eng., 121, 456(2019).

    [12] T. Jin, H. Jia, W. Hou, R. Yamamoto, N. Nagai, Y. Fujii, K. Maru, N. Ohta, K. Shimada. Evaluating 3D position and velocity of subject in parabolic flight experiment by use of the binocular stereo vision measurement. Chin. Opt. Lett., 8, 601(2010).

    [13] C. Wang, S. Ma, G. Liu, H. Zhu, Q. Ma. Correction of start-up time difference-induced measurement errors of a high-speed binocular stereovision system. Opt. Lasers Eng., 126, 105861(2020).

    [14] L. Yu, B. Pan. Single-camera high-speed stereo-digital image correlation for full-field vibration measurement. Mech. Syst. Signal Proc., 94, 374(2017).

    [15] L. Yu, B. Pan. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera. Opt. Lasers Eng., 95, 17(2017).

    [16] M. Mahowald. VLSI analogs of neuronal visual processing: a synthesis of form and function. Ph.D. Thesis, California Institute of Technology(1992).

    [17] G. Gallego, T. Delbruck, G. Orchard, C. Bartolozzi, B. Taba, A. Censi, S. Leutenegger, A. Davison, J. Conradt, K. Daniilidis, D. Scaramuzza. Event-based vision: a survey. IEEE Trans. Pattern Anal. Mach. Intell., 44, 154(2022).

    [18] D. Saner, O. Wang, S. Heinzle, Y. Pritch, A. Smolic, A. Sorkine-Hornung, M. H. Gross. High-speed object tracking using an asynchronous temporal contrast sensor. Vision, Modeling, and Visualization, 1(2014).

    [19] Y. Zhou, G. Gallego, H. Rebecq, L. Kneip, H. Li, D. Scaramuzza. Semi-dense 3D reconstruction with a stereo event camera. Proceedings of the European Conference on Computer Vision, 242(2018).

    [20] X. Li, W. Li, X. Ma, X. Yin, X. Chen, J. Zhao. Spatial light path analysis and calibration of four-mirror-based monocular stereo vision. Opt. Express, 29, 31249(2021).

    [21] Z. Gao, Y. Gao, Y. Su, Y. Liu, Z. Fang, Y. Wang, Q. Zhang. Stereo camera calibration for large field of view digital image correlation using zoom lens. Measurement, 185, 109999(2021).

    [22] Z. Gao, F. Li, Y. Liu, T. Cheng, Y. Su, Z. Fang, M. Yang, Y. Li, J. Yu, Q. Zhang. Tunnel contour detection during construction based on digital image correlation. Opt. Lasers Eng., 126, 105879(2020).

    [23] Y. Shu, Z. Tan. 3D reconstruction based on spatial vanishing information. Chin. Opt. Lett., 3, 146(2005).

    [24] O. Matsushita, M. Tanaka, H. Kanki, M. Kobayashi, P. Keogh. Vibrations of Rotating Machinery(2017).
