• Chinese Optics Letters
  • Vol. 22, Issue 10, 101101 (2024)
Pengcheng Ji1, Qingfan Wu1, Shengfu Cao1, Huijuan Zhang1, Zhaohua Yang2,* and Yuanjin Yu1,3,4,**
Author Affiliations
  • 1School of Automation, Beijing Institute of Technology, Beijing 100081, China
  • 2School of Instrumentation Science and Optoelectronics Engineering, Beihang University, Beijing 100191, China
  • 3MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology, Beijing 100081, China
  • 4Yangtze Delta Region Academy of Beijing Institute of Technology, Jiaxing 314019, China
    DOI: 10.3788/COL202422.101101
    Pengcheng Ji, Qingfan Wu, Shengfu Cao, Huijuan Zhang, Zhaohua Yang, Yuanjin Yu, "Single-pixel imaging of a moving object with multi-motion," Chin. Opt. Lett. 22, 101101 (2024)
    Fig. 1. Imaging process. (a) Normalized difference patterns for the first-order moment and central moment, binarized by Floyd–Steinberg dithering. (b) Sequence of modulation patterns. (c) Rotating target on a circular trajectory and the detected light-intensity signals. (d) Reconstructed pattern sequences after shifting in the reverse direction.
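The binarization step named in the Fig. 1(a) caption is Floyd–Steinberg dithering: each grayscale pixel is thresholded, and the resulting quantization error is diffused onto the not-yet-visited neighbors so the binary pattern preserves local mean intensity. A minimal sketch (the function name and the 0.5 threshold are illustrative assumptions, not details from the paper):

```python
import numpy as np

def floyd_steinberg_binarize(img):
    """Binarize a grayscale pattern in [0, 1] by Floyd-Steinberg
    error diffusion, spreading each pixel's quantization error
    onto its right and lower neighbors (7/16, 3/16, 5/16, 1/16)."""
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 1.0 if old >= 0.5 else 0.0  # threshold
            out[y, x] = new
            err = old - new                    # quantization error
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out.astype(np.uint8)
```

On a uniform mid-gray input this produces a checkerboard-like binary pattern whose mean stays close to the original gray level, which is why dithered patterns can stand in for grayscale modulation on a binary DMD.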
    Fig. 2. Trajectory and rotation angle of the target. (a) Trajectories in the x- and y-directions. (b) Rotation angle of the target.
    Fig. 3. Target simulation results in the simple scene. (a) The original image. (b) The target image reconstructed by our method. (c) The target image reconstructed by the GM method. (d) The actual position and the calculation results of the two methods. (e) The comparison of the angular errors of the two methods. (f) The comparison of the axial length errors of the two methods.
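Both methods in Figs. 3 and 4 are scored on position and orientation estimates. A hedged sketch of how a moment-based pose estimate works: the first-order moments give the centroid, and the second-order central moments give the principal-axis angle. The function name and the use of plain geometric moments are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def moment_pose(img):
    """Estimate centroid (cx, cy) and orientation theta (rad) of a
    bright target from image moments: first-order moments locate the
    centroid; second-order central moments give the principal axis."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    m00 = img.sum()                               # zeroth-order moment
    cx = (xs * img).sum() / m00                   # first-order moments
    cy = (ys * img).sum() / m00
    mu20 = ((xs - cx) ** 2 * img).sum() / m00     # central moments
    mu02 = ((ys - cy) ** 2 * img).sum() / m00
    mu11 = ((xs - cx) * (ys - cy) * img).sum() / m00
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return cx, cy, theta
```

Tracking the centroid over frames yields the x/y trajectories of Fig. 2(a), and the principal-axis angle tracks the rotation of Fig. 2(b).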
    Fig. 4. Target simulation results in the complex scene. (a) The original image. (b) The target image reconstructed by our method. (c) The target image reconstructed by the GM method. (d) The actual position and the calculation results of the two methods. (e) The comparison of the angular errors of the two methods. (f) The comparison of the axial length errors of the two methods.
    Fig. 5. Diagram of the experimental setup. The LED is the light source, and the target moves and rotates through three-axis motorized stages. The collecting lens projects the image of the target onto the DMD for modulation, and the modulated light intensity is collected on the PMT by the converging lens. The PMT converts the optical signal into an electrical signal, which is captured by the acquisition card and sent to the computer.
    Fig. 6. Experimental results. (a) Full sampling reconstruction of images. (b) The target image reconstructed by our method. (c) The target image reconstructed by the GM method. (d) The actual position and the calculation results of the two methods. (e) The comparison of the angular errors of the two methods. (f) The comparison of the axial length errors of the two methods.
    Scene           Method        MSE
                                  Δx      Δy      Δθ       Δr
    Simple scene    Our method    0.35    0.34    7.78     0.12
                    GM method     3.64    3.73    206.59   3.55
    Complex scene   Our method    0.68    0.71    1.69     1.03
                    GM method     7.71    6.04    315.49   6.04
    Table 1. Errors of the Motion Parameters in the Two Methods
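The MSE entries above score each estimated motion parameter against ground truth. Under the usual definition (a sketch; the paper's exact normalization is an assumption), for a parameter sequence this is:

```python
import numpy as np

def mse(estimated, actual):
    """Mean squared error between an estimated motion-parameter
    sequence and its ground-truth sequence."""
    e = np.asarray(estimated, dtype=float)
    a = np.asarray(actual, dtype=float)
    return float(np.mean((e - a) ** 2))
```

Each cell of Table 1 would then be one such value, e.g. `mse(theta_est, theta_true)` for the Δθ column.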