• Advanced Photonics
  • Vol. 6, Issue 4, 046004 (2024)
Lei Lu1, Chenhao Bu1, Zhilong Su2,3,*, Banglei Guan4, Qifeng Yu4, Wei Pan5, and Qinghui Zhang1
Author Affiliations
  • 1Henan University of Technology, College of Information Science and Engineering, Zhengzhou, China
  • 2Shanghai University, Shanghai Institute of Applied Mathematics and Mechanics, School of Mechanics and Engineering Science, Shanghai Key Laboratory of Mechanics in Energy Engineering, Shanghai, China
  • 3Shaoxing Research Institute of Shanghai University, Shaoxing, China
  • 4National University of Defense Technology, College of Aerospace Science and Engineering, Changsha, China
  • 5OPT Machine Vision Tech Co., Ltd., Department of Research and Development, Dongguan, China
DOI: 10.1117/1.AP.6.4.046004
Lei Lu, Chenhao Bu, Zhilong Su, Banglei Guan, Qifeng Yu, Wei Pan, Qinghui Zhang, "Generative deep-learning-embedded asynchronous structured light for three-dimensional imaging," Adv. Photon. 6, 046004 (2024)
    Fig. 1. Diagrams for (a) synchronous FPP and (b) asynchronous FPP systems.
    Fig. 2. Illumination and response relation between the projector and camera in (a) synchronous FPP and (b) async-FPP systems, respectively.
    Fig. 3. APSNet with U-Net architecture and latent representation for AFP separation.
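Fig. 3 names a U-Net backbone with a latent (bottleneck) representation for separating the two source patterns from one aliased fringe pattern (AFP). The exact layer configuration of APSNet is not given on this page; the sketch below is only a minimal, generic encoder-decoder with skip connections in PyTorch, and the channel widths, depth, and two-channel output (one channel per separated pattern) are illustrative assumptions.

```python
# Minimal U-Net-style separator: one-channel AFP in, two separated patterns out.
# Channel widths, depth, and output head are illustrative guesses, not the
# authors' published APSNet configuration.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 32)
        self.enc2 = conv_block(32, 64)
        self.bottleneck = conv_block(64, 128)   # latent representation
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, 2, 1)         # two separated fringe patterns

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return torch.sigmoid(self.head(d1))
```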
    Fig. 4. Schematic description of training APSNet within the conditional GAN framework.
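Fig. 4 places APSNet as the generator inside a conditional GAN. One possible training step is sketched below in a pix2pix-style form; the discriminator design, optimizers, and the loss weight (the placeholder lambda_l1) are assumptions, since this page shows only the schematic, not the paper's loss terms or hyperparameters.

```python
# One conditional-GAN training step (pix2pix-style sketch, under assumptions):
# generator G maps an AFP to two separated patterns, discriminator D judges
# (AFP, patterns) pairs. lambda_l1 is a placeholder weight, not the paper's value.
import torch
import torch.nn.functional as F

def cgan_step(G, D, opt_G, opt_D, afp, target_patterns, lambda_l1=100.0):
    # --- discriminator update: real pairs vs. generated pairs ---
    fake = G(afp).detach()
    d_real = D(torch.cat([afp, target_patterns], dim=1))
    d_fake = D(torch.cat([afp, fake], dim=1))
    loss_D = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) \
           + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # --- generator update: fool D while staying close to the source patterns ---
    fake = G(afp)
    d_fake = D(torch.cat([afp, fake], dim=1))
    loss_G = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake)) \
           + lambda_l1 * F.l1_loss(fake, target_patterns)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```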
    Fig. 5. Pipeline of 3D imaging with the trained APSNet.
    Fig. 6. Experimental setup of our async-FPP system for generating the data set.
    Fig. 7. (a) and (b) Two successively projected fringe patterns and (c) an observed AFP with 10 ms delay.
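Fig. 7 shows how a projector-camera delay makes one exposure straddle two successive projections, so the camera records an aliased fringe pattern (AFP). A simple way to emulate such an AFP for testing is to blend the two source patterns by the fraction of the exposure spent on each; this linear mixing is an idealization assumed here for illustration, not the authors' stated image-formation model.

```python
import numpy as np

def simulate_afp(pattern_a, pattern_b, delay_ms, exposure_ms):
    """Blend two successive fringe patterns as if one exposure window
    overlapped both. Idealized linear model, for illustration only."""
    overlap = np.clip(delay_ms / exposure_ms, 0.0, 1.0)  # fraction of exposure on pattern_b
    return (1.0 - overlap) * pattern_a + overlap * pattern_b
```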
    Fig. 8. (a)–(d) Convergence behavior of our APSNet model when training with different loss term contributions.
    Fig. 9. Pipeline of AFP generation and separation with APSNet: (a) and (b) the source fringe patterns recorded by synchronized FPP, (c) the corresponding AFP recorded by async-FPP system, (d) and (e) are the separated results of (c), and (f) and (g) show the absolute discrepancy between the separated and source patterns.
    Fig. 10. Inference performance of APSNet on AFPs with different aliasing levels.
    Fig. 11. Comparison of wrapped phase maps for synchronous and asynchronous cases.
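Fig. 11 compares wrapped phase maps computed from synchronous and asynchronous fringe patterns. The page does not state which phase-retrieval algorithm the paper uses; assuming a standard N-step phase-shifting scheme with fringes of the form I_n = A + B*cos(phi + 2*pi*n/N), the wrapped phase can be computed as in the sketch below.

```python
import numpy as np

def wrapped_phase(frames):
    """Standard N-step phase-shifting retrieval (assumed; the paper's exact
    scheme is not given on this page). `frames` holds N >= 3 images with
    phase shifts 2*pi*n/N, n = 0..N-1. Returns phase wrapped to (-pi, pi]."""
    frames = np.asarray(frames, dtype=float)
    n = np.arange(len(frames)).reshape(-1, 1, 1)
    delta = 2.0 * np.pi * n / len(frames)
    num = np.sum(frames * np.sin(delta), axis=0)
    den = np.sum(frames * np.cos(delta), axis=0)
    # Sign convention follows the fringe model assumed in the lead-in.
    return -np.arctan2(num, den)
```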
    Fig. 12. Reconstructed results from (a) synchronous, (b) asynchronous, and (c) APSNet-generated fringe patterns, respectively.
Fig. 13. (a)–(c) Depth curves sampled from Figs. 12(a)–12(c), respectively; (d) and (e) show the errors of (b) and (c) relative to (a).
    Fig. 14. Reconstruction results for different objects with synchronous, asynchronous, and APSNet-generated sinusoidal and binary fringe patterns, respectively.
Fig. 15. Demonstration of generalization capability on multi-object asynchronous 3D imaging: (a) measured objects, with their fringe pattern examples before (In) and after (Pm, Pm+1) separation by our APSNet; (b)–(d) reconstructed results using synchronous, asynchronous, and APSNet-generated fringe patterns, respectively.
    Fig. 16. Reconstruction errors of our method for (a) the propeller and (b) the 3D printed bolt models in Fig. 15.