Lei Lu, Chenhao Bu, Zhilong Su, Banglei Guan, Qifeng Yu, Wei Pan, Qinghui Zhang, "Generative deep-learning-embedded asynchronous structured light for three-dimensional imaging," Adv. Photon. 6, 046004 (2024)


Fig. 1. Diagrams for (a) synchronous FPP and (b) asynchronous FPP systems.
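In fringe projection profilometry (FPP), the projector displays sinusoidal fringe patterns and the camera records their deformation on the object. As a point of reference for the figures, the following is a minimal sketch of generating phase-shifted sinusoidal fringes; the function name, image size, and fringe period are illustrative choices, not values from the paper.

```python
import numpy as np

def sinusoidal_fringe(width, height, period, phase_shift=0.0,
                      a=127.5, b=127.5):
    """Generate a vertical sinusoidal fringe pattern:
    I(x, y) = a + b * cos(2*pi*x / period + phase_shift),
    with a (mean intensity) and b (modulation) chosen for an 8-bit range.
    """
    x = np.arange(width)
    row = a + b * np.cos(2 * np.pi * x / period + phase_shift)
    return np.tile(row, (height, 1))

# Four patterns with pi/2 shifts, as commonly used in phase-shifting FPP.
patterns = [sinusoidal_fringe(640, 480, 32, k * np.pi / 2) for k in range(4)]
```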

Fig. 2. Illumination and response relation between the projector and camera in (a) synchronous FPP and (b) async-FPP systems, respectively.

Fig. 3. APSNet with U-Net architecture and latent representation for AFP separation.

Fig. 4. Schematic description of training APSNet within the conditional GAN framework.

Fig. 5. Pipeline of 3D imaging with the trained APSNet.

Fig. 6. Experimental setup of our async-FPP system for generating the data set.

Fig. 7. (a) and (b) Two successively projected fringe patterns and (c) an observed AFP with 10 ms delay.
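When projector and camera are not synchronized, one camera exposure can straddle two successively projected patterns, producing an aliased fringe pattern (AFP) like the one in Fig. 7(c). A simplified way to picture this (a first-order assumption, not the paper's imaging model) is a linear temporal mixture: the sensor integrates the first pattern for part of the exposure and the second for the remainder.

```python
import numpy as np

def simulate_afp(i1, i2, delay, exposure):
    """Simulate an aliased fringe pattern as a linear temporal mixture.

    Assumption (illustrative only): during one exposure of length
    `exposure`, the sensor sees pattern i1 for (exposure - delay) and
    pattern i2 for `delay`, so the blend weight is alpha = delay/exposure.
    """
    alpha = delay / exposure
    return (1.0 - alpha) * i1 + alpha * i2
```

With a 10 ms delay and a 20 ms exposure, for example, the two patterns would contribute equally to the observed AFP.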

Fig. 8. (a)–(d) Convergence behavior of our APSNet model when training with different loss term contributions.

Fig. 9. Pipeline of AFP generation and separation with APSNet: (a) and (b) the source fringe patterns recorded by synchronized FPP, (c) the corresponding AFP recorded by async-FPP system, (d) and (e) are the separated results of (c), and (f) and (g) show the absolute discrepancy between the separated and source patterns.

Fig. 10. Inference performance of APSNet on AFPs with different aliasing levels.

Fig. 11. Comparison of wrapped phase maps for synchronous and asynchronous cases.
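The wrapped phase maps compared in Fig. 11 are typically computed by phase-shifting analysis. As a reference for readers, here is a standard four-step formula (a textbook technique; the paper may use a different number of steps): for patterns I_k = a + b·cos(φ + kπ/2), the wrapped phase is φ = atan2(I3 − I1, I0 − I2).

```python
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Wrapped phase from four fringe patterns with pi/2 phase shifts.

    I3 - I1 = 2b*sin(phi) and I0 - I2 = 2b*cos(phi), so
    phi = atan2(I3 - I1, I0 - I2), wrapped to (-pi, pi].
    """
    return np.arctan2(i3 - i1, i0 - i2)
```

The result is wrapped to (-π, π] and must be unwrapped (e.g., with temporal phase unwrapping) before conversion to depth.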

Fig. 12. Reconstructed results from (a) synchronous, (b) asynchronous, and (c) APSNet-generated fringe patterns, respectively.

Fig. 13. (a)–(c) Depth curves sampled from Figs. 12(a)–12(c), respectively; (d) and (e) correspond to the errors of (b) and (c) relative to (a).

Fig. 14. Reconstruction results for different objects with synchronous, asynchronous, and APSNet-generated sinusoidal and binary fringe patterns, respectively.

Fig. 15. Demonstration of generalization capability on multi-object asynchronous 3D imaging: (a) measured objects, with example fringe patterns before and after separation by our APSNet; (b)–(d) reconstructed results using synchronous, asynchronous, and APSNet-generated fringe patterns, respectively.

Fig. 16. Reconstruction errors of our method for (a) the propeller and (b) the 3D-printed bolt models in Fig. 15.
