• Advanced Photonics
  • Vol. 6, Issue 4, 046004 (2024)
Generative deep-learning-embedded asynchronous structured light for three-dimensional imaging
Lei Lu1, Chenhao Bu1, Zhilong Su2,3,*, Banglei Guan4, Qifeng Yu4, Wei Pan5, and Qinghui Zhang1
Author Affiliations
  • 1Henan University of Technology, College of Information Science and Engineering, Zhengzhou, China
  • 2Shanghai University, Shanghai Institute of Applied Mathematics and Mechanics, School of Mechanics and Engineering Science, Shanghai Key Laboratory of Mechanics in Energy Engineering, Shanghai, China
  • 3Shaoxing Research Institute of Shanghai University, Shaoxing, China
  • 4National University of Defense Technology, College of Aerospace Science and Engineering, Changsha, China
  • 5OPT Machine Vision Tech Co., Ltd., Department of Research and Development, Dongguan, China
    DOI: 10.1117/1.AP.6.4.046004
    Lei Lu, Chenhao Bu, Zhilong Su, Banglei Guan, Qifeng Yu, Wei Pan, Qinghui Zhang, "Generative deep-learning-embedded asynchronous structured light for three-dimensional imaging," Adv. Photon. 6, 046004 (2024)

    Abstract

    Three-dimensional (3D) imaging with structured light is crucial in diverse scenarios, ranging from intelligent manufacturing and medicine to entertainment. However, current structured light methods rely on projector–camera synchronization, limiting the use of affordable imaging devices and their consumer applications. In this work, we introduce an asynchronous structured light imaging approach based on generative deep neural networks that relaxes the synchronization constraint and addresses the resulting challenge of fringe-pattern aliasing without relying on any a priori constraint of the projection system. To this end, we propose a generative deep neural network with a U-Net-like encoder–decoder architecture that learns the underlying fringe features directly by exploiting the intrinsic priors in fringe-pattern aliasing. We train the network within an adversarial learning framework and supervise training via a statistics-informed loss function. We demonstrate its performance through evaluations in the intensity, phase, and 3D reconstruction domains, showing that the trained network separates aliased fringe patterns and produces results comparable to those of synchronous capture: the absolute error is no greater than 8 μm, and the standard deviation does not exceed 3 μm. Evaluation results on multiple objects and pattern types show that the approach generalizes to arbitrary asynchronous structured light scenes.
    I_p = \int_0^{t_e} B_p B_r \sum_{\bar{p}} R(p, \bar{p}) P_{\bar{p}}(t) \, dt + \int_0^{t_e} C_p(t) \, dt + \int_0^{t_e} B_r C_p(t) \, dt .
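    For intuition, the continuous model can be discretized per camera pixel: the exposure integral becomes a sum over small time steps of the projected, ambient, and offset contributions. A minimal NumPy sketch of that discretization follows; it is our illustration under assumed names and shapes, not the authors' implementation.

        import numpy as np

        # A minimal discretization of the image-formation integral above; our
        # illustration, not the authors' code. R holds the light-transport
        # weights R(p, p_bar) for one camera pixel, P(t) returns the projector
        # signals P_pbar(t), C(t) is the offset term C_p(t), and B_p, B_r are
        # the gain factors. All shapes and names are assumptions.
        def camera_intensity(R, P, C, B_p, B_r, t_e, n_steps=1000):
            dt = t_e / n_steps
            ts = np.linspace(0.0, t_e, n_steps, endpoint=False)
            projected = sum(B_p * B_r * np.dot(R, P(t)) * dt for t in ts)  # first term
            ambient = sum(C(t) * dt for t in ts)                           # second term
            offset = sum(B_r * C(t) * dt for t in ts)                      # third term
            return projected + ambient + offset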


    I_n := \int_{t_{n-1}}^{t_n} P_n(t) \, dt ,


    I_n = \int_{t_{n-1}}^{t_n} \sum_{m=1}^{M} r_m(t) P_m(t) \, dt ,


    r_m(t) = \begin{cases} 1 & \text{if } \bar{t}_{m-1} \le t \le \bar{t}_m \\ 0 & \text{otherwise,} \end{cases}


    I_n = \int_{t_{n-1}}^{t_n} \sum_{m=1}^{M} r_m(t) P_m(t) \, dt = \int_{t_{n-1}}^{t_n^{\mathrm{sw}}} P_m(t) \, dt + \int_{t_n^{\mathrm{sw}}}^{t_n} P_{m+1}(t) \, dt ,
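    When each pattern is effectively constant over the interval in which it is displayed, the split integral reduces to a linear blend of the two successive patterns, weighted by the fraction of the exposure each one occupies. The NumPy sketch below illustrates this aliasing model; the blend factor alpha and the fringe parameters are illustrative assumptions, not values from the paper.

        import numpy as np

        # Aliased capture as a linear blend: with piecewise-constant patterns,
        # the split integral above reduces to a weighted sum, where
        # alpha = (t_n_sw - t_{n-1}) / (t_n - t_{n-1}) is the fraction of the
        # exposure spent on P_m before the projector switches to P_{m+1}.
        def asynchronous_capture(P_m, P_m1, alpha):
            return alpha * P_m + (1.0 - alpha) * P_m1

        # Usage with two phase-shifted sinusoidal fringe patterns (parameters
        # are illustrative only).
        x = np.arange(640)
        P_m = 128 + 100 * np.cos(2 * np.pi * x / 32)
        P_m1 = 128 + 100 * np.cos(2 * np.pi * x / 32 + 2 * np.pi / 3)
        I_n = asynchronous_capture(P_m, P_m1, alpha=0.4)  # aliased fringe image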


    \mathcal{L}_{\mathrm{CGAN}}(G, D) = \mathbb{E}_{P \sim p_{\mathrm{data}}(P)} \big[ \log D(\{P_m, P_{m+1}\} \mid \{\hat{P}_m, \hat{P}_{m+1}\}) \big] + \mathbb{E}_{z \sim p_z(z)} \big[ \log \big( 1 - D(G(z \mid I_n)) \big) \big] .
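    A rough PyTorch rendering of this conditional adversarial objective is sketched below. G and D stand in for the U-Net-like generator and its discriminator, whose exact interfaces this excerpt does not specify; the two-argument discriminator and the logit-based binary cross-entropy formulation are our assumptions.

        import torch
        import torch.nn.functional as F

        # Discriminator term: real pattern pairs (conditioned on the aliased
        # image I_n) should score 1, generated pairs 0. D taking the condition
        # and the pair as two inputs is an assumption of this sketch.
        def d_loss(D, real_pair, fake_pair, I_n):
            real_logits = D(I_n, real_pair)
            fake_logits = D(I_n, fake_pair.detach())
            ones = torch.ones_like(real_logits)
            zeros = torch.zeros_like(fake_logits)
            return (F.binary_cross_entropy_with_logits(real_logits, ones)
                    + F.binary_cross_entropy_with_logits(fake_logits, zeros))

        # Generator adversarial term: push D toward labeling G(z | I_n) as real.
        def g_adv_loss(D, fake_pair, I_n):
            fake_logits = D(I_n, fake_pair)
            return F.binary_cross_entropy_with_logits(
                fake_logits, torch.ones_like(fake_logits))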


    \mathcal{L}_{\mathrm{MSE}} = \frac{1}{n} \sum_{i=1}^{n} (P_i - \hat{P}_i)^2 ,


    \mathcal{L}_{\mathrm{SSE}} = \frac{(2 \mu_{\hat{P}} \mu_P + c)(2 \sigma_{\hat{P}P} + c)}{(\mu_{\hat{P}}^2 + \mu_P^2 + c)(\sigma_{\hat{P}}^2 + \sigma_P^2 + c)} ,


    \mathcal{L}(G, D) = \lambda_1 \mathcal{L}_{\mathrm{CGAN}}(G, D) + \lambda_2 \mathcal{L}_{\mathrm{MSE}}(G) + \lambda_3 \mathcal{L}_{\mathrm{SSE}}(G) ,
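    The three loss terms and their weighted combination can be sketched as below; the stabilizing constant c and the lambda weights are placeholders, not the paper's settings. Note that L_SSE is a similarity index, so a practical implementation minimizing the total loss would typically use 1 - L_SSE in place of the raw value.

        import torch

        # L_MSE: mean squared error between generated and reference patterns.
        def mse_loss(P_hat, P):
            return torch.mean((P - P_hat) ** 2)

        # L_SSE: the SSIM-style statistical term above, built from the means,
        # variances, and covariance of P_hat and P; the single stabilizing
        # constant c is a placeholder value.
        def sse_loss(P_hat, P, c=1e-4):
            mu_h, mu_p = P_hat.mean(), P.mean()
            var_h, var_p = P_hat.var(), P.var()
            cov = ((P_hat - mu_h) * (P - mu_p)).mean()
            return ((2 * mu_h * mu_p + c) * (2 * cov + c)) / (
                (mu_h ** 2 + mu_p ** 2 + c) * (var_h + var_p + c))

        # Weighted combination of the three terms; the lambda defaults are
        # placeholders only.
        def total_loss(l_cgan, l_mse, l_sse, lambdas=(1.0, 100.0, 10.0)):
            l1, l2, l3 = lambdas
            return l1 * l_cgan + l2 * l_mse + l3 * l_sse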


    \begin{cases} w_G \leftarrow w_G - \eta_G \nabla_{w_G} \big( \lambda_1 \mathcal{L}_{\mathrm{CGAN}} + \lambda_2 \mathcal{L}_{\mathrm{MSE}} + \lambda_3 \mathcal{L}_{\mathrm{SSE}} \big), \\ w_D \leftarrow w_D - \eta_D \, \partial \mathcal{L}_{\mathrm{CGAN}} / \partial w_D , \end{cases}
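    This update rule is standard alternating optimization: one gradient step on the discriminator using the adversarial term only, then one step on the generator using the full weighted objective. A minimal training-loop sketch under those assumptions, reusing the loss helpers sketched above:

        import torch

        # One epoch of the alternating updates above. G, D, the data loader,
        # and the two optimizers are supplied by the caller; the equation fixes
        # only the gradient directions and the learning rates eta_G, eta_D, so
        # the optimizer choice is left open here.
        def train_epoch(G, D, loader, opt_G, opt_D):
            for I_n, real_pair in loader:   # aliased input, ground-truth pair
                fake_pair = G(I_n)          # candidate separated patterns

                # w_D <- w_D - eta_D * dL_CGAN/dw_D
                # (fake_pair is detached inside d_loss)
                opt_D.zero_grad()
                d_loss(D, real_pair, fake_pair, I_n).backward()
                opt_D.step()

                # w_G <- w_G - eta_G * grad(l1*L_CGAN + l2*L_MSE + l3*L_SSE)
                opt_G.zero_grad()
                total_loss(g_adv_loss(D, fake_pair, I_n),
                           mse_loss(fake_pair, real_pair),
                           sse_loss(fake_pair, real_pair)).backward()
                opt_G.step()

        # Usage sketch: opt_G = torch.optim.Adam(G.parameters(), lr=eta_G),
        # and likewise for D with eta_D.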

