• Advanced Photonics Nexus
  • Vol. 1, Issue 1, 014001 (2022)
Kaiqiang Wang1,2, Qian Kemao3,*, Jianglei Di1,2,4,*, and Jianlin Zhao1,2,*
Author Affiliations
  • 1Northwestern Polytechnical University, School of Physical Science and Technology, Shaanxi Key Laboratory of Optical Information Technology, Xi’an, China
  • 2Ministry of Industry and Information Technology, Key Laboratory of Light Field Manipulation and Information Acquisition, Xi’an, China
  • 3Nanyang Technological University, School of Computer Science and Engineering, Singapore
  • 4Guangdong University of Technology, Guangdong Provincial Key Laboratory of Photonics Information Technology, Guangzhou, China
    DOI: 10.1117/1.APN.1.1.014001
    Citation: Kaiqiang Wang, Qian Kemao, Jianglei Di, Jianlin Zhao. Deep learning spatial phase unwrapping: a comparative review[J]. Advanced Photonics Nexus, 2022, 1(1): 014001

    Abstract

    Phase unwrapping is an indispensable step in many optical imaging and metrology techniques. The rapid development of deep learning has brought new approaches to phase unwrapping. In the past four years, various phase dataset generation methods and deep-learning-involved spatial phase unwrapping methods have emerged in quick succession. However, these methods were proposed and analyzed individually, using different strategies, neural networks, and datasets, and were applied to different scenarios. It is thus necessary to compare these deep-learning-involved methods and the traditional methods in the same context. We first divide the phase dataset generation methods into random matrix enlargement, Gaussian matrix superposition, and Zernike polynomials superposition, and then divide the deep-learning-involved phase unwrapping methods into deep-learning-performed regression, deep-learning-performed wrap count, and deep-learning-assisted denoising. For the dataset generation methods, the richness of the datasets and the generalization capabilities of the trained networks are compared in detail. In addition, the deep-learning-involved methods are analyzed and compared with the traditional methods in ideal, noisy, discontinuous, and aliasing cases. Finally, we suggest the most suitable methods for different situations and propose potential development directions for the dataset generation methods, neural network structure, generalization ability enhancement, and neural network training strategy of the deep-learning-involved spatial phase unwrapping methods.
    \psi(\mathbf{r}) = \varphi(\mathbf{r}) + 2\pi k(\mathbf{r}),

    \varphi = W[\psi] = \begin{cases} \psi, & |\psi| \le \pi \\ \psi - 2\pi, & \psi > \pi \\ \psi + 2\pi, & \psi < -\pi \end{cases},
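As a concrete illustration of the two relations above, the NumPy sketch below wraps an absolute phase ψ into (−π, π] and recovers it from the wrapped phase φ and the integer wrap count k (the function and variable names are mine, not from the paper):

```python
import numpy as np

def wrap(psi):
    """Wrap an angle (or array of angles) into (-pi, pi]."""
    return np.mod(psi - np.pi, -2 * np.pi) + np.pi

psi = np.linspace(0, 8 * np.pi, 100)      # absolute phase: a smooth ramp
phi = wrap(psi)                           # wrapped phase, phi = W[psi]
k = np.round((psi - phi) / (2 * np.pi))   # integer wrap count

# The defining relation: psi = phi + 2*pi*k
assert np.allclose(psi, phi + 2 * np.pi * k)
```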

    \psi(\mathbf{r}_1) = \psi(\mathbf{r}_0) + \int_{L} \nabla\psi \cdot \mathrm{d}\mathbf{r} = \psi(\mathbf{r}_0) + \int_{L} W[\nabla\varphi] \cdot \mathrm{d}\mathbf{r},
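In one dimension, the path integral above reduces to a cumulative sum of wrapped phase differences (Itoh's method). A minimal sketch, valid only when the true phase changes by less than π between neighboring samples:

```python
import numpy as np

def wrap(a):
    """Wrap values into (-pi, pi]."""
    return np.mod(a - np.pi, -2 * np.pi) + np.pi

def itoh_unwrap_1d(phi):
    """Unwrap a 1D wrapped phase by integrating its wrapped differences.

    Valid only under the Itoh condition: the true phase must change by
    less than pi between neighboring samples (no noise, no aliasing).
    """
    dpsi = wrap(np.diff(phi))   # W[grad phi] estimates the true gradient
    return phi[0] + np.concatenate(([0.0], np.cumsum(dpsi)))

# A smooth quadratic phase is recovered exactly
x = np.linspace(0, 1, 500)
psi_true = 30 * x**2
assert np.allclose(itoh_unwrap_1d(wrap(psi_true)), psi_true)
```

NumPy's `np.unwrap` implements the same idea, and both fail in the same way once noise or undersampling violates the Itoh condition.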

    \psi_{O} = \underset{\psi_{t}}{\arg\min}\left[\int f\left(\nabla\psi_{t} - W[\nabla\varphi]\right)\,\mathrm{d}A\right],
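A classical instance of this minimum-norm formulation takes f as the squared L2 norm, giving the unweighted least-squares method, which can be solved with a DCT-based Poisson solver in the manner of Ghiglia and Romero. The sketch below is my own illustrative implementation (not the paper's code), assuming SciPy is available:

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(a):
    """Wrap values into (-pi, pi]."""
    return np.mod(a - np.pi, -2 * np.pi) + np.pi

def lsq_unwrap(phi):
    """Unweighted least-squares unwrapping via a DCT Poisson solver.

    Minimizes the squared L2 distance between the gradient of the trial
    solution and the wrapped gradients W[grad phi]; the result is defined
    only up to an additive constant.
    """
    M, N = phi.shape
    dy = wrap(np.diff(phi, axis=0))   # wrapped vertical differences
    dx = wrap(np.diff(phi, axis=1))   # wrapped horizontal differences
    # Divergence of the wrapped-gradient field (right-hand side rho)
    rho = np.zeros_like(phi)
    rho[:-1, :] += dy
    rho[1:, :] -= dy
    rho[:, :-1] += dx
    rho[:, 1:] -= dx
    # The DCT-II diagonalizes the Neumann-boundary discrete Laplacian
    d = dctn(rho, norm="ortho")
    iy = np.arange(M)[:, None]
    ix = np.arange(N)[None, :]
    denom = 2 * (np.cos(np.pi * iy / M) + np.cos(np.pi * ix / N) - 2)
    denom[0, 0] = 1.0                 # zero mode: the constant is arbitrary
    return idctn(d / denom, norm="ortho")

# Smooth synthetic example: recovery is exact up to an additive constant
yy, xx = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64), indexing="ij")
psi_true = 12 * np.exp(-(xx**2 + yy**2))
err = lsq_unwrap(wrap(psi_true)) - psi_true
assert np.max(np.abs(err - err.mean())) < 1e-6
```

Unlike path-following methods, this least-squares solution is generally not congruent with the wrapped phase; the congruence operation below is one way to restore that property.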

    \psi_{C} = \psi_{M} + W[\varphi - \psi_{M}],
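The congruence operation above snaps a coarse absolute-phase estimate ψM (for example, a network's regression output) onto the grid of values congruent with the wrapped phase φ. A NumPy sketch with names of my own choosing:

```python
import numpy as np

def wrap(a):
    """Wrap values into (-pi, pi]."""
    return np.mod(a - np.pi, -2 * np.pi) + np.pi

def congruence(psi_m, phi):
    """Snap a coarse absolute-phase estimate psi_m onto the set of values
    congruent with the wrapped phase phi: psi_c = psi_m + W[phi - psi_m]."""
    return psi_m + wrap(phi - psi_m)

# If the coarse estimate is within pi of the truth, the result is exact
psi_true = np.linspace(0, 20, 200)
phi = wrap(psi_true)
psi_m = psi_true + 0.4 * np.sin(7 * psi_true)   # coarse estimate, error < pi
assert np.allclose(congruence(psi_m, phi), psi_true)
```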

    \text{Composite loss} = \frac{1}{M}\left\{\sum_{m=1}^{M} -k_{G}(\mathbf{r}_{m})\log[k(\mathbf{r}_{m})] + \sum_{m=1}^{M} \left|\psi(\mathbf{r}_{m}) - \psi_{G}(\mathbf{r}_{m})\right|\right\},
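One plausible reading of this composite loss is a per-pixel cross-entropy on the predicted wrap-count class probabilities plus an L1 term on the rebuilt phase, averaged over the M pixels. A sketch under that assumption (array names and the equal weighting of the two terms are mine):

```python
import numpy as np

def composite_loss(k_prob, k_onehot, psi, psi_gt, eps=1e-12):
    """Cross-entropy on per-pixel wrap-count class probabilities plus an
    L1 term on the phase, averaged over all M pixels."""
    ce = -np.sum(k_onehot * np.log(k_prob + eps), axis=-1)  # per-pixel CE
    l1 = np.abs(psi - psi_gt)                               # per-pixel L1
    return np.mean(ce + l1)

# A perfect prediction drives the loss to (numerically) zero
k_onehot = np.eye(3)[[0, 2, 1, 1]]          # 4 pixels, 3 wrap-count classes
psi_gt = np.array([0.0, 4 * np.pi, 2 * np.pi, 2 * np.pi])
assert abs(composite_loss(k_onehot, k_onehot, psi_gt, psi_gt)) < 1e-6
```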

    R(\mathbf{r}) = \cos[\psi(\mathbf{r})],

    I(\mathbf{r}) = \sin[\psi(\mathbf{r})],

    \varphi(\mathbf{r}) = \arctan[I(\mathbf{r})/R(\mathbf{r})],

    k(\mathbf{r}) = \mathrm{round}\left[\frac{\psi(\mathbf{r}) - \varphi(\mathbf{r})}{2\pi}\right].
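The four relations above amount to wrapping by way of the cosine/sine pair and then reading off the wrap count. In practice `np.arctan2(I, R)` is used rather than a bare arctan[I/R], since the latter loses the quadrant. A sketch:

```python
import numpy as np

psi = np.linspace(-15.0, 15.0, 301)        # absolute phase
R = np.cos(psi)                            # "real part"
I = np.sin(psi)                            # "imaginary part"
phi = np.arctan2(I, R)                     # wrapped phase, quadrant-correct
k = np.round((psi - phi) / (2 * np.pi))    # integer wrap count

# Rebuilding the absolute phase from phi and k is exact
assert np.allclose(phi + 2 * np.pi * k, psi)
```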

    \text{Pretrain loss} = \left\|\nabla[f(\varphi)] - \nabla[\psi]\right\|,

    \text{Retrain loss} = \left\|W[\nabla\varphi] - \nabla[f(\varphi)]\right\|.
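A hedged NumPy reading of these two losses, for a network f that maps a wrapped phase φ to an absolute-phase estimate: the pretraining loss compares the gradients of f(φ) against those of the ground truth ψ, while the retraining loss needs no ground truth, matching against the wrapped gradients W[∇φ] instead. Function names and the choice of an L1 norm here are my assumptions:

```python
import numpy as np

def wrap(a):
    """Wrap values into (-pi, pi]."""
    return np.mod(a - np.pi, -2 * np.pi) + np.pi

def grads(a):
    """Forward differences along both axes of a 2D array."""
    return np.diff(a, axis=0), np.diff(a, axis=1)

def pretrain_loss(f_phi, psi_gt):
    """Compare gradients of the network output f(phi) with gradients of
    the ground-truth absolute phase psi (supervised pretraining)."""
    gy, gx = grads(f_phi)
    ty, tx = grads(psi_gt)
    return np.mean(np.abs(gy - ty)) + np.mean(np.abs(gx - tx))

def retrain_loss(f_phi, phi):
    """Ground-truth-free retraining loss: gradients of f(phi) should match
    the wrapped gradients W[grad phi] of the network input itself."""
    gy, gx = grads(f_phi)
    wy, wx = (wrap(d) for d in grads(phi))
    return np.mean(np.abs(gy - wy)) + np.mean(np.abs(gx - wx))

# A perfect network output gives (near-)zero loss on smooth data
psi = np.add.outer(np.linspace(0, 10, 64), np.linspace(0, 8, 64))
phi = wrap(psi)
assert pretrain_loss(psi, psi) == 0.0
assert retrain_loss(psi, phi) < 1e-8
```

The retraining loss works because, wherever the Itoh condition holds, the wrapped gradients of the wrapped phase equal the gradients of the absolute phase, so the network can be adapted to new data without labels.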
