Fig. 1. Phase unwrapping in OI,^{1} MRI,^{2} FPP,^{4} and InSAR.^{6}
Fig. 2. Datasets of the deep-learning-involved phase unwrapping methods, for (a) dRG, (b) dWC, and (c) dDN. “$R$” and “$I$” represent the real and imaginary parts of the CAF, respectively.
Fig. 3. Overall process of the deep-learning-involved phase unwrapping methods.
Fig. 4. Illustration of the dRG method.
Fig. 5. Illustration of the dWC method.
Fig. 6. Illustration of the dDN method.
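Per Fig. 2(c), a dDN training pair consists of the noisy and pure real/imaginary parts of the CAF. A minimal sketch of building such a pair is below; the additive Gaussian noise model and its standard deviation are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def make_dn_pair(psi, sigma=0.3, seed=0):
    """Build a dDN-style denoising pair: noisy vs. pure real/imaginary parts
    of the complex amplitude field (CAF) exp(1j * psi).

    Additive Gaussian noise with standard deviation `sigma` is an assumed,
    illustrative noise model.
    """
    rng = np.random.default_rng(seed)
    caf = np.exp(1j * psi)
    R, I = caf.real, caf.imag                    # pure GT pair {R, I}
    Rn = R + rng.normal(0.0, sigma, psi.shape)   # noisy input R_n
    In = I + rng.normal(0.0, sigma, psi.shape)   # noisy input I_n
    phi_n = np.angle(Rn + 1j * In)               # noisy wrapped phase
    return (Rn, In), (R, I), phi_n

(Rn, In), (R, I), phi_n = make_dn_pair(np.linspace(0.0, 10.0, 64).reshape(8, 8))
```

After the network denoises $({R}_{n},{I}_{n})$ back toward $(R,I)$, the wrapped phase is recomputed from the denoised pair and unwrapped by a conventional method.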
Fig. 7. An example of the RME method.
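The RME (random matrix enlargement) idea can be sketched as follows: a small random matrix is enlarged by interpolation into a smooth absolute phase, then scaled to a height $h$. The small-matrix size, the bilinear interpolation scheme, and the height value are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def rme_phase(size=256, small=8, height=25.0, seed=0):
    """Generate a smooth absolute phase by enlarging a small random matrix
    (RME-style) with separable bilinear interpolation."""
    rng = np.random.default_rng(seed)
    m = rng.random((small, small))                     # small random matrix
    xs = np.linspace(0.0, small - 1.0, size)
    # enlarge along rows, then along columns (separable bilinear interpolation)
    rows = np.array([np.interp(xs, np.arange(small), r) for r in m])
    psi = np.array([np.interp(xs, np.arange(small), c) for c in rows.T]).T
    psi = (psi - psi.min()) / (psi.max() - psi.min())  # normalize to [0, 1]
    return height * psi                                # scale range to h

psi = rme_phase()
phi = np.angle(np.exp(1j * psi))                       # wrapped phase in (-pi, pi]
```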
Fig. 8. An example of the GFS method.
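The GFS (Gaussian functions superposition) idea can be sketched as summing several randomly placed 2D Gaussians into a smooth absolute phase. The number of Gaussians, their width range, and the height value below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def gfs_phase(size=256, n_gauss=5, height=25.0, seed=0):
    """Generate a smooth absolute phase as a superposition of random 2D
    Gaussians (GFS-style), scaled so its range equals `height`."""
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[0:size, 0:size].astype(float)
    psi = np.zeros((size, size))
    for _ in range(n_gauss):
        cx, cy = rng.uniform(0, size, 2)               # random center
        sx, sy = rng.uniform(size / 8, size / 3, 2)    # random widths
        a = rng.uniform(-1.0, 1.0)                     # random signed amplitude
        psi += a * np.exp(-((x - cx) ** 2 / (2 * sx ** 2)
                            + (y - cy) ** 2 / (2 * sy ** 2)))
    psi = (psi - psi.min()) / (psi.max() - psi.min())  # normalize to [0, 1]
    return height * psi

psi = gfs_phase()
```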
Fig. 9. Entropy histograms of the absolute phases from D_RME, D_GFS, and D_ZPS.
Fig. 10. SAGD maps of different datasets. Red arrows and circles indicate low and high SAGD values, respectively.
Fig. 11. Mean error maps for each network. Red circles indicate high mean error values.
Fig. 12. (a) SAGD maps for D_RME and D_RME1; (b) mean error maps for RMENet and RME1Net. Red arrows indicate low SAGD values, red circles indicate high mean error values, and orange circles indicate the parts being compared.
Fig. 13. Partial display of results from RME1Net. “Max,” “Med,” and “Min” represent specific results with maximal, median, and minimal ${\mathrm{RMSE}}_{\mathrm{m}}$, respectively. “C” represents the congruence results.
Fig. 14. Results for the (a) dRGI and (b) dWCI in the ideal case. “Max,” “Med,” and “Min” represent specific results with maximal, median, and minimal ${\mathrm{RMSE}}_{\mathrm{m}}$, respectively. “C” represents the congruence results.
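The congruence operation “C” referred to in these captions snaps a network's estimated absolute phase onto the measured wrapped phase, so the two differ by exact integer multiples of $2\pi$. A minimal sketch, using the standard per-pixel rounding form:

```python
import numpy as np

def congruence(psi_hat, phi):
    """Make an estimated absolute phase congruent with the wrapped phase:
    the output differs from phi by exact integer multiples of 2*pi."""
    k = np.round((psi_hat - phi) / (2.0 * np.pi))  # nearest wrap count per pixel
    return phi + 2.0 * np.pi * k

# toy check: a biased estimate of a known absolute phase is snapped back
psi_true = np.linspace(0.0, 20.0, 100)
phi = np.angle(np.exp(1j * psi_true))              # wrapped phase
psi_hat = psi_true + 0.3                           # network estimate with bias
psi_c = congruence(psi_hat, phi)                   # recovers psi_true
```

As long as the estimate's error stays below $\pi$ per pixel, congruence removes it entirely, which is why the “C” columns in the tables show much lower ${\mathrm{RMSE}}_{\mathrm{m}}$.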
Fig. 15. ${\mathrm{RMSE}}_{\mathrm{m}}$ of the deep-learning-involved methods for absolute phases of different heights.
Fig. 16. Results for (a) dRGN, (b) dWCN, and (c) dDNN in the noisy case. “GT” represents the pure GT (pure absolute phase), while “GT1” represents the noisy GT (noisy absolute phase). “Max,” “Med,” and “Min” represent specific results with maximal, median, and minimal ${\mathrm{RMSE}}_{\mathrm{m}}$, respectively. “C” represents the congruence results.
Fig. 17. Results at different noise levels. Solid and dashed lines represent the deep-learning-involved and traditional methods, respectively.
Fig. 18. Results for the (a) dRGI, (b) dWCI, (c) dRGD, (d) dWCD, (e) line-scanning, (f) LS, and (g) QG methods in the discontinuous case. “Max,” “Med,” and “Min” represent specific results with maximal, median, and minimal ${\mathrm{RMSE}}_{\mathrm{m}}$, respectively. “C” represents the congruence results. The last column of each result is the discontinuity map, where 1 (white) marks the discontinuous pixels.
Fig. 19. Results for the (a) dRGA, (b) dWCA, (c) line-scanning, (d) LS, and (e) QG methods in the aliasing case. “Max,” “Med,” and “Min” represent specific results with maximal, median, and minimal ${\mathrm{RMSE}}_{\mathrm{m}}$, respectively. “C” represents the congruence results. The last column of each result is the aliasing map, where 1 (white) marks the aliasing pixels.
Fig. 20. Results for the (a) dRGM, (b) dWCM, (c) line-scanning, (d) LS, and (e) QG methods in the mixed case. “Max,” “Med,” and “Min” represent specific results with maximal, median, and minimal ${\mathrm{RMSE}}_{\mathrm{m}}$, respectively. “C” represents the congruence results. The last column of each result is the aliasing-or-discontinuity map (called “A and D”), where 1 (white) marks the aliasing or discontinuous pixels.
Fig. 21. Schematic diagram of pretraining and retraining.
Fig. 22. Loss curves of the pretrained and initialized networks.
| Method | Date | Author | Ref. | Network | Dataset | Loss function |
|---|---|---|---|---|---|---|
| dRG | 2018 | Dardikman and Shaked | 22 | — | — | — |
| | 2018 | Dardikman et al. | 23 | ResNet | RDR | MSE |
| | 2019 | Wang et al. | 24 | Res-UNet | RME | MSE |
| | 2019 | He et al. | 25 | 3D-ResNet | — | — |
| | 2019 | Ryu et al. | 26 | RNN | — | Total variation + error variation |
| | 2020 | Dardikman-Yoffe et al. | 27 | Res-UNet | RDR | MSE |
| | 2020 | Qin et al. | 28 | Res-UNet | RME | MAE |
| | 2021 | Perera and De Silva | 29 | LSTM | GFS | Total variation + error variation |
| | 2021 | Park et al. | 30 | GAN | RDR | MAE + adversarial loss |
| | 2021 | Zhou et al. | 31 | UNet | RDR | MAE + residues |
| | 2022 | Xu et al. | 32 | MNet | RME | MAE + MS-SSIM |
| | 2022 | Zhou et al. | 33 | GAN | RDR | MAE + adversarial loss |
| dWC | 2018 | Liang et al. | 34 | — | — | — |
| | 2018 | Spoorthi et al. | 35 | SegNet | GFS | CE |
| | 2019 | Zhang et al. | 36 | UNet | ZPS | CE |
| | 2019 | Zhang et al. | 37 | DeepLabV3+ | ZPS | CE |
| | 2020 | Wu et al. | 38 | FRResUNet | GFS | CE |
| | 2020 | Spoorthi et al. | 39 | DenseUNet | GFS | MAE + residues + CE |
| | 2020 | Zhao et al. | 40 | RAENet | ZPS | CE |
| | 2021 | Zhu et al. | 41 | DeepLabV3+ | ZPS | CE |
| | 2022 | Vengala et al. | 42, 43 | TriNet | GFS | MSE + CE |
| | 2022 | Zhang and Li | 44 | EESANet | GFS | Weighted CE |
| dDN | 2020 | Yan et al. | 45 | ResNet | ZPS | MSE |

Table 1. Summary of the deep-learning-involved phase unwrapping methods. “—” indicates “not available.”
| Dataset | Size | Proportion of $h$ from 10 to 30 | Proportion of $h$ from 30 to 35 | Proportion of $h$ from 35 to 40 |
|---|---|---|---|---|
| Training part of D_RME | 20,000 | 50% | 20% | 30% |
| Testing part of D_RME | 2,000 | 2/3 | 1/6 | 1/6 |
| Training part of D_GFS | 20,000 | 50% | 20% | 30% |
| Testing part of D_GFS | 2,000 | 2/3 | 1/6 | 1/6 |
| Training part of D_ZPS | 20,000 | 50% | 20% | 30% |
| Testing part of D_ZPS | 2,000 | 2/3 | 1/6 | 1/6 |
| D_RDR for testing | 421 | — | — | — |

Table 2. Summary of datasets. “—” indicates “not available.”
| Metric | Network | D_RME | D_GFS | D_ZPS | D_RDR |
|---|---|---|---|---|---|
| ${\mathrm{RMSE}}_{\mathrm{m}}$ | RMENet | 0.0910 | 0.0982 | 0.1336 | 0.1103 |
| | GFSNet | 0.2263 | 0.0985 | 0.1133 | 0.1184 |
| | ZPSNet | 2.5148 | 0.4221 | 0.0821 | 0.8245 |
| ${\mathrm{RMSE}}_{\mathrm{sd}}$ | RMENet | 0.0507 | 0.1037 | 0.2320 | 0.1003 |
| | GFSNet | 0.4571 | 0.0234 | 0.1077 | 0.1557 |
| | ZPSNet | 2.8249 | 0.6252 | 0.0220 | 1.1405 |
| PFS | RMENet | 0.0010 | 0.0085 | 0.1270 | 0.0594 |
| | GFSNet | 0.1485 | 0.0020 | 0.0560 | 0.0333 |
| | ZPSNet | 0.6525 | 0.4075 | 0.0010 | 0.4679 |

Table 3. ${\mathrm{RMSE}}_{\mathrm{m}}$, ${\mathrm{RMSE}}_{\mathrm{sd}}$, and PFS of the phase unwrapping results of RMENet, GFSNet, and ZPSNet.
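The ${\mathrm{RMSE}}_{\mathrm{m}}$ and ${\mathrm{RMSE}}_{\mathrm{sd}}$ entries in these tables are the mean and standard deviation of the per-sample RMSEs over a test set. A minimal sketch of that aggregation; the `[samples, H, W]` array layout is an assumption for illustration:

```python
import numpy as np

def rmse_stats(psi_hat, psi_gt):
    """Aggregate per-sample RMSEs over a test set.

    `psi_hat` and `psi_gt` are stacks of unwrapped results and ground truths
    with shape [samples, H, W] (an assumed layout). Returns RMSE_m (mean of
    the per-sample RMSEs) and RMSE_sd (their standard deviation).
    """
    rmse = np.sqrt(np.mean((psi_hat - psi_gt) ** 2, axis=(1, 2)))  # per sample
    return rmse.mean(), rmse.std()

# toy test set: 10 samples, each with a constant 0.1 rad error
gt = np.zeros((10, 8, 8))
rmse_m, rmse_sd = rmse_stats(gt + 0.1, gt)   # -> (0.1, 0.0)
```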
| Cases | Datasets | Networks | Loss functions |
|---|---|---|---|
| Ideal case (Sec. 4.2) | $\{\phi ,\psi \}$ | dRGI | MAE |
| | $\{\phi ,k\}$ | dWCI | CE + MAE |
| Noisy case (Sec. 4.3) | $\{{\phi}_{n},\psi \}$ | dRGN | MAE |
| | $\{{\phi}_{n},k\}$ | dWCN | CE + MAE |
| | {${R}_{n}$ and ${I}_{n}$, $R$ and $I$} | dDNN | MAE |
| Discontinuous case (Sec. 4.4) | $\{{\phi}_{d},{\psi}_{d}\}$ | dRGD | MAE |
| | $\{{\phi}_{d},{k}_{d}\}$ | dWCD | CE + MAE |
| Aliasing case (Sec. 4.5) | $\{{\phi}_{a},{\psi}_{a}\}$ | dRGA | MAE |
| | $\{{\phi}_{a},{k}_{a}\}$ | dWCA | CE + MAE |
| Mixed case (Sec. 4.6) | $\{{\phi}_{m},{\psi}_{m}\}$ | dRGM | MAE |
| | $\{{\phi}_{m},{k}_{m}\}$ | dWCM | CE + MAE |

Table 4. Summary of networks and corresponding datasets. The form of each dataset is {Input, GT}. The last letter of each network name indicates the case (“I” for ideal, “N” for noisy, “D” for discontinuous, “A” for aliasing, and “M” for mixed).
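For the dWC datasets in Table 4, the GT is the integer wrap-count map $k$ relating wrapped and absolute phase via $\psi = \phi + 2\pi k$. A minimal sketch of building such a $\{\phi, k\}$ pair from a known absolute phase:

```python
import numpy as np

def make_wc_pair(psi):
    """Build a {wrapped phase, wrap count} pair from an absolute phase psi."""
    phi = np.angle(np.exp(1j * psi))                       # wrap into (-pi, pi]
    k = np.round((psi - phi) / (2.0 * np.pi)).astype(int)  # integer wrap counts
    return phi, k

psi = np.linspace(0.0, 40.0, 256).reshape(16, 16)  # toy absolute phase
phi, k = make_wc_pair(psi)
# psi is exactly recoverable as phi + 2*pi*k
```

Because $k$ is integer-valued, the dWC networks treat it as a per-pixel classification target (hence the CE term in the loss), and the recovered $\psi$ is congruent with $\phi$ by construction.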
| Metric | dRGI | dRGIC | dWCI |
|---|---|---|---|
| ${\mathrm{RMSE}}_{\mathrm{m}}$ | 0.0989 | 0.0005 | 0.0008 |
| ${\mathrm{RMSE}}_{\mathrm{sd}}$ | 0.0515 | 0.0157 | 0.0251 |
| PFS | 0.0015 | 0.0015 | 0.0025 |
| PIP | 0.0044 | 0.0044 | 0.0054 |

Table 5. ${\mathrm{RMSE}}_{\mathrm{m}}$, ${\mathrm{RMSE}}_{\mathrm{sd}}$, PFS, and PIP of the deep-learning-involved methods in the ideal case. “C” represents the congruence results.
| Metric | dRGN (GT) | dRGNC (GT1) | dWCN (GT1) | dDNN (GT) | dDNNC (GT1) |
|---|---|---|---|---|---|
| ${\mathrm{RMSE}}_{\mathrm{m}}$ | 0.1367 | 0.0285 | 0.0435 | 0.0883 | 0.0229 |
| ${\mathrm{RMSE}}_{\mathrm{sd}}$ | 0.1154 | 0.1148 | 0.1197 | 0.2915 | 0.3056 |
| PFS | 0.2525 | 0.2525 | 0.2840 | 0.1976 | 0.1976 |
| PIP | 0.0013 | 0.0013 | 0.0014 | 0.0108 | 0.0088 |

Table 6. ${\mathrm{RMSE}}_{\mathrm{m}}$, ${\mathrm{RMSE}}_{\mathrm{sd}}$, PFS, and PIP of the deep-learning-involved methods in the noisy case. “GT” represents the pure GT (pure absolute phase), while “GT1” represents the noisy GT (noisy absolute phase). “C” represents the congruence results.
| Metric | dRGI | dRGD | dRGDC | dWCI | dWCD | Line-scanning | LS | QG |
|---|---|---|---|---|---|---|---|---|
| ${\mathrm{RMSE}}_{\mathrm{m}}$ | 2.0230 | 0.1230 | 0.0261 | 1.2209 | 0.0219 | 3.8054 | 1.3655 | 2.4204 |
| ${\mathrm{RMSE}}_{\mathrm{sd}}$ | 1.7817 | 0.1636 | 0.1827 | 1.3777 | 0.1543 | 3.7172 | 1.0408 | 2.5014 |
| PFS | 0.8120 | 0.0770 | 0.0770 | 0.7385 | 0.0785 | 0.9405 | 0.7120 | 0.8565 |
| PIP | 0.2407 | 0.0112 | 0.0112 | 0.1128 | 0.0077 | 0.4400 | 0.1073 | 0.2789 |

Table 7. ${\mathrm{RMSE}}_{\mathrm{m}}$, ${\mathrm{RMSE}}_{\mathrm{sd}}$, PFS, and PIP of the deep-learning-involved and traditional methods in the discontinuous case. “C” represents the congruence results.
| Metric | dRGA | dRGAC | dWCA | Line-scanning | LS | QG |
|---|---|---|---|---|---|---|
| ${\mathrm{RMSE}}_{\mathrm{m}}$ | 0.1958 | 0.0078 | 0.0107 | 40.5128 | 6.7199 | 39.8846 |
| ${\mathrm{RMSE}}_{\mathrm{sd}}$ | 0.1390 | 0.1503 | 0.1612 | 21.0695 | 3.1294 | 23.0389 |
| PFS | 0.0075 | 0.0075 | 0.0120 | 0.9820 | 0.9895 | 0.9895 |
| PIP | 0.0765 | 0.0765 | 0.0467 | 0.9102 | 0.5705 | 0.8369 |

Table 8. ${\mathrm{RMSE}}_{\mathrm{m}}$, ${\mathrm{RMSE}}_{\mathrm{sd}}$, PFS, and PIP of the deep-learning-involved and traditional methods in the aliasing case. “C” represents the congruence results.
| Metric | dRGM | dRGMC | dWCM | Line-scanning | LS | QG |
|---|---|---|---|---|---|---|
| ${\mathrm{RMSE}}_{\mathrm{m}}$ | 0.2362 | 0.1266 | 0.2206 | 38.4389 | 10.8350 | 39.4653 |
| ${\mathrm{RMSE}}_{\mathrm{sd}}$ | 0.3101 | 0.3790 | 0.4618 | 21.0695 | 3.6269 | 18.1084 |
| PFS | 0.3740 | 0.3740 | 0.4810 | 1.0000 | 1.0000 | 1.0000 |
| PIP | 0.0106 | 0.0106 | 0.0107 | 0.9569 | 0.7600 | 0.9107 |

Table 9. ${\mathrm{RMSE}}_{\mathrm{m}}$, ${\mathrm{RMSE}}_{\mathrm{sd}}$, PFS, and PIP of the deep-learning-involved and traditional methods in the mixed case. “C” represents the congruence results.
| Cases | dRG | dWC | dDN | Line-scanning | LS | QG | WFT-QG |
|---|---|---|---|---|---|---|---|
| Ideal | ✓ | ✓ | ✓ | ✓✓ | ✓ | ✓ | — |
| Slight noise | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Moderate noise | ✓✓ | ✓✓ | ✓ | ✗ | ✗ | ✗ | ✓✓ |
| Severe noise | ✓✓ | ✓✓ | ✓ | ✗ | ✗ | ✗ | ✓✓ |
| Discontinuity | ✓✓ | ✓✓ | — | ✗ | ✗ | ✗ | — |
| Aliasing | ✓✓ | ✓✓ | — | ✗ | ✗ | ✗ | — |
| Mixed | ✓✓ | ✓✓ | — | ✗ | ✗ | ✗ | — |

Table 10. Performance statistics in the ideal, noisy, discontinuous, aliasing, and mixed cases. “✓” represents “capable.” “✓✓” represents “best and recommended.” “✗” represents “incapable.” “—” indicates “not applicable.”