• Photonics Research
  • Vol. 9, Issue 12, 2501 (2021)
Chen Bai1,†, Tong Peng1,2,†, Junwei Min1, Runze Li1, Yuan Zhou1, and Baoli Yao1,3,*
Author Affiliations
  • 1State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China
  • 2Xi’an Jiaotong University, Xi’an 710049, China
  • 3Pilot National Laboratory for Marine Science and Technology (Qingdao), Qingdao 266200, China
    DOI: 10.1364/PRJ.441054
    Chen Bai, Tong Peng, Junwei Min, Runze Li, Yuan Zhou, Baoli Yao. Dual-wavelength in-line digital holography with untrained deep neural networks[J]. Photonics Research, 2021, 9(12): 2501

    Abstract

    Dual-wavelength in-line digital holography (DIDH) is a popular method for non-contact, high-accuracy quantitative phase imaging. Two technical challenges in the reconstruction are suppressing the amplified noise and the twin image, which originate from the phase-difference operation and the phase-conjugated wavefront, respectively. In contrast to conventional methods, deep neural networks have become powerful tools for estimating phase information in DIDH owing to their ability to suppress noise or remove the twin image. However, most current deep-learning-based methods rely on supervised learning and large sets of training instances, which limits their applicability in practical imaging settings. In this paper, a new DIDH network (DIDH-Net) is proposed that encapsulates prior image information and the physical imaging process in an untrained deep neural network. By automatically adjusting its weights, the DIDH-Net effectively suppresses the amplified noise and the twin image of DIDH simultaneously. The obtained results demonstrate that the proposed method achieves robust phase reconstruction and is well suited to improving the imaging performance of DIDH.
    $\Phi_{\rm est}^m = \Phi_{\rm pure}^m + 2\pi\varepsilon^m = \varphi^m + 2\pi c^m + 2\pi\varepsilon^m,$

    $L = \begin{cases} \Lambda\,\dfrac{\Phi_{\rm est}^1(x,y) - \Phi_{\rm est}^2(x,y)}{2\pi}, & \Phi_{\rm est}^1(x,y) - \Phi_{\rm est}^2(x,y) \ge 0, \\[1ex] \Lambda\,\dfrac{\Phi_{\rm est}^1(x,y) - \Phi_{\rm est}^2(x,y) + 2\pi}{2\pi}, & \Phi_{\rm est}^1(x,y) - \Phi_{\rm est}^2(x,y) < 0, \end{cases}$
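    As a concrete illustration of the synthetic-wavelength combination above, the following NumPy sketch maps two wrapped phase estimates to the extended map L. The function and argument names (synthetic_wavelength_map, phi1, phi2, lam1, lam2) are illustrative, and the usual synthetic wavelength Λ = λ1·λ2/|λ1 − λ2| is assumed rather than quoted from the paper.

```python
import numpy as np

def synthetic_wavelength_map(phi1, phi2, lam1, lam2):
    """Combine two wrapped phase maps (radians) into the extended map L.

    phi1, phi2 : wrapped phase maps at wavelengths lam1 and lam2 (illustrative names).
    Assumes the usual synthetic wavelength Lambda = lam1 * lam2 / |lam1 - lam2|.
    """
    Lam = lam1 * lam2 / abs(lam1 - lam2)                 # synthetic wavelength
    dphi = phi1 - phi2                                   # wrapped phase difference
    dphi = np.where(dphi < 0, dphi + 2.0 * np.pi, dphi)  # add 2*pi where negative
    return Lam * dphi / (2.0 * np.pi)                    # extended map L
```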

    $\begin{bmatrix} I_{\rm cap}^1 \\ I_{\rm cap}^2 \end{bmatrix} = \begin{bmatrix} a_{1r}^2 + a_{1g}^2 + a_{1b}^2 & a_{1r}a_{2r} + a_{1g}a_{2g} + a_{1b}a_{2b} \\ a_{1r}a_{2r} + a_{1g}a_{2g} + a_{1b}a_{2b} & a_{2r}^2 + a_{2g}^2 + a_{2b}^2 \end{bmatrix}^{-1} \begin{bmatrix} a_{1r}I_{\rm cap}^{HR} + a_{1g}I_{\rm cap}^{HG} + a_{1b}I_{\rm cap}^{HB} \\ a_{2r}I_{\rm cap}^{HR} + a_{2g}I_{\rm cap}^{HG} + a_{2b}I_{\rm cap}^{HB} \end{bmatrix},$
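    The 2×2 system above is the normal-equation solution of an overdetermined least-squares problem. A minimal sketch, assuming the crosstalk coefficients a_{m,c} are arranged in a 3×2 matrix A whose columns index the two wavelengths (an assumed layout, not the paper's notation):

```python
import numpy as np

def decouple_holograms(I_rgb, A):
    """Least-squares separation of the two single-wavelength holograms I_cap^1, I_cap^2.

    I_rgb : (3, H, W) array with the captured R, G, B channels
            (I_cap^{HR}, I_cap^{HG}, I_cap^{HB}).
    A     : (3, 2) crosstalk matrix; A[c, m] holds the response of channel c
            to wavelength m (illustrative layout).
    """
    _, H, W = I_rgb.shape
    b = I_rgb.reshape(3, -1)                     # stack pixels as columns
    # lstsq solves min ||A x - b||^2, i.e. x = (A^T A)^{-1} A^T b as in the equation above.
    I12, *_ = np.linalg.lstsq(A, b, rcond=None)
    return I12.reshape(2, H, W)                  # [I_cap^1, I_cap^2]
```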

    $L = \begin{cases} \Lambda\,\dfrac{\Phi_{\rm pure}^1(x,y) - \Phi_{\rm pure}^2(x,y)}{2\pi} + \Lambda(\varepsilon^1 + \varepsilon^2), & \Phi_{\rm pure}^1(x,y) - \Phi_{\rm pure}^2(x,y) \ge 0, \\[1ex] \Lambda\,\dfrac{\Phi_{\rm pure}^1(x,y) - \Phi_{\rm pure}^2(x,y) + 2\pi}{2\pi} + \Lambda(\varepsilon^1 + \varepsilon^2), & \Phi_{\rm pure}^1(x,y) - \Phi_{\rm pure}^2(x,y) < 0. \end{cases}$

    $I_{\rm cap}^m = H_{\rm DIDH}^m\{\Phi_{\rm pure}^m\} + E_{\rm cap}^m = |U_{\rm obj}^m + R_{\rm ref}^m|^2 + E_{\rm cap}^m = [U_{\rm obj}^m]^* R_{\rm ref}^m + U_{\rm obj}^m [R_{\rm ref}^m]^* + |U_{\rm obj}^m|^2 + |R_{\rm ref}^m|^2 + E_{\rm cap}^m,$

    $U_{\rm obj}^m(z=d) = \iint \mathcal{F}[U_{\rm obj}^m(z=0)]\, F^{m,z} \exp[i2\pi(f_x x + f_y y)]\, \mathrm{d}f_x\, \mathrm{d}f_y = G^{m,z}[\Phi_{\rm pure}^m(z=0)],$
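    A standard angular spectrum implementation of this propagation might look as follows; the sampling parameters (square grid, pixel pitch dx) and the suppression of evanescent components are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, d, dx):
    """Propagate a complex field u0 from z = 0 to z = d with the angular spectrum method.

    u0         : (N, N) complex field at the object plane.
    wavelength : illumination wavelength (same length unit as d and dx).
    d          : propagation distance.
    dx         : pixel pitch.
    """
    N = u0.shape[0]
    fx = np.fft.fftfreq(N, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Transfer function F^{m,z}; evanescent components (arg < 0) are set to zero.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    Hz = np.exp(1j * 2.0 * np.pi * d / wavelength * np.sqrt(np.maximum(arg, 0.0)))
    Hz[arg < 0] = 0.0
    return np.fft.ifft2(np.fft.fft2(u0) * Hz)
```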

    $I_{\rm cap}^m(z=d) \approx 2\,\mathrm{Re}\{U_{\rm obj}^m(z=d)\} + E_{\rm cap}^m(z=d) = 2\,\mathrm{Re}\{G^{m,z}[\Phi_{\rm pure}^m(z=0)]\} + E_{\rm cap}^m(z=d) = H_{\rm DIDH}^{m,z}\{\Phi_{\rm pure}^m(z=0)\}.$
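    Combining the two previous equations, a linearized forward operator consistent with this model can be sketched as below. Representing the object wavefront as a unit-amplitude exp(iΦ) is an assumption made for illustration, and angular_spectrum_propagate refers to the sketch given above.

```python
import numpy as np

def forward_didh(phi, wavelength, d, dx):
    """Sketch of the linearized hologram model: I(z = d) ~ 2 Re{G^{m,z}[Phi(z = 0)]}.

    phi : (N, N) phase map at the object plane (z = 0).
    The unit-amplitude wavefront exp(1j * phi) is an illustrative assumption;
    angular_spectrum_propagate() is defined in the previous sketch.
    """
    u0 = np.exp(1j * phi)                                  # object wavefront at z = 0
    ud = angular_spectrum_propagate(u0, wavelength, d, dx)
    return 2.0 * np.real(ud)                               # noise-free hologram model
```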

    $\Phi_{\rm est}^m(z=0) = \arg\min_{\Phi} \left\| H_{\rm DIDH}^{m,z}\{\Phi_{\rm pure}^m(z=0)\} - I_{\rm cap}^m(z=d) \right\|_2^2 + r[\Phi_{\rm pure}^m(z=0)],$

    $R_{\rm typ}^m = \arg\min_{\theta \in \Theta} \left\| R_{\rm typ}^m[I_{\rm cap}^m(z=d)] - \Phi_{\rm pure}^m(z=0) \right\|^2, \quad [\Phi_{\rm pure}^m, I_{\rm cap}^m] \in S_{\rm T}^m.$
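    For contrast with the untrained objective that follows, a conventional supervised fit of R_typ^m over paired data S_T^m could be sketched like this; PyTorch is assumed, and the loader of (hologram, ground-truth phase) pairs is hypothetical.

```python
import torch

def train_supervised(net, loader, n_epochs=50, lr=1e-4):
    """Supervised training of R_typ^m on paired data (sketch).

    net    : image-to-image network mapping a hologram to a phase map.
    loader : iterable of (I_cap, phi_pure) tensor pairs drawn from S_T^m
             (a hypothetical training set of simulated or measured pairs).
    """
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(n_epochs):
        for I_cap, phi_pure in loader:
            opt.zero_grad()
            loss = torch.mean((net(I_cap) - phi_pure) ** 2)  # data-fidelity term above
            loss.backward()
            opt.step()
    return net
```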

    $R_{\rm DIDH}^m = \arg\min_{\theta \in \Theta} \left\| H_{\rm DIDH}^{m,z}\{R_{\rm DIDH}^m[I_{\rm cap}^m(z=d)]\} - I_{\rm cap}^m(z=d) \right\|^2.$
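    The self-supervised objective above requires only the single measured hologram and a differentiable forward model. A minimal deep-image-prior-style optimization loop, with the network architecture and optimizer settings taken as assumptions rather than the paper's exact configuration, might read:

```python
import torch

def fit_untrained_network(net, I_cap, forward_op, n_iters=2000, lr=1e-3):
    """Fit an untrained network R_DIDH^m against a single hologram (sketch).

    net        : randomly initialized image-to-image network (architecture assumed).
    I_cap      : (1, 1, H, W) measured hologram tensor I_cap^m(z = d).
    forward_op : differentiable model of H_DIDH^{m,z} mapping a phase map to a hologram.
    """
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        phi_est = net(I_cap)                                   # current estimate R[I_cap]
        loss = torch.mean((forward_op(phi_est) - I_cap) ** 2)  # hologram-domain loss above
        loss.backward()
        opt.step()
    with torch.no_grad():
        return net(I_cap)                                      # Phi_est^m(z = 0)
```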

    $\Phi_{\rm est}^m(z=0) = R_{\rm DIDH}^m[I_{\rm cap}^m(z=d)].$

    $I_{\rm cap}^{\rm RGB}(z=d) = I_{\rm pure}^{\rm RGB}(z=d) + E_{\rm cap}^{\rm RGB}(z=d) = \xi Z + B,$
