• Advanced Photonics Nexus
  • Vol. 3, Issue 5, 056003 (2024)
Ruiqing Sun1, Delong Yang1, Shaohui Zhang1,*, and Qun Hao1,2,*
Author Affiliations
  • 1Beijing Institute of Technology, School of Optics and Photonics, Beijing, China
  • 2Changchun University of Science and Technology, Changchun, China
    DOI: 10.1117/1.APN.3.5.056003
    Ruiqing Sun, Delong Yang, Shaohui Zhang, Qun Hao, "Hybrid deep-learning and physics-based neural network for programmable illumination computational microscopy," Adv. Photon. Nexus 3, 056003 (2024)
    Fig. 1. The imaging system we used and how our framework works. (a) The system used in our experiments to verify the effectiveness of the framework. (b) Overview of the proposed framework, where PL refers to the physical layer in the physical model. (c) Reconstruction results from 10 LR target images captured using a 0.13-NA objective.
    Fig. 2. The details of our framework. We describe the first step of the framework in (a), the second step in (c), and the last step in (d). We show the noise introduced by the physical model (PM) and an example output of the DL model in (b).
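    The sketch below is only a rough illustration of how such a hybrid pipeline can be wired together, assuming the two reconstruction slots labeled (DL, PM) in Table 1 correspond to a data-driven deep-learning stage followed by a physical-model refinement stage whose physical layers are differentiable. The network architecture, optimizer settings, and the user-supplied forward operator forward_op are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (PyTorch) of a hybrid pipeline: a data-driven model
    # produces an initial HR estimate, which then initializes a physical
    # model whose only trainable parameter is the HR object itself.
    # Architecture, losses, and `forward_op` are illustrative assumptions.
    import torch
    import torch.nn as nn

    class SmallDLModel(nn.Module):
        """Placeholder data-driven model: LR image stack -> HR amplitude."""
        def __init__(self, n_leds=10, upscale=2):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(n_leds, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, upscale * upscale, 3, padding=1),
                nn.PixelShuffle(upscale),  # learned upsampling to the HR grid
            )

        def forward(self, lr_stack):
            return self.net(lr_stack)

    class PhysicalModel(nn.Module):
        """Placeholder physical model: the HR object is the trainable
        parameter; the physical layers (forward_op) simulate the LR captures."""
        def __init__(self, hr_init):
            super().__init__()
            self.hr = nn.Parameter(hr_init.clone())

        def forward(self, forward_op):
            return forward_op(self.hr)  # simulated LR image stack

    def run_framework(lr_stack, forward_op, dl_model, n_iters=200, step=1e-2):
        # Stage 1: data-driven initial estimate from the captured LR images.
        with torch.no_grad():
            hr_init = dl_model(lr_stack.unsqueeze(0)).squeeze(0)
        # Stage 2: model-driven refinement, fitting the simulated LR stack
        # to the measured one.
        pm = PhysicalModel(hr_init)
        opt = torch.optim.Adam(pm.parameters(), lr=step)
        for _ in range(n_iters):
            opt.zero_grad()
            loss = torch.mean((pm(forward_op) - lr_stack) ** 2)
            loss.backward()
            opt.step()
        return pm.hr.detach()  # refined HR estimate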
    Fig. 3. The overview of our proposed data augmentation methods. (a1) The ground truth of the resolution target. (a2) The background we extract. (a3) The ROI of the resolution target. (b) An example of simple samples. (c) An example of complex samples.
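    As a rough illustration of the augmentation idea in Fig. 3, the sketch below composites randomly transformed ROIs cut from the resolution-target ground truth onto the extracted background; pasting a single ROI stands in for a "simple" sample and several ROIs for a "complex" one. Crop sizes, transforms, and the simple/complex rule are assumptions for illustration only.

    # Minimal sketch of ROI-plus-background data augmentation (NumPy).
    # The compositing rules below are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def random_roi(target_gt, size=128):
        """Crop a random region of interest from the resolution-target GT."""
        h, w = target_gt.shape
        y = rng.integers(0, h - size)
        x = rng.integers(0, w - size)
        return target_gt[y:y + size, x:x + size]

    def synthesize_sample(target_gt, background, n_rois=1, size=128):
        """Paste n_rois random ROIs onto the background; n_rois=1 mimics a
        'simple' sample, larger n_rois with rotations a 'complex' one."""
        sample = background.copy()
        for _ in range(n_rois):
            roi = np.rot90(random_roi(target_gt, size), k=rng.integers(0, 4))
            y = rng.integers(0, sample.shape[0] - size)
            x = rng.integers(0, sample.shape[1] - size)
            sample[y:y + size, x:x + size] = roi  # overwrite background patch
        return sample

    # Usage: simple = synthesize_sample(gt, bg, n_rois=1)
    #        complex_ = synthesize_sample(gt, bg, n_rois=5)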
    Fig. 4. The illumination model we generated. The gray circles represent the corresponding sample spectrum range when different LED lights are on.
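    Fig. 4 reflects the standard programmable-illumination picture: each LED illuminates the sample as a tilted plane wave, so the objective pupil (a circle of radius NA/λ in the frequency domain) passes a different circular region of the sample spectrum for each LED. A minimal simulation of one LR capture under this model is sketched below; the wavelength, pixel size, and patch size are illustrative assumptions, not the paper's values.

    # Sketch of a single-LED forward model: cut the spectrum patch centred on
    # the illumination wavevector, mask it with the pupil, transform back.
    import numpy as np

    def lr_image_for_led(obj_hr, kx, ky, na=0.13, wavelength=0.532e-6,
                         hr_pixel=0.25e-6, lr_size=128):
        """Simulate one LR intensity image for an LED whose illumination
        wavevector is (kx, ky), given the complex HR object obj_hr."""
        n = obj_hr.shape[0]
        spectrum = np.fft.fftshift(np.fft.fft2(obj_hr))
        df = 1.0 / (n * hr_pixel)                 # frequency step (1/m)
        cy = n // 2 + int(round(ky / df))         # patch centre (pixels)
        cx = n // 2 + int(round(kx / df))
        half = lr_size // 2
        patch = spectrum[cy - half:cy + half, cx - half:cx + half]
        fx = (np.arange(lr_size) - half) * df
        FX, FY = np.meshgrid(fx, fx, indexing="ij")
        pupil = (FX ** 2 + FY ** 2) <= (na / wavelength) ** 2
        field = np.fft.ifft2(np.fft.ifftshift(patch * pupil))
        return np.abs(field) ** 2                 # intensity on the LR grid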
    Fig. 5. Experimental results on the USAF chart. (a) The captured image illuminated by the middle LED. (b) The low-resolution area without reconstruction. (c) The reconstruction result of the DL model trained on the simple data set. (d) The reconstruction result of the DL model trained on the complex data set. (e) The reconstruction result of the physical model (PM) initialized with the image captured under middle-LED illumination. (f) The reconstruction result of the PM initialized with the output of the DL model trained on the complex data set. (g) The final reconstruction result of our framework. (h) The ground truth (GT) reconstructed from 121 LR images captured sequentially. (i)–(m) Detailed outputs from different methods.
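    Panel (h) is reconstructed from 121 sequentially captured LR images. The loop below sketches how such a reconstruction can be obtained with a plain alternating-projection update in the spirit of Fourier ptychography; it reuses the pupil and spectrum-patch conventions of the previous sketch and is not the authors' exact algorithm.

    # Sketch of an alternating-projection reconstruction from a sequence of
    # LR images (e.g., 121 captures). Parameters reuse the assumptions above.
    import numpy as np

    def reconstruct(lr_images, k_vectors, hr_size, lr_size, na, wavelength,
                    hr_pixel, n_iters=20):
        df = 1.0 / (hr_size * hr_pixel)           # frequency step (1/m)
        half = lr_size // 2
        fx = (np.arange(lr_size) - half) * df
        FX, FY = np.meshgrid(fx, fx, indexing="ij")
        pupil = (FX ** 2 + FY ** 2) <= (na / wavelength) ** 2
        hr_spectrum = np.zeros((hr_size, hr_size), dtype=complex)
        for _ in range(n_iters):
            for img, (kx, ky) in zip(lr_images, k_vectors):
                cy = hr_size // 2 + int(round(ky / df))
                cx = hr_size // 2 + int(round(kx / df))
                # `patch` is a view, so writing into it updates hr_spectrum.
                patch = hr_spectrum[cy - half:cy + half, cx - half:cx + half]
                field = np.fft.ifft2(np.fft.ifftshift(patch * pupil))
                # Keep the current phase, enforce the measured amplitude.
                field = np.sqrt(np.maximum(img, 0)) * np.exp(1j * np.angle(field))
                new_patch = np.fft.fftshift(np.fft.fft2(field))
                patch[pupil] = new_patch[pupil]   # update inside the pupil only
        return np.fft.ifft2(np.fft.ifftshift(hr_spectrum))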
    Fig. 6. (a) The captured image illuminated by the middle LED. (b) The LR image without reconstruction. (c) The reconstruction result of the first DL model. (d) The reconstruction result of the physical model (PM). (e) The final reconstruction result of our framework.
    Fig. 7. Comparison of outputs from the DL model and the physical model (PM). (a) The LR image without reconstruction. (b) Reconstruction results of blank sample areas using the physical model. (c) Reconstruction results of blank sample areas using the data-driven DL model. (d) The final reconstruction result of our framework.
    Method                                        SSIM↑    PSNR↑    NIQE↓    LPIPS↓
    Reconstruction with DL model                  0.532    17.4     50.4     0.195
    Reconstruction with PM                        0.594    22.8     62.2     0.129
    Reconstruction with our framework (DL, DL)    0.587    22.3     70.0     0.124
    Reconstruction with our framework (PM, PM)    0.728    26.2     49.9     0.108
    Reconstruction with our framework (DL, PM)    0.740    26.8     48.8     0.108
    Table 1. Comparison of results from different methods in ablation experiments. Metrics are evaluated on the amplitude; ↑ means higher is better, ↓ means lower is better.
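    For reference, the full-reference metrics in Table 1 can be computed roughly as below, using scikit-image for SSIM/PSNR and the lpips package for LPIPS; NIQE is a no-reference metric whose common implementations are MATLAB's niqe or third-party ports, so it is omitted here. Variable names and value ranges are assumptions.

    # Sketch of computing SSIM, PSNR, and LPIPS on amplitude images in [0, 1].
    import numpy as np
    import torch
    import lpips
    from skimage.metrics import structural_similarity, peak_signal_noise_ratio

    def evaluate(recon_amp: np.ndarray, gt_amp: np.ndarray) -> dict:
        """recon_amp, gt_amp: 2D float arrays scaled to [0, 1]."""
        ssim = structural_similarity(gt_amp, recon_amp, data_range=1.0)
        psnr = peak_signal_noise_ratio(gt_amp, recon_amp, data_range=1.0)

        def to_tensor(a):
            t = torch.from_numpy(a).float()[None, None]  # (1, 1, H, W)
            return t.repeat(1, 3, 1, 1) * 2 - 1          # 3 channels in [-1, 1]

        lpips_fn = lpips.LPIPS(net="alex")               # LPIPS expects [-1, 1]
        lp = lpips_fn(to_tensor(recon_amp), to_tensor(gt_amp)).item()
        return {"SSIM": ssim, "PSNR": psnr, "LPIPS": lp}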