• Photonics Research
  • Vol. 11, Issue 1, 1 (2023)

Self-supervised deep-learning two-photon microscopy
Yuezhi He1,2, Jing Yao1,2, Lina Liu1,2, Yufeng Gao1,2, Jia Yu1,2, Shiwei Ye1,2, Hui Li1,2, and Wei Zheng1,2,*
Author Affiliations
  • 1Research Center for Biomedical Optics and Molecular Imaging, Shenzhen Key Laboratory for Molecular Imaging, Guangdong Provincial Key Laboratory of Biomedical Optical Imaging Technology, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
  • 2CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
    DOI: 10.1364/PRJ.469231
    Yuezhi He, Jing Yao, Lina Liu, Yufeng Gao, Jia Yu, Shiwei Ye, Hui Li, Wei Zheng. Self-supervised deep-learning two-photon microscopy[J]. Photonics Research, 2023, 11(1): 1
    Fig. 1. Overview of proposed framework. The input image is first cropped and augmented into patches. The downsampled version of the patches is then used as the input for training, where the original patches serve as the target output. At the test phase, the input image is fed to the trained network to produce high-resolution output.
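The training-pair construction described in Fig. 1 (crop and augment the input image into patches, downsample each patch to form the network input, keep the original patch as the target) can be sketched as follows. This is an illustrative reconstruction, not the paper's exact pipeline: the patch size, flip/rotation augmentations, and the average-pooling downsampler are assumptions.

```python
import numpy as np

def make_training_pairs(image, patch=64, factor=2):
    """Build self-supervised (low-res input, high-res target) pairs.

    Crops `image` into non-overlapping patches, augments each patch by
    flips and a 90-degree rotation, and pairs an average-pooled
    downsampled copy (input) with the original patch (target).
    Patch size and downsampling method are illustrative assumptions.
    """
    h, w = image.shape
    pairs = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = image[y:y + patch, x:x + patch]
            for aug in (p, np.fliplr(p), np.flipud(p), np.rot90(p)):
                # average-pool by `factor` to simulate low-resolution input
                lr = aug.reshape(patch // factor, factor,
                                 patch // factor, factor).mean(axis=(1, 3))
                pairs.append((lr, aug))  # (low-res input, high-res target)
    return pairs
```

At test time the trained network is simply applied to the full-resolution input, so no paired ground truth is ever required.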
    Fig. 2. Zoomed-in images of neurons and their line profiles across the white dashed line. (a) Lateral images. (b) Axial images.
    Fig. 3. Evaluation of four super-resolution models. Lateral and axial images of low-resolution input, original reference, and network outputs of neuron cells. Our proposed model shows low error. (a) Representative lateral images inferred from low-resolution input. The absolute error images with respect to the original are shown below. (b) Representative axial images inferred from low-resolution input.
    Fig. 4. PSNR and SSIM evaluation between the four models.
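The PSNR and SSIM metrics used in Fig. 4 can be computed as below. This is a minimal numpy sketch: the PSNR formula is standard, while `global_ssim` is a simplified single-window (whole-image) variant of SSIM rather than the usual sliding-window implementation used in evaluation toolkits.

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref - test) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=1.0):
    """Simplified SSIM computed over the whole image (no sliding window)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Higher PSNR and SSIM values indicate closer agreement between the network output and the original high-resolution reference.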
    Fig. 5. Large image inference using a high-resolution input. A high-resolution image (1024×1024 px) can still benefit from the network (details shown on the right-hand side of the inset). The downsampled low-resolution version of the same input, with its network-enhanced image, is shown on the left-hand side of the inset for comparison.
    Fig. 6. Volumetric image inference using a high-resolution input. Top left: input lateral slice; top right: corresponding output slice; bottom left: input axial slice; bottom right: corresponding output slice.
    Fig. 7. Large image inference of Self-Vision (image brightness adjusted for visualization). Despite being trained on a small FOV (indicated by yellow border), Self-Vision can infer the entire FOV for the system, saving both training and acquisition time.
    Fig. 8. Network performance improves as the training FOV increases. At the top left corner, the boxes with small, medium, and large sizes indicate different input training volumes (not drawn to scale). The plot at the top right shows that network performance improves as the voxel number increases. The bottom images [(c)–(j) lateral, (k)–(p) axial] illustrate the change of the output when the training FOV increases from a small volume to a large volume.
    Fig. 9. Architecture of Self-Vision. Some grouped convolution layers were omitted in the figure for simplicity.
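The grouped convolution layers mentioned in Fig. 9 split the input channels into independent groups, each convolved with its own set of filters, which reduces parameters and computation relative to a full convolution. A loop-based numpy sketch of what a single grouped convolution layer computes (valid mode, no padding or stride; layer shapes are illustrative, not the network's actual configuration):

```python
import numpy as np

def grouped_conv2d(x, w, groups):
    """Valid-mode grouped 2D convolution (cross-correlation).

    x: input of shape (C_in, H, W)
    w: weights of shape (C_out, C_in // groups, k, k)
    Each of the `groups` filter groups sees only its own slice of
    input channels, unlike a full convolution which sees all of them.
    """
    c_in, h, wd = x.shape
    c_out, cg, k, _ = w.shape
    assert c_in % groups == 0 and c_out % groups == 0
    assert cg == c_in // groups
    og = c_out // groups                      # output channels per group
    oh, ow = h - k + 1, wd - k + 1
    out = np.zeros((c_out, oh, ow))
    for g in range(groups):
        xs = x[g * cg:(g + 1) * cg]           # this group's input channels
        ws = w[g * og:(g + 1) * og]           # this group's filters
        for o, kern in enumerate(ws):
            for i in range(oh):
                for j in range(ow):
                    out[g * og + o, i, j] = np.sum(xs[:, i:i + k, j:j + k] * kern)
    return out
```

With `groups` equal to the channel count this reduces to a depthwise convolution; with `groups=1` it is an ordinary convolution.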
    | Methods | Modality      | Training Image Size | Training Data Size | Training Time | 2D Inference (1024×1024) | 3D Inference (1024×1024×152) |
    |---------|---------------|---------------------|--------------------|---------------|--------------------------|------------------------------|
    | DFCAN   | Nikon A1R-MP  | 1024×1024×152×4     | 0.6 GB             | 2.5 h         | 0.2 s                    | N/A                          |
    | PSSR    |               |                     |                    | 1.2 h         | 0.6 s                    | N/A                          |
    | DSP-Net |               |                     |                    | 11.2 h        | N/A                      | 120 s                        |
    | Ours    |               | 256×256×38          | N/A                | 6 min         | 0.5 s                    | 62 s                         |
    Table 1. Summary of Parameters Related to Network Training for Performance Comparison