• Photonics Research
  • Vol. 12, Issue 1, 7 (2024)
Ze-Hao Wang1,2,†, Long-Kun Shan1,2,†, Tong-Tian Weng1,2, Tian-Long Chen3, Xiang-Dong Chen1,2,4, Zhang-Yang Wang3, Guang-Can Guo1,2,4, and Fang-Wen Sun1,2,4,*
Author Affiliations
  • 1CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, China
  • 2CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
  • 3University of Texas at Austin, Austin, Texas 78705, USA
  • 4Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
    DOI: 10.1364/PRJ.488310
    Ze-Hao Wang, Long-Kun Shan, Tong-Tian Weng, Tian-Long Chen, Xiang-Dong Chen, Zhang-Yang Wang, Guang-Can Guo, Fang-Wen Sun, "Learning the imaging mechanism directly from optical microscopy observations," Photonics Res. 12, 7 (2024)

    Abstract

    The optical microscopy image plays an important role in scientific research through the direct visualization of the nanoworld, where the imaging mechanism is described as the convolution of the point spread function (PSF) and emitters. Based on a priori knowledge of the PSF or equivalent PSF, it is possible to achieve more precise exploration of the nanoworld. However, directly extracting the PSF from microscopy images remains an outstanding challenge. Here, with the help of self-supervised learning, we propose a physics-informed masked autoencoder (PiMAE) that enables a learnable estimation of the PSF and emitters directly from raw microscopy images. We demonstrate our method on synthetic data and in real-world experiments with significant accuracy and noise robustness. PiMAE outperforms DeepSTORM and the Richardson–Lucy algorithm in synthetic data tasks with average improvements of 19.6% and 50.7% (35 tasks), respectively, as measured by the normalized root mean square error (NRMSE) metric. This is achieved without prior knowledge of the PSF, in contrast to the supervised approach used by DeepSTORM and the known-PSF assumption in the Richardson–Lucy algorithm. Our method, PiMAE, provides a feasible scheme for recovering the hidden imaging mechanism in optical microscopy and has the potential to learn hidden mechanisms in many more systems.

    1. INTRODUCTION

    Optical microscopy is of great importance in scientific research for observing the nanoworld. The common view is that the Abbe diffraction limit describes the lower bound of the spot size and thus limits the microscopic resolution. However, recent studies have demonstrated that by designing and measuring the point spread function (PSF) or equivalent PSF of a microscope, it is possible to achieve subdiffraction-limit localization of emitters. Techniques such as photoactivated localization microscopy [1] and stochastic optical reconstruction microscopy [2] attain superresolution molecular localization through selective excitation and reconstruction algorithms that are based on the microscopy PSF. The spatial mode sorting-based microscopic imaging method (SPADE) [3] can be treated as a deconvolution problem using higher-order modes as the equivalent PSF. Stimulated-emission depletion microscopy achieves superresolution imaging by introducing illumination with donut-shaped PSFs to selectively deactivate fluorophores [4,5]. Additionally, deep-learning-based methods, such as DeepSTORM [6] and DECODE [7], use deep neural networks (DNNs) to predict emitters in raw images by synthesizing training sets with the same PSFs as those used in actual experiments. In all of these microscopic imaging techniques, prior knowledge of the PSF is crucial, making it of great interest to develop a method for directly estimating the PSF from raw images.

    Currently, some traditional algorithms such as Deconvblind [8] use maximum likelihood estimation to infer the PSF and emitters from raw images [9–18]. However, these algorithms face two challenges. First, they struggle to estimate PSFs with complex shapes. Second, they can lead to trivial solutions where the PSF is a δ function and the image of the emitters is equal to the raw image. To tackle these issues, researchers have turned to using DNNs [19]. However, this requires a library of PSFs and a large number of sharp microscope images to generate the training data set, which limits the application of these algorithms.

    We use self-supervised learning to overcome the above challenges. Here, we treat the PSF as the pattern hidden in the raw images and the emitters as the sparse representation of the raw image. As a result, we propose a physics-informed masked autoencoder (PiMAE, Fig. 1) that estimates the PSF and emitters directly from the raw microscopy images. Using raw data synthesized with various simulated PSFs, we compare the results of PiMAE and Deconvblind [8] for estimating the PSF, as well as PiMAE, the Richardson–Lucy algorithm [20], and DeepSTORM [6] for localizing emitters. Our proposed self-supervised learning approach, PiMAE, outperforms existing algorithms without the need for data annotation or PSF measurement. PiMAE demonstrates a significant performance improvement, as measured by the normalized root mean square error (NRMSE) metric, and is highly resistant to noise. In tests with real-world experiments, PiMAE resolves wide-field microscopy images with standard, out-of-focus, and aberrated PSFs with high quality, and the results achieve a resolution comparable to structured illumination microscopy (SIM) results. We also demonstrate that as few as five raw images suffice for self-supervised training. This approach, PiMAE, shows wide applicability in synthetic data testing and real-world experiments. We expect its usage for the estimation of hidden mechanisms in various physical systems.

    Figure 1. PiMAE overview. PiMAE, a physics-informed masked autoencoder, is proposed to learn the imaging mechanism of an optical microscope.

    2. METHOD

    Self-supervised learning leverages the inherent structure or patterns in data to learn meaningful representations. There are two main categories: contrastive learning [21–24] and pretext task learning [25–29]. Mask image modeling (MIM) [25,30–33] is a pretext task-learning technique that randomly masks portions of an input image. Recently, MIM has been shown to learn transferable, robust, and generalized representations from visual images, improving performance in downstream computer vision tasks [34]. PiMAE is an MIM-based method that reconstructs raw images according to the imaging principle of optical microscopy, which is formulated by the convolution of the PSF and the emitters.
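
    To make the pretext task concrete, the following is a minimal sketch of the random patch masking at the core of MIM. The patch size and masking ratio are illustrative assumptions, not the exact PiMAE configuration.

```python
# Minimal sketch of random patch masking for masked image modeling (MIM).
# Patch size and masking ratio are illustrative, not the PiMAE settings.
import torch

def random_mask_patches(img: torch.Tensor, patch: int = 16, ratio: float = 0.75) -> torch.Tensor:
    """Zero out a random subset of non-overlapping patches of a (B, C, H, W) image."""
    b, c, h, w = img.shape
    gh, gw = h // patch, w // patch                  # patch grid dimensions
    keep = torch.rand(b, gh * gw) >= ratio           # True where a patch survives
    mask = keep.float().view(b, 1, gh, gw)
    mask = mask.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return img * mask                                # masked image fed to the encoder

masked = random_mask_patches(torch.randn(2, 1, 144, 144))  # 144/16 gives a 9x9 patch grid
```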

    A. PiMAE Model

    The PiMAE model (Fig. 1) consists of three key components: (1) a vision transformer-based (ViT) [35] encoder–decoder architecture with a mask layer to prevent trivial solutions while estimating emitters, (2) a convolutional neural network as a prior for PSF estimation [36], and (3) a microscopic imaging simulator that implements the imaging principle formulated by the convolution of the PSF and the emitters. Appendix A provides detailed information on the network architecture and the embedding of physical principles. PiMAE requires only a few raw images for training, which is attributed to the carefully designed loss function. The loss function consists of two parts: one measures the difference between the raw and reconstructed images, including the mean absolute difference and the multiscale structural similarity; the other constrains the PSF, including a total variation loss measuring the PSF continuity and the offset distance of the PSF's center of mass. Appendix B contains the expressions for the loss functions.

    B. Training

    The ViT-based encoder in PiMAE is pretrained on the COCO data set [37] to improve performance. The pretraining is based on self-supervised learning with a masked autoencoder that does not include the physical simulator module (see Appendix C for details). After pretraining, PiMAE loads the trained encoder parameters and undergoes self-supervised training on raw microscopic images. The input image size is 144×144 pixels, and we use the RAdam optimizer [38] for training with a learning rate of 10^−4 and a batch size of 18. The training runs for 5×10^4 steps.
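
    A minimal sketch of this training configuration (RAdam, learning rate 10^−4, batch size 18) follows. The tiny stand-in model and the dummy batch are placeholders only; the actual implementation is provided in Code File 1 [46].

```python
# Sketch of the training loop settings stated above. TinyModel is a stand-in
# for the full PiMAE architecture; torch.optim.RAdam requires a recent PyTorch.
import torch

class TinyModel(torch.nn.Module):      # placeholder, NOT the real PiMAE network
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Conv2d(1, 1, 3, padding=1)
    def forward(self, x):
        return self.net(x)

model = TinyModel()
optimizer = torch.optim.RAdam(model.parameters(), lr=1e-4)  # lr and optimizer from the text

raw = torch.rand(18, 1, 144, 144)      # one batch of raw images (batch size 18)
for step in range(3):                  # the paper trains for 5e4 steps
    recon = model(raw)
    loss = (recon - raw).abs().mean()  # placeholder for the full PiMAE loss (Appendix B)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```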

    Within PiMAE, the convolutional neural network shown in Fig. 1 is randomly initialized, takes a fixed random vector as input, and outputs the predicted PSF. Relevant details can be found in Appendix A. As PiMAE undergoes self-supervised training, the predicted PSF of the convolutional neural network (CNN) becomes more accurate and closer to the true PSF, as shown in Fig. 2. The experimental setup is shown in Fig. 3.

    Figure 2. PSF learning. The results demonstrate that PiMAE can successfully learn the PSF from raw images through the training process. (a) The figure displays the PSF of SPADE, including LG mode LG22 and HG mode HG22. The scale bar is 0.5 μm. (b) Out-of-focus (800 and 1400 nm) images under a wide-field microscope imaging setup, along with the in-focus (0 nm) image. The scale bar is 0.5 μm.

    Figure 3. Evaluation in synthetic data sets. (a) Results of estimated PSF and emitters from out-of-focus synthetic data. The scale bar is 0.5 μm. (b) NRMSE of the results of estimated PSF from out-of-focus synthetic data; (c) NRMSE of the results of estimated emitters from out-of-focus synthetic data. (d) Results of estimated PSF and emitters from synthetic data with HG mode and LG mode (HG/LG) as PSF. The scale bar is 0.5 μm. (e) NRMSE of the results of estimated PSF from HG/LG synthetic data; (f) NRMSE of the results of estimated emitters from HG/LG synthetic data. The noise scale in the above evaluations is noise_std/raw_mean = 0.5.

    C. Synthetic Data Design and Evaluation

    To evaluate PiMAE's performance, synthetic data sets were designed considering the following factors: (1) PiMAE's requirement for sparse emitter data; (2) the need for emitter data without discrete points, to pose more challenging PSF estimation tasks; (3) evaluation on the standard Gaussian PSF and other, more challenging PSFs; (4) evaluation at various noise levels; and (5) evaluation at various emitter sparsity levels. Therefore, the Sketches data set [39] was chosen as the emitters, as described in Appendix D.1.A, and various commonly used PSFs were designed as described in Appendix D.2. Noise robustness is evaluated by adding noise to the raw images at different levels. Moreover, images with sparse lines of varying densities were generated as emitters to assess the impact of sparsity on PiMAE, as described in Appendix D.1.B.

    For each scenario, we sample 1000 images as the training set and 100 images as the test set. For PSF estimation, we use Deconvblind [8] as a benchmark. For emitter localization, we use the Richardson–Lucy algorithm [20] and DeepSTORM [6] as reference methods. The results are measured by NRMSE [see Appendix F for the definition and Appendix J for multiscale structural similarity (MS-SSIM) results]. Note that for the Richardson–Lucy and DeepSTORM tests, the PSF is assumed known a priori, while for PiMAE, the PSF is treated as unknown.

    D. Real-World Experiments

    We evaluate PiMAE’s performance in handling both standard and nonstandard PSF microscopy images in real-world experiments. Since the true emitter positions cannot be obtained, we use the BioSR [40] data set to evaluate PiMAE’s handling of standard PSF microscopy images and compare it with SIM. Then, we use our custom-made wide-field microscope to produce out-of-focus and distorted PSF microscopy images to analyze PiMAE’s performance in handling nonstandard PSF microscopy images.

    In the experiment of wide-field microscopic imaging of nitrogen vacancy (NV) color centers, a 532 nm laser (Coherent Vendi 10 single longitudinal mode laser) passes through a customized precision electronic timing shutter, which controls the duration of the laser beams flexibly. The laser is then expanded and sent to a polarization mode controller that consists of a polarizing film (LPVISE100-A) and a half-wave plate (Thorlabs WPH10ME-532). The expanded laser is focused on the focal plane behind the objective lens (Olympus, UPLFLN100XO2PH) by a fused quartz lens with a focal length of 150 mm. The fluorescence signals are collected by a scientific complementary metal oxide semiconductor (sCMOS) camera (Hamamatsu, Orca Flash 4.0 v.3). We use a manual zoom lens (Nikon AF 70-300 mm, f/4-5.6G, focal length between 70 and 300 mm, and the field of view of 6.3) as a tube lens to continuously change the magnification of the microscopic system.

    3. RESULT

    A. PiMAE Achieves High Accuracy on Synthetic Data Sets

    Being out of focus is one of the most common factors degrading the quality of microscope imaging. PiMAE is capable of addressing this issue, and we demonstrate this by simulating a range of wide-field microscopy PSFs with out-of-focus distances that vary from 0 to 1400 nm. We also add Gaussian noise at a scale of noise_std/raw_mean = 0.5 to the raw images, where noise_std is the standard deviation of the Gaussian noise [41] and raw_mean is the mean value of the raw image. First, we evaluate the performance of the estimated PSFs. Figure 3(a) shows the actual PSFs and those estimated by Deconvblind and PiMAE. The PiMAE-estimated PSF is similar to the actual PSF for all out-of-focus distances, while most of Deconvblind's estimated PSFs are far from the truth, indicating that Deconvblind cannot resolve raw images with complex PSFs. Furthermore, the PSF estimated by Deconvblind converges to the δ function after several iterations (see Appendix G). The NRMSE of the estimated PSFs at different out-of-focus distances is quantified in Fig. 3(b), with PiMAE achieving much better results than Deconvblind. Second, we evaluate the performance of the estimated emitters. Figure 3(a) also shows the actual emitters and those estimated by the Richardson–Lucy algorithm, DeepSTORM (see Appendix H for implementation details), and PiMAE. When the out-of-focus distance is large, PiMAE and DeepSTORM significantly outperform the Richardson–Lucy algorithm. The NRMSE at different blur distances is shown in Fig. 3(c), where PiMAE achieves the best performance despite not knowing the actual PSF.
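
    For reference, a short sketch of the noise model used throughout the synthetic evaluations, where the Gaussian standard deviation is set to a chosen fraction of the raw-image mean:

```python
# Sketch of the synthetic noise model: zero-mean Gaussian noise with
# noise_std = scale * raw_mean, as used in the evaluations above.
import numpy as np

def add_gaussian_noise(raw: np.ndarray, scale: float = 0.5, rng=None) -> np.ndarray:
    rng = rng or np.random.default_rng(0)
    noise_std = scale * raw.mean()
    return raw + rng.normal(0.0, noise_std, size=raw.shape)

noisy = add_gaussian_noise(np.random.rand(512, 512), scale=0.5)
```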

    Recently, researchers have found that imaging resolution can be improved using a spatial mode sorter [3,19,42], a method called SPADE. Using SPADE for confocal microscopy is equivalent to using PSFs corresponding to spatial modes [3], such as Zernike modes, Hermite–Gaussian (HG) modes, and Laguerre–Gaussian (LG) modes. However, SPADE faces several challenges, including the need for an accurate determination of the spatial mode (i.e., the equivalent PSF), high sensitivity to noise, and a lack of reconstruction algorithms for complex spatial modes. PiMAE can solve these problems. Figures 3(d)–3(f) show the SPADE imaging results with a noise scale of noise_std/raw_mean = 0.5. PiMAE can accurately estimate the equivalent PSF and emitters, and its performance is much better than that of Deconvblind, the Richardson–Lucy algorithm, and DeepSTORM. Therefore, PiMAE can significantly improve the performance of SPADE. These experiments demonstrate that PiMAE is effective for scenarios with unknown and complex imaging PSFs.

    B. Noise Robustness

    Noise robustness is a crucial metric for evaluating reconstruction algorithms. We evaluate noise robustness in three scenarios: (1) in-focus wide-field microscopy; (2) wide-field microscopy at a 600 nm out-of-focus distance; and (3) Laguerre–Gaussian mode LG22 SPADE imaging. The raw image of each scenario contains Gaussian noise (the speckle noise results are shown in Appendix I) at scales (noise_std/raw_mean) of 0.01, 0.1, 0.5, 1, and 2, as shown in Fig. 4 (see Appendix J for MS-SSIM results). We first compare the results of Deconvblind and PiMAE for estimating the PSF. We find that PiMAE shows excellent noise immunity, substantially outperforming Deconvblind in all tests. We then compare the results of the Richardson–Lucy algorithm, DeepSTORM, and PiMAE for estimating the emitters. Overall, PiMAE performs the best, falling only slightly behind DeepSTORM in the standard PSF scenario at low noise. The Richardson–Lucy algorithm performs similarly to DeepSTORM and PiMAE when the noise scale is very small; however, when the noise scale increases even slightly, its performance drops significantly. This shows the advantage of deep-learning-based methods over traditional algorithms in terms of noise robustness. Moreover, the advantage of PiMAE over the other two algorithms grows as the noise scale becomes larger and the PSF shape becomes more complex.

    Figure 4. Evaluation of noise robustness. (a) NRMSE of the results of estimated PSF from in-focus synthetic data; (b) NRMSE of the results of estimated emitters from in-focus synthetic data; (c) NRMSE of the results of estimated PSF from 600 nm out-of-focus synthetic data. (d) NRMSE of the results of estimated emitters from 600 nm out-of-focus synthetic data; (e) NRMSE of the results of estimated PSF from LG22 synthetic data; (f) NRMSE of the results of estimated emitters from LG22 synthetic data; the noise scale is noise_std/raw_mean.

    C. PiMAE Enables Superresolution Imaging for Wide-Field Microscopy Comparable to SIM

    The endoplasmic reticulum (ER) is a system of tunnels surrounded by membranes in eukaryotic cells. In the BioSR data set [40], the researchers imaged the ER in the same field of view using wide-field microscopy and SIM, respectively. Figure 5(a) shows the results of PiMAE-resolved wide-field microscopy raw images (more result images are in Appendix K). We find that the resolution of the PiMAE-estimated emitters is comparable to that of SIM, which has a resolution twice that of the diffraction limit. Figure 5(b) shows the cross-sectional results, where the peak positions of the PiMAE-estimated emitters match the peak positions of the SIM results, corresponding to indistinguishable wide-field imaging results. This indicates that the resolvability of wide-field microscopy with PiMAE-estimated emitters is improved to a level similar to that of SIM. Figure 5(c) shows the PiMAE-estimated PSF with an FWHM of 230 nm. The fluorescence wavelength of the raw image is 488 nm, the numerical aperture (NA) is 1.3, and the diffraction limit is 0.61×λ/NA = 0.61×488 nm/1.3 ≈ 229 nm, which is very close to the FWHM of the PiMAE-estimated PSF. This experiment shows that PiMAE can be applied in real-world experiments to estimate the PSF from raw microscopy data and further improve resolution.

    Figure 5. Superresolution imaging of ER. (a) The figures are the raw image of wide-field microscopic imaging of ER, the result of estimating the emitter from wide-field microscopic imaging using PiMAE, the result of SIM of the same field of view, and the result of wide-field microscopic imaging reconstructed by PiMAE. Data from BioSR data set [40]. The scale bar is 2.50 μm. (b) Comparison of the cross section of the PiMAE estimated emitters and SIM results; it shows that the resolution of the results obtained by PiMAE is comparable to that of SIM. (c) PiMAE estimated wide-field microscope PSF with an FWHM of 230 nm, where the diffraction limit is 229 nm.

    D. PiMAE Enables Imaging for Nonstandard Wide-Field Microscopy

    The NV color center is a point defect in diamond that is widely used in superresolution microscopy [5,43] and quantum sensing [44,45]. We built a home-made wide-field microscope to image NV centers in fluorescent nanodiamonds (FNDs) at out-of-focus distances of 0, 400, and 800 nm. We take 10 raw images of 2048×2048 pixels with a field-of-view size of 81.92 μm at each out-of-focus distance. Figure 6(a) shows NV color centers imaged in the same field of view at different out-of-focus distances, and Fig. 6(b) shows the corresponding PiMAE-estimated emitters. This provides a side-by-side demonstration of the accuracy of the PiMAE-estimated emitters: the out-of-focus distance changes during the experiment, but the field of view is invariant, so the PiMAE-estimated emitter positions should be constant across out-of-focus distances, as we observe in Figs. 6(b) and 6(c). Figure 6(d) shows the variation of the PSF. The asymmetry of the PSF comes from a slight tilt of the carrier stage. We also show the PSF cross section for each scene. The FWHM of the estimated in-focus PSF is 382 nm, where the diffraction limit is 384 nm. This suggests that PiMAE can be applied in real-world experiments to improve the imaging capabilities of microscopes suffering from defocus.

    Figure 6. Wide-field microscopy imaging of NV color centers. (a)–(d) Results of wide-field microscopy imaging of NV color centers at different out-of-focus distances; (a) raw images; the scale bar is 1.25 μm. (b) PiMAE-estimated emitters; (c) comparison of the cross sections of the raw images and the PiMAE-estimated emitters, where the black dashed line represents the raw images and the yellow solid line represents the PiMAE-estimated emitters; the peak positions of the PiMAE-estimated emitter results are constant for different out-of-focus distances, as seen from the blue dashed line. (d) PiMAE-estimated PSF; the FWHM of the in-focus PSF is 382 nm, where the diffraction limit is 384 nm; the larger the out-of-focus distance, the larger the side lobes of the PSF, despite the decrease of the FWHM in the central region. (e) Comparison of nonstandard microscopic imaging and PiMAE-estimated emitters. The scale bar is 3.2 μm. (f) Cross section of the nonstandard microscopic imaging and PiMAE-estimated emitters; (g) PiMAE-estimated nonstandard microscopy PSF.

    Moreover, we construct a nonstandard PSF for wide-field microscopic imaging of NV color centers by introducing a mismatch between the objective and the coverslip (see Appendix K.2); the results are shown in Figs. 6(e)–6(g). Figure 6(e) shows the imaging results and PiMAE-estimated emitters. Figure 6(f) shows the cross-sectional comparison. Figure 6(g) shows the PiMAE-estimated PSF. This experiment demonstrates that PiMAE enables researchers to use microscopes with nonstandard PSFs for imaging. PiMAE's ability to resolve nonstandard PSFs expands the application scenarios of NV color centers in fields such as quantum sensing and bioimaging.

    E. PiMAE Enables Microscopy Imaging with Widely Spread Out PSFs

    To further test the capabilities of PiMAE, we evaluate its performance on a complex, widely spread PSF shaped like the characters "USTC." We use 1000 images as the training set and 100 images as the test set. The noise level is set at noise_std/raw_mean = 0.01. The raw images, the PiMAE-processed images, and the NRMSE evaluation are depicted in Fig. 7. PiMAE performs exceptionally well, demonstrating its effectiveness in difficult scenarios.

    Figure 7. Evaluation using synthetic data based on PSF of the shape "USTC." (a) Comparison of the raw image, the PiMAE estimated emitters, and the actual emitters; the scale bar is 0.5 μm. (b) Comparison of the actual PSF and the PiMAE-estimated PSF; the scale bar is 0.5 μm. (c) NRMSE of the estimated PSF; (d) NRMSE of the estimated emitters.

    F. Evaluation of the Influence of Emitter Sparsity

    Dense samples can pose challenges for estimating both the PSF and the emitters. We designed emitters with varying densities, as outlined in Appendix D.1.B, and employed LG22 as the PSF. As shown in Fig. 8, we observe that as the number of lines in each 512×512 image increases, PiMAE's performance in estimating both the PSF and the emitters deteriorates: when the number of lines per image is at most 50, PiMAE performs well, whereas performance is poor when the number of lines exceeds 50. This experiment quantifies the influence of emitter sparsity on PiMAE.

    Figure 8. Influence of emitter sparsity. (a) Comparison of the raw image, the PiMAE estimated emitters, and the actual emitters, and comparison of the actual PSF and the PiMAE-estimated PSF; N refers to the number of sparse lines. The scale bar is 1.0 μm. (b) NRMSE of the estimated emitters; (c) NRMSE of the estimated PSF.

    G. Computational Resource and Speed

    In this work, the code is based on the Python library PyTorch, as we show in Code File 1 [46]. PyTorch is a prominent open-source deep-learning framework that offers an efficient and user-friendly platform for building and deploying deep-learning models. In terms of model training, we utilize three Nvidia Tesla A100 40 GB graphics cards in parallel, which is necessary due to ViT’s substantial computational and memory requirements. The training time for each task is 11 h, and the inference time for a single 512×512 image is approximately 4 s with the trained model. Compared to supervised models such as DeepSTORM, which takes about 1 h for training and 0.1 s for inference, PiMAE is slower but more powerful. As for the data set size requirement, we show in Appendix E that PiMAE achieves good training results, even with a minimum of five images in the training set.

    4. DISCUSSION

    In this study, we introduce PiMAE, a novel approach for estimating PSF and emitters directly from raw microscopy images. PiMAE addresses several challenges: it allows for direct identification of the PSF from raw data, enabling deep-learning model training without the need for real-world or synthetic annotation; it has excellent noise resistance; and it is convenient and widely applicable, requiring only about five raw images to resolve the PSF and emitters.

    Our method, PiMAE, extracts hidden variables from raw data using physical knowledge. By recognizing the PSF as a hidden variable in a linear optical system, the underlying physical principle involves the decomposition of raw data through the convolution of the emitters with the PSF. Hidden variables are ubiquitous in real-world experiments; by integrating the masked autoencoder with physical knowledge, PiMAE provides a framework for solving for hidden variables in physical systems through self-supervised learning.

    However, it should be noted that PiMAE is an emitter localization algorithm, which means that it requires a sufficient degree of sample sparsity to perform effectively. We conducted an evaluation using synthetic data experiments, and while PiMAE performed reasonably well, there is still room for improvement. There is ambiguity in extracting the PSF and emitters directly from raw images, so PiMAE opts for a simpler emitter distribution when learning the real PSF, which might result in artifacts. As PiMAE supplies the PSF needed by Richardson–Lucy deconvolution and DeepSTORM, potential solutions may be to integrate PiMAE with these methods or to perform unmasked self-supervised training after the masked self-supervised training within PiMAE. Future work could therefore focus on further enhancing the robustness of PiMAE in dense scenarios.

    5. CONCLUSION

    In conclusion, we have presented PiMAE, a novel solution for directly extracting the PSF and emitters from raw optical microscopy images. By combining the principles of optical microscopy with self-supervised learning, PiMAE demonstrates impressive accuracy and noise robustness in synthetic data experiments, outperforming existing methods such as DeepSTORM and the Richardson–Lucy algorithm. Appendix L shows the full range of synthetic data evaluation metrics. Moreover, our method has been successfully applied to real-world microscopy experiments, resolving wide-field microscopy images with various PSFs. With its ability to learn the hidden mechanisms from raw data, PiMAE has a wide range of potential applications in optical microscopy and scientific studies.

    Acknowledgment

    The authors would like to thank Drs. Yu Zheng, Yang Dong, Ce Feng, and Shao-Chun Zhang for fruitful discussions.

    APPENDIX A: NETWORK ARCHITECTURE

    The principle of microscopic imaging is
    $$\text{raw image} = \text{Noise}(\text{emitters} \ast \text{PSF}) + \text{background},$$
    where the raw image is the result of convolving the emitters with the PSF in the presence of noise and background. To put this principle into practice, we have developed the PiMAE method, which consists of three modules: emitter inference from raw images, PSF generation, and background separation.
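
    A minimal sketch of this forward model as a differentiable simulator follows, assuming (B, 1, H, W) emitter tensors, an odd-sized PSF kernel, and the Gaussian noise model used in the synthetic tests; the background level is an illustrative constant.

```python
# Sketch of raw = Noise(emitters * PSF) + background with a 2D convolution.
# Shapes, the constant background, and the noise model are illustrative.
import torch
import torch.nn.functional as F

def simulate_raw(emitters: torch.Tensor, psf: torch.Tensor,
                 background: float = 0.1, noise_scale: float = 0.5) -> torch.Tensor:
    """emitters: (B, 1, H, W); psf: (k, k) with odd k. Returns a noisy raw image."""
    kernel = psf.view(1, 1, *psf.shape)
    clean = F.conv2d(emitters, kernel, padding=psf.shape[-1] // 2)  # emitters * PSF
    noisy = clean + noise_scale * clean.mean() * torch.randn_like(clean)
    return noisy + background

raw = simulate_raw(torch.rand(1, 1, 144, 144), torch.ones(9, 9) / 81)
```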

    Emitter Inference

    We have improved the original masked autoencoder for use in microscopic imaging by integrating a voting head into its transformer-based decoder. The head predicts the positions and intensities of emitters. Specifically, the decoder produces 9×9 feature patches, which serve as the input for the voting head. For the emitter positions, the voting head employs a two-step process: (1) a multilayer perceptron (MLP) predicts 64 density maps from each feature patch, and (2) the emitter positions are obtained by computing the center of mass of each density map. For the emitter intensities, an MLP predicts 64 intensities. The predicted emitter image is generated by placing a Gaussian-type point tensor with σ=1, scaled by its corresponding intensity, at each predicted position, similar to the design in crowd-counting methods [47]. The mask layer is an essential element in the design of a masked autoencoder. Its main function is to prevent the model from learning trivial solutions and instead encourage it to focus on the relevant features of the input data. This is achieved by randomly blocking out specific parts of the input tensor. To improve training efficiency, we introduced a CNN stem consisting of four convolutional layers placed before the mask layer [48]. The input image size of 144×144 is reduced to 9×9 after the CNN stem, with each pixel encoding a 384-dimensional vector. We refer to this model as the point predictor, as shown in Fig. 9.
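
    As an illustration of the voting head's second step, the following sketch reduces each predicted density map to a single (x, y) position by computing its center of mass; the map size in the usage line is an arbitrary choice for the example.

```python
# Sketch of the center-of-mass reduction: each non-negative density map
# is collapsed to one (x, y) emitter position.
import torch

def density_to_position(density: torch.Tensor) -> torch.Tensor:
    """density: (N, H, W) non-negative maps -> (N, 2) centers of mass in pixels."""
    n, h, w = density.shape
    ys = torch.arange(h, dtype=density.dtype).view(1, h, 1)
    xs = torch.arange(w, dtype=density.dtype).view(1, 1, w)
    mass = density.sum(dim=(1, 2)).clamp_min(1e-8)       # total mass per map
    cy = (density * ys).sum(dim=(1, 2)) / mass           # intensity-weighted row mean
    cx = (density * xs).sum(dim=(1, 2)) / mass           # intensity-weighted column mean
    return torch.stack([cx, cy], dim=1)

positions = density_to_position(torch.rand(64, 16, 16))  # 64 maps per feature patch
```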

    Figure 9. Network architecture. PiMAE consists of two predictors, namely, a PSF predictor and a point predictor. The point predictor outputs the location and intensity of the points.

    PSF Generation

    Motivated by the observation that a CNN can function as a well-designed prior and deliver outstanding results in typical inverse problems, as evidenced by Deep Image Prior [36], we constructed the PSF generator, as illustrated in Fig. 9. The neural network’s parameters are adjusted through self-supervised learning to produce the PSF, with a random matrix as the input, which remains constant throughout the learning process.
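
    A minimal sketch of such a Deep-Image-Prior-style generator follows, with illustrative layer sizes; only the network weights are optimized, while the random input stays fixed throughout training.

```python
# Sketch of a Deep-Image-Prior-style PSF generator: a small CNN maps a fixed
# random input to a normalized PSF. Layer sizes are illustrative assumptions.
import torch

class PSFGenerator(torch.nn.Module):
    def __init__(self, size: int = 25):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(8, 16, 3, padding=1), torch.nn.ReLU(),
            torch.nn.Conv2d(16, 1, 3, padding=1), torch.nn.Softplus(),
        )
        # fixed random input, held constant for the whole of training
        self.register_buffer("z", torch.randn(1, 8, size, size))

    def forward(self) -> torch.Tensor:
        psf = self.net(self.z)
        return psf / psf.sum()      # normalize to unit total intensity

psf = PSFGenerator()()              # (1, 1, 25, 25) predicted PSF
```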

    Background Separation

    To isolate the background component from the raw image, we employ a second point predictor (Fig. 9). We assume that the background has low spatial variability and approximate it by rendering the points output by this predictor as broad Gaussians with σ=16.

    APPENDIX B: DESIGN OF LOSS FUNCTION

    The loss function in our approach is composed of four components, divided into two categories.

    The first category measures the similarity between the reconstructed image and the raw image. It consists of the mean absolute difference (L1) and the MS-SSIM, as expressed in Eq. (F2). The combination of these two functions has been demonstrated to perform better than individual functions such as L1 and mean squared error (MSE) in image restoration tasks [49].

    The second category concerns the constraint on the generated PSF. To ensure that the center of mass of the generated PSF is at the center of the PSF tensor, we calculate the center distance loss as
    $$\text{Center distance loss} = \left| \frac{\sum_{i,j} \text{Intensity}_{ij} \cdot \text{Coordinate}_{ij}}{\sum_{i,j} \text{Intensity}_{ij}} - \text{Center position} \right|.$$

    Additionally, to ensure that the generated PSF is spatially continuous, we use the total variation (TV) loss to quantify the smoothness of the image,
    $$\text{TV loss} = \sum_{i,j} \left[ \left(\text{Intensity}_{i,j+1} - \text{Intensity}_{i,j}\right)^2 + \left(\text{Intensity}_{i+1,j} - \text{Intensity}_{i,j}\right)^2 \right].$$

    Finally, the loss function is defined as
    $$\text{Loss} = \alpha_1 \cdot \text{L1} + \alpha_2 \cdot \text{MS-SSIM} + \alpha_3 \cdot \text{Center distance} + \alpha_4 \cdot \text{TV},$$
    where $\alpha_1 = 0.95$, $\alpha_2 = 0.05$, $\alpha_3 = 0.001$, and $\alpha_4 = 0.001$.
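
    A sketch of this composite loss with the stated weights is given below. The MS-SSIM term is taken from the third-party pytorch_msssim package as a stand-in and is converted to a dissimilarity (1 − MS-SSIM) for minimization, which is a standard choice rather than a detail confirmed by the text.

```python
# Sketch of the composite PiMAE loss. Images are (B, 1, H, W); the PSF is (k, k).
# pytorch_msssim is an assumed third-party dependency (pip install pytorch-msssim);
# its default 5-scale pyramid needs H, W >= 161 (use win_size=7 for smaller images).
import torch
from pytorch_msssim import ms_ssim

def tv_loss(psf: torch.Tensor) -> torch.Tensor:
    dy = (psf[1:, :] - psf[:-1, :]) ** 2      # vertical intensity differences
    dx = (psf[:, 1:] - psf[:, :-1]) ** 2      # horizontal intensity differences
    return dx.sum() + dy.sum()

def center_distance_loss(psf: torch.Tensor) -> torch.Tensor:
    k = psf.shape[-1]
    coords = torch.arange(k, dtype=psf.dtype)
    mass = psf.sum().clamp_min(1e-8)
    cy = (psf.sum(dim=1) * coords).sum() / mass   # center of mass, row coordinate
    cx = (psf.sum(dim=0) * coords).sum() / mass   # center of mass, column coordinate
    center = (k - 1) / 2
    return torch.hypot(cx - center, cy - center)

def pimae_loss(recon, raw, psf):
    l1 = (recon - raw).abs().mean()
    msssim = 1.0 - ms_ssim(recon, raw, data_range=1.0)  # dissimilarity form
    return 0.95 * l1 + 0.05 * msssim \
        + 0.001 * center_distance_loss(psf) + 0.001 * tv_loss(psf)
```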

    APPENDIX C: PRETRAINING WITH COCO DATA SET

    Recent research has shown that self-supervised pretraining is effective in improving accuracy and robustness in computer vision tasks. In this study, we employed a masked autoencoder [shown in Fig. 10(b)] to pretrain the encoder of PiMAE on the COCO data set [37] (unlabeled), a large-scale data set containing 330,000 RGB images of varying sizes for object detection, segmentation, and captioning tasks.

    Figure 10. Pretraining with COCO data set.

    Figure 11. Example results on COCO. We show the masked image, MAE reconstruction, and the ground truth. The masking ratio here is 0.75.

    Figure 12. Pretraining enhancements. Comparison of NRMSE metrics for emitter localization of pretrained and non-pretrained models. Using 600 nm out-of-focus data as an example, after 500 rounds of training, the learning rate is 3×10^−4.

    APPENDIX D: SYNTHETIC DATA GENERATION

    In this section, we present the construction method of the synthetic data used to evaluate PiMAE, including emitters and PSFs.

    Emitters: Sketches

    The Sketches data set [39] is a large-scale collection of human sketches containing a wide variety of morphologies. To evaluate the performance of the method, the emitters of the synthetic data are sampled from the Sketches data set. Figure 13 shows examples from the Sketches data set.

    Figure 13. Sketches data set examples.

    Figure 14. Randomly generated lines.

    PSFs: Out-of-Focus

    We simulate the imaging results of a wide-field microscope when the sample is out of focus. The near-focus amplitude can be described using the scalar Debye integral [50],
    $$h(x,y,z;\lambda) = C_0 \int_0^{\alpha} \sqrt{\cos\theta}\, J_0(k\rho\sin\theta)\, e^{-ikz\cos\theta} \sin\theta \, d\theta,$$
    where $C_0$ is a complex constant, $J_0$ is the zeroth-order Bessel function of the first kind, $\rho = \sqrt{x^2 + y^2}$, the refractive index is $n$, the numerical aperture is $\mathrm{NA} = n\sin\alpha$, and the wavenumber is $k = n(2\pi/\lambda)$. The PSF of the wide-field microscope is
    $$\mathrm{PSF}(x,y,z) = |h(x,y,z;\lambda_{\mathrm{em}})|^2.$$

    The values of the parameters in this experiment are $C_0 = 1$, $n = 1$, $\lambda_{\mathrm{em}} = 400$ nm, and NA = 0.7, and each pixel has a size of 39 nm. $\lambda_{\mathrm{em}}$ represents the fluorescence emission wavelength.
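
    A numerical sketch of this PSF model with the stated parameters follows; the quadrature scheme and sample counts are implementation assumptions, not the authors' code.

```python
# Sketch of the scalar Debye integral evaluated on a pixel grid with the stated
# parameters (n = 1, NA = 0.7, lambda_em = 400 nm, 39 nm pixels).
import numpy as np
from scipy.special import j0
from scipy.integrate import trapezoid

def debye_psf(size=51, pixel=39e-9, z=0.0, wavelength=400e-9, na=0.7, n=1.0):
    alpha = np.arcsin(na / n)
    k = n * 2 * np.pi / wavelength
    theta = np.linspace(0.0, alpha, 400)                 # integration variable
    ax = (np.arange(size) - size // 2) * pixel
    rho = np.hypot(*np.meshgrid(ax, ax))                 # radial distance per pixel
    integrand = (np.sqrt(np.cos(theta)) * np.sin(theta)
                 * np.exp(-1j * k * z * np.cos(theta))
                 * j0(k * rho[..., None] * np.sin(theta)))
    h = trapezoid(integrand, theta, axis=-1)             # amplitude h(x, y, z)
    psf = np.abs(h) ** 2
    return psf / psf.sum()

psf_in_focus = debye_psf(z=0.0)
psf_defocus = debye_psf(z=800e-9)                        # 800 nm out of focus
```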

    SPADE

    We simulated four scenarios in SPADE, corresponding to PSFs given by the Hermite–Gaussian modes HG22 and HG31 and the Laguerre–Gaussian modes LG11 and LG22. Here we set the wavelength to 500 nm and the PSF size to 51×51 pixels over a 15 mm × 15 mm range, rescaled to a 39 nm pixel size. The amplitudes of the Hermite–Gaussian and Laguerre–Gaussian modes are defined as [51]
    $$u_{nm}^{\mathrm{HG}}(x,y,z) = C_{nm}^{\mathrm{HG}} \frac{1}{w} \exp\!\left[-\frac{ik(x^2+y^2)}{2R}\right] \exp\!\left[-\frac{x^2+y^2}{w^2}\right] \exp\!\left[-i(n+m+1)\psi\right] H_n\!\left(\frac{x\sqrt{2}}{w}\right) H_m\!\left(\frac{y\sqrt{2}}{w}\right),$$
    $$u_{nm}^{\mathrm{LG}}(r,\phi,z) = C_{nm}^{\mathrm{LG}} \frac{1}{w} \exp\!\left(-\frac{ikr^2}{2R}\right) \exp\!\left(-\frac{r^2}{w^2}\right) \exp\!\left[-i(n+m+1)\psi\right] \exp\!\left[-i(n-m)\phi\right] (-1)^{\min(n,m)} \left(\frac{r\sqrt{2}}{w}\right)^{|n-m|} L_{\min(n,m)}^{|n-m|}\!\left(\frac{2r^2}{w^2}\right),$$
    with $R(z) = (z_R^2 + z^2)/z$, $\frac{1}{2}kw^2(z) = (z_R^2 + z^2)/z_R$, and $\psi(z) = \arctan(z/z_R)$. $H_n(x)$ is the Hermite polynomial of order $n$, $L_p^l(x)$ is the generalized Laguerre polynomial, $k = 2\pi/\lambda$ is the wavenumber, and $z_R$ is the Rayleigh range of the mode. Here we set $w_0 = 2$ mm, wavelength $\lambda = 500$ nm, and $z = 0$.
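
    As an illustration, the following sketch evaluates an LG-mode intensity profile at z = 0 (so 1/R → 0 and the Gouy phase vanishes) for use as an equivalent SPADE PSF; normalization constants are dropped because the PSF is renormalized anyway.

```python
# Sketch of a Laguerre-Gaussian intensity profile at z = 0, following the
# mode definition above; constants are omitted since the PSF is renormalized.
import numpy as np
from scipy.special import genlaguerre

def lg_psf(n=2, m=2, size=51, extent=15e-3, w0=2e-3):
    ax = np.linspace(-extent / 2, extent / 2, size)
    x, y = np.meshgrid(ax, ax)
    r, phi = np.hypot(x, y), np.arctan2(y, x)
    p, l = min(n, m), abs(n - m)                       # radial and azimuthal indices
    u = ((r * np.sqrt(2) / w0) ** l
         * genlaguerre(p, l)(2 * r**2 / w0**2)         # generalized Laguerre polynomial
         * np.exp(-(r**2) / w0**2)
         * np.exp(-1j * (n - m) * phi))
    psf = np.abs(u) ** 2
    return psf / psf.sum()

psf_lg22 = lg_psf(2, 2)   # LG22 equivalent PSF on a 51 x 51 grid
```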

    APPENDIX E: TRAINING SET SIZE

    We use LG22 as the PSF and a fixed test set of 100 images with a shape of 512×512. The training set sizes for both PiMAE and DeepSTORM are 1, 5, 10, and 1000 images. As shown in Fig. 15, PiMAE performs well even with a training set as small as five images, whereas the performance of DeepSTORM decreases significantly.

    Figure 15. Evaluating the effect of training set size. (a) Results of estimated PSF and emitters when the size N of the training set is 1, 5, 10, and 1000, and the size of the test set is 100; the scale bar is 1.0 μm. (b) NRMSE of the estimated emitters from synthetic data with different data set sizes; (c) NRMSE of PiMAE-estimated PSF from synthetic data with different data set sizes.

    APPENDIX F: ASSESSMENT METRICS

    When evaluating the performance of emitter estimation, we use two metrics: the NRMSE and the MS-SSIM. NRMSE provides a quantitative measure of the difference between two images, while MS-SSIM is designed to assess the perceived similarity of images, taking into consideration the recognition of emitters by the human eye [52].

    NRMSE is defined as
    $$\mathrm{NRMSE} = \frac{\sqrt{\frac{1}{N}\sum_{i,j}\left(\mathrm{Image}_{\mathrm{true}} - \mathrm{Image}_{\mathrm{test}}\right)^2}}{\mathrm{Max}(\mathrm{Image}_{\mathrm{true}}) - \mathrm{Min}(\mathrm{Image}_{\mathrm{true}})},$$
    where $N$ is the number of pixels.

    MS-SSIM is defined as
    $$\text{MS-SSIM}(x,y) = \left[l_M(x,y)\right]^{\alpha_M} \cdot \prod_{j=1}^{M} \left[c_j(x,y)\right]^{\beta_j} \left[s_j(x,y)\right]^{\gamma_j},$$
    where the exponents $\alpha_M$, $\beta_j$, and $\gamma_j$ adjust the relative importance of the different components. Here $\alpha_M = \beta_j = \gamma_j$, with values 0.0448, 0.2856, 0.3001, 0.2363, and 0.1333 for $j = 1, 2, 3, 4, 5$. The expressions for $l_M$, $c_j$, and $s_j$ are the same as those of the single-scale structural similarity at each scale $j$,
    $$l(x,y) = \frac{2\mu_x\mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}, \quad c(x,y) = \frac{2\sigma_x\sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}, \quad s(x,y) = \frac{\sigma_{xy} + C_3}{\sigma_x\sigma_y + C_3},$$
    where $C_1 = (K_1 L)^2$, $C_2 = (K_2 L)^2$, and $C_3 = C_2/2$; here $L = 255$, $K_1 = 0.01$, and $K_2 = 0.03$. The sliding window size is 11.

    In the assessment, we use the max-min normalization method to process each image as
    $$x_{\mathrm{norm}} = \frac{x - x_{\min}}{x_{\max} - x_{\min}},$$
    where $x_{\mathrm{norm}}$ is the normalized image, $x$ is the raw image, and $x_{\min}$ and $x_{\max}$ are the minimum and maximum values in the image.
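
    A direct sketch of these two assessment steps, max-min normalization followed by NRMSE, is given below.

```python
# Sketch combining max-min normalization and the NRMSE definition above.
import numpy as np

def normalize(x: np.ndarray) -> np.ndarray:
    """Max-min normalization to [0, 1]."""
    return (x - x.min()) / (x.max() - x.min())

def nrmse(image_true: np.ndarray, image_test: np.ndarray) -> float:
    """RMSE normalized by the dynamic range of the true image."""
    t, s = normalize(image_true), normalize(image_test)
    rmse = np.sqrt(np.mean((t - s) ** 2))
    return float(rmse / (t.max() - t.min()))   # denominator is 1 after normalization

err = nrmse(np.random.rand(64, 64), np.random.rand(64, 64))
```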

    APPENDIX G: DECONVBLIND

    Deconvblind is one of the most popular methods for blind deconvolution; it iteratively updates the PSF and the estimated image. For each task, we used the training set of 1000 images and applied the Deconvblind function in MATLAB [53] to estimate the PSF. The 1000 images were provided to Deconvblind as a stack.

    We demonstrate that the Deconvblind approach leads to a trivial solution, i.e., a δ function, for estimating the PSF. We evaluate the performance of Deconvblind and PiMAE on 1000 synthetic images generated from the Sketches data set, where the PSF is generated from a wide-field microscope in focus. As shown in Fig. 16, the PSF estimated by Deconvblind converges to a δ function, which is a trivial solution and results in the estimated emitter image being equal to the raw image. In contrast, the PiMAE-estimated PSF steadily approaches the actual PSF as the number of training epochs increases.
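
    For intuition, a bare-bones sketch of alternating blind Richardson–Lucy updates, the scheme underlying Deconvblind, is shown below; without additional constraints, nothing prevents the PSF estimate from collapsing toward a δ function. This is an illustrative reimplementation under simplifying assumptions, not MATLAB's Deconvblind.

```python
# Rough sketch of blind Richardson-Lucy: alternate multiplicative updates of
# the image and the PSF. The center crop for the PSF update is a simplification.
import numpy as np
from scipy.signal import fftconvolve

def blind_rl(raw, psf_size=15, iters=20, eps=1e-8):
    img = np.full_like(raw, raw.mean())                       # flat image initialization
    psf = np.full((psf_size, psf_size), 1.0 / psf_size**2)    # flat PSF initialization
    cy, cx = raw.shape[0] // 2, raw.shape[1] // 2
    h = psf_size // 2
    for _ in range(iters):
        # image update with the PSF held fixed
        ratio = raw / (fftconvolve(img, psf, mode="same") + eps)
        img = img * fftconvolve(ratio, psf[::-1, ::-1], mode="same")
        # PSF update with the image held fixed, cropped to the PSF support
        ratio = raw / (fftconvolve(img, psf, mode="same") + eps)
        corr = fftconvolve(ratio, img[::-1, ::-1], mode="same")
        psf = psf * corr[cy - h:cy + h + 1, cx - h:cx + h + 1]
        psf = psf / psf.sum()
    return img, psf

img_est, psf_est = blind_rl(np.random.rand(128, 128) + 0.1)
```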

    Figure 16. Iterative optimization in Deconvblind. The PSF estimated by Deconvblind converges to a δ function. The scale bar is 0.5 μm.

    APPENDIX H: DeepSTORM

    We compare the performance of PiMAE with deep-learning-based methods such as DeepSTORM and DECODE, which train neural networks to predict emitter locations using supervised learning. As a baseline for comparison, we reproduce the DeepSTORM method. The original DeepSTORM model is a fully convolutional neural network (FCN), which we upgrade to the U-net architecture [7,54,55], a powerful deep-learning architecture that has shown superior performance in various computer vision tasks (see Fig. 17). Apart from this change, we adhere to the original DeepSTORM design and use the sum of MSE and L1 loss as the loss function.

    Figure 17. Network architecture. (a) Original DeepSTORM architecture; (b) modified DeepSTORM architecture.

    During the training process, we use 1000 images containing randomly positioned emitters simulated using the ImageJ [56] ThunderSTORM [57] plugin. These images are convolved with the PSF of the task, normalized using the mean and averaged standard deviation, and then noise with an intensity of 105 is added to enhance robustness.
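
    A sketch of this training-pair synthesis follows; the emitter sampling scheme and the noise amplitude are illustrative assumptions standing in for the ThunderSTORM-generated data.

```python
# Sketch of supervised training-pair synthesis for the DeepSTORM baseline:
# random emitters convolved with the task PSF, normalized, then noised.
import numpy as np
from scipy.signal import fftconvolve

def make_training_pair(psf: np.ndarray, size=512, n_emitters=200, noise=0.05, rng=None):
    rng = rng or np.random.default_rng(0)
    emitters = np.zeros((size, size))
    ys = rng.integers(0, size, n_emitters)
    xs = rng.integers(0, size, n_emitters)
    emitters[ys, xs] = rng.uniform(0.5, 1.0, n_emitters)   # random positions/intensities
    raw = fftconvolve(emitters, psf, mode="same")          # blur with the task PSF
    raw = (raw - raw.mean()) / (raw.std() + 1e-8)          # normalization
    raw += noise * rng.standard_normal(raw.shape)          # noise for robustness
    return raw, emitters                                   # (network input, target)

raw, target = make_training_pair(np.ones((9, 9)) / 81)
```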

    APPENDIX I: EVALUATION RESULTS OF ADDING SPECKLE NOISE TO SYNTHETIC DATA

    Speckle noise is a type of granular noise texture that can degrade image quality in coherent imaging systems such as medical ultrasound, optical coherence tomography, and radar and synthetic aperture radar (SAR) systems. It is a multiplicative noise that is proportional to the image intensity. The probability density function of speckle noise can be described by an exponential distribution,
    $$p(z) = \frac{1}{\sigma^2} \exp\!\left(-\frac{z}{\sigma^2}\right).$$

    Here, z represents the intensity, and σ2 represents the speckle noise variance. To evaluate the impact of speckle noise on estimating PSF and emitters, we use LG22 as the PSF and Sketches as the emitters. We construct three sets of data with noise variances of 0.1, 1, and 2, respectively, each containing 1000 training images and 100 test images. We use the NRMSE metric to evaluate the results, as shown in Fig. 18.
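
    A sketch of the corresponding corruption step, drawing multiplicative speckle from the exponential distribution above (with scale σ²), under the assumption that the noise multiplies the image intensity:

```python
# Sketch of multiplicative speckle corruption with exponentially distributed
# noise: p(z) = (1/s) exp(-z/s) with s = sigma^2, per the definition above.
import numpy as np

def add_speckle(raw: np.ndarray, variance: float = 1.0, rng=None) -> np.ndarray:
    rng = rng or np.random.default_rng(0)
    speckle = rng.exponential(scale=variance, size=raw.shape)
    return raw * speckle            # intensity-proportional (multiplicative) noise

noisy = add_speckle(np.random.rand(512, 512), variance=1.0)
```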

    Figure 18. Evaluation of speckle noise robustness. (a) The estimated PSF and emitters result from synthetic data with speckle noise. The scale bar is 0.5 μm. (b) NRMSE of estimated emitters from synthetic data with speckle noise; (c) NRMSE of estimated PSF from synthetic data with speckle noise.

    APPENDIX J: THE RESULTS USING MS-SSIM AS THE METRIC

    Results of Out-of-Focus Synthetic Data

    In this section, we present the results of synthetic data with varying out-of-focus distances, assessed using the MS-SSIM metric. Gaussian noise at a scale of noise_std/raw_mean = 0.5 is added to each synthetic data set. The results are displayed in Fig. 19.

    Figure 19. MS-SSIM of the results of estimated emitters from out-of-focus synthetic data.

    Figure 20. MS-SSIM of the results of estimated emitters from the SPADE Sketches data set.

    Figure 21. Noise robustness. (a) MS-SSIM of the results of estimated emitters from the in-focus Sketches data set; (b) MS-SSIM of the results of estimated emitters from the 600 nm out-of-focus Sketches data set; (c) MS-SSIM of the results of estimated emitters from the LG22 mode Sketches data set.

    APPENDIX K: RESULTS OF REAL-WORLD EXPERIMENTS

    We evaluate PiMAE in two real-world experiments. First, we utilize the imaging results of ER structures obtained from both wide-field microscopy and SIM in the BioSR data set [40]. Second, we use a custom-built wide-field microscope to image NV color centers in diamond. The ability of PiMAE to handle non-Gaussian PSFs is evaluated in both out-of-focus and aberration scenarios.

    Results of ER

    Figure 22 shows the results of wide-field microscopy, SIM, and PiMAE-resolved wide-field microscopy of the ER. Figure 23 demonstrates that PiMAE avoids the reconstruction artifacts seen in SIM.

    Figure 22. Comparison of ER imaging results. Here we show some comparative results of wide-field microscopy, SIM, and PiMAE-resolved wide-field microscopy. The length of the scale bar is 2.50 μm. Data from BioSR data set [40].

    Figure 23. Artifacts in superresolution images reconstructed using SIM. Reconstruction artifacts are a common issue in SIM-reconstructed images, as evidenced in (c) and (d), due to factors such as nonuniform fringe patterns or phase errors in the reconstruction process. In comparison, the PiMAE-estimated emitters do not exhibit these artifact problems. The scale bar is 1.00 μm.

    Figure 24. Wide-field microscopy imaging of NV color centers. (a) Comparison of wide-field microscopy results and PiMAE estimated emitters results at different out-of-focus distances, with invariant field of view from top to bottom, and different field of view on the left and right, respectively; the scale bar is 2.50 μm. (b) Wide-field microscopy results and PiMAE estimated emitters of nonstandard PSF when the objective is mismatched to the coverslip; the scale bar is 6.40 μm.

    APPENDIX L: SUMMARY OF RESULTS

    In this section, we summarize the results of all the synthetic tasks in Table 1.

    Table 1. Summary of Synthetic Data Experimentsᵃ

    Columns 5–7 give the NRMSE for emitters (PiMAE, DeepSTORM, Richardson–Lucy); columns 8–9 give the NRMSE for the PSF (PiMAE, Deconvblind).

    | Task | PSF | Emitters | Noise | PiMAE (emitters) | DeepSTORM (emitters) | RL (emitters) | PiMAE (PSF) | DB (PSF) |
    |------|-----|----------|-------|------------------|----------------------|---------------|-------------|----------|
    | 1 | 1400 nm | Sketches | 0.5 | 0.090 | 0.111 | 0.257 | 0.070 | 0.195 |
    | 2 | 1200 nm | Sketches | 0.5 | 0.090 | 0.106 | 0.238 | 0.075 | 0.144 |
    | 3 | 1000 nm | Sketches | 0.5 | 0.093 | 0.110 | 0.232 | 0.083 | 0.098 |
    | 4 | 800 nm | Sketches | 0.5 | 0.080 | 0.103 | 0.201 | 0.029 | 0.062 |
    | 5 | 600 nm | Sketches | 0.5 | 0.073 | 0.092 | 0.163 | 0.018 | 0.059 |
    | 6 | 400 nm | Sketches | 0.5 | 0.074 | 0.081 | 0.140 | 0.018 | 0.051 |
    | 7 | 200 nm | Sketches | 0.5 | 0.072 | 0.078 | 0.130 | 0.023 | 0.048 |
    | 8 | 0 nm | Sketches | 0.5 | 0.071 | 0.084 | 0.124 | 0.022 | 0.045 |
    | 9 | 0 nm | Sketches | 2 | 0.089 | 0.139 | 0.198 | 0.045 | 0.078 |
    | 10 | 0 nm | Sketches | 1 | 0.085 | 0.105 | 0.156 | 0.031 | 0.064 |
    | 11 | 0 nm | Sketches | 0.5 | 0.071 | 0.079 | 0.124 | 0.022 | 0.045 |
    | 12 | 0 nm | Sketches | 0.1 | 0.068 | 0.066 | 0.091 | 0.021 | 0.042 |
    | 13 | 0 nm | Sketches | 0.01 | 0.068 | 0.065 | 0.082 | 0.021 | 0.165 |
    | 14 | 600 nm | Sketches | 2 | 0.095 | 0.144 | 0.231 | 0.019 | 0.076 |
    | 15 | 600 nm | Sketches | 1 | 0.091 | 0.111 | 0.185 | 0.016 | 0.070 |
    | 16 | 600 nm | Sketches | 0.5 | 0.073 | 0.092 | 0.163 | 0.018 | 0.058 |
    | 17 | 600 nm | Sketches | 0.1 | 0.066 | 0.073 | 0.142 | 0.023 | 0.030 |
    | 18 | 600 nm | Sketches | 0.01 | 0.068 | 0.070 | 0.135 | 0.023 | 0.937 |
    | 19 | HG22 | Sketches | 0.5 | 0.075 | 0.098 | 0.151 | 0.028 | 0.156 |
    | 20 | HG31 | Sketches | 0.5 | 0.072 | 0.097 | 0.147 | 0.029 | 0.161 |
    | 21 | LG11 | Sketches | 0.5 | 0.072 | 0.098 | 0.154 | 0.016 | 0.088 |
    | 22 | LG22 | Sketches | 0.5 | 0.073 | 0.094 | 0.179 | 0.042 | 0.042 |
    | 23 | LG22 | Sketches | 2 | 0.100 | 0.128 | 0.307 | 0.069 | 0.105 |
    | 24 | LG22 | Sketches | 1 | 0.078 | 0.104 | 0.235 | 0.048 | 0.100 |
    | 25 | LG22 | Sketches | 0.5 | 0.063 | 0.094 | 0.179 | 0.029 | 0.098 |
    | 26 | LG22 | Sketches | 0.1 | 0.056 | 0.082 | 0.117 | 0.017 | 0.095 |
    | 27 | LG22 | Sketches | 0.01 | 0.061 | 0.080 | 0.105 | 0.022 | 2.761 |
    | 28 | LG22 | Lines/n=10 | 0.01 | 0.040 | 0.049 | 0.153 | 0.028 | 0.352 |
    | 29 | LG22 | Lines/n=20 | 0.01 | 0.058 | 0.074 | 0.193 | 0.037 | 0.156 |
    | 30 | LG22 | Lines/n=50 | 0.01 | 0.096 | 0.119 | 0.213 | 0.059 | 0.102 |
    | 31 | LG22 | Lines/n=100 | 0.01 | 0.158 | 0.171 | 0.216 | 0.130 | 0.103 |
    | 32 | LG22 | Sketches/speckle noise | 2 | 0.085 | 0.155 | 0.128 | 0.026 | 0.309 |
    | 33 | LG22 | Sketches/speckle noise | 1 | 0.078 | 0.128 | 0.129 | 0.043 | 0.896 |
    | 34 | LG22 | Sketches/speckle noise | 0.1 | 0.075 | 0.084 | 0.110 | 0.057 | 2.871 |
    | 35 | USTC | Sketches | 0.01 | 0.086 | 0.114 | 0.160 | 0.135 | 0.187 |

    ᵃThe training set consists of 1000 images, and the test set consists of 100 images.

    References

    [1] S.-H. Lee, J. Y. Shin, A. Lee. Counting single photoactivatable fluorescent molecules by photoactivated localization microscopy (PALM). Proc. Natl. Acad. Sci. USA, 109, 17436-17441(2012).

    [2] M. J. Rust, M. Bates, X. Zhuang. Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nat. Methods, 3, 793-796(2006).

    [3] K. K. Bearne, Y. Zhou, B. Braverman. Confocal super-resolution microscopy based on a spatial mode sorter. Opt. Express, 29, 11784-11792(2021).

    [4] S. W. Hell, J. Wichmann. Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy. Opt. Lett., 19, 780-782(1994).

    [5] X. Chen, C. Zou, Z. Gong. Subdiffraction optical manipulation of the charge state of nitrogen vacancy center in diamond. Light Sci. Appl., 4, e230(2015).

    [6] E. Nehme, L. E. Weiss, T. Michaeli. Deep-STORM: super-resolution single-molecule microscopy by deep learning. Optica, 5, 458-464(2018).

    [7] A. Speiser, L.-R. Müller, P. Hoess. Deep learning enables fast and dense single-molecule localization with high accuracy. Nat. Methods, 18, 1082-1090(2021).

    [8] D. S. Biggs, M. Andrews. Acceleration of iterative image restoration algorithms. Appl. Opt., 36, 1766-1775(1997).

    [9] T. F. Chan, C.-K. Wong. Total variation blind deconvolution. IEEE Trans. Image Process., 7, 370-375(1998).

    [10] D. Krishnan, T. Tay, R. Fergus. Blind deconvolution using a normalized sparsity measure. Conference on Computer Vision and Pattern Recognition (CVPR), 233-240(2011).

    [11] G. Liu, S. Chang, Y. Ma. Blind image deblurring using spectral properties of convolution operators. IEEE Trans. Image Process., 23, 5047-5056(2014).

    [12] T. Michaeli, M. Irani. Blind deblurring using internal patch recurrence. European Conference on Computer Vision, 783-798(2014).

    [13] J. Pan, D. Sun, H. Pfister. Deblurring images via dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell., 40, 2315-2328(2017).

    [14] J. Pan, Z. Hu, Z. Su. L0-regularized intensity and gradient prior for deblurring text images and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 39, 342-355(2016).

    [15] W. Ren, X. Cao, J. Pan. Image deblurring via enhanced low-rank prior. IEEE Trans. Image Process., 25, 3426-3437(2016).

    [16] L. Sun, S. Cho, J. Wang. Edge-based blur kernel estimation using patch priors. IEEE International Conference on Computational Photography (ICCP), 1-8(2013).

    [17] Y. Yan, W. Ren, Y. Guo. Image deblurring via extreme channels prior. IEEE Conference on Computer Vision and Pattern Recognition, 4003-4011(2017).

    [18] W. Zuo, D. Ren, D. Zhang. Learning iteration-wise generalized shrinkage–thresholding operators for blind deconvolution. IEEE Trans. Image Process., 25, 1751-1764(2016).

    [19] A. Shajkofci, M. Liebling. Spatially-variant CNN-based point spread function estimation for blind deconvolution and depth estimation in optical microscopy. IEEE Trans. Image Process., 29, 5848-5861(2020).

    [20] L. B. Lucy. An iterative technique for the rectification of observed distributions. Astron. J., 79, 745(1974).

    [21] A. van den Oord, Y. Li, O. Vinyals. Representation learning with contrastive predictive coding. arXiv(2018).

    [22] Z. Wu, Y. Xiong, S. X. Yu. Unsupervised feature learning via non-parametric instance discrimination. IEEE Conference on Computer Vision and Pattern Recognition, 3733-3742(2018).

    [23] K. He, H. Fan, Y. Wu. Momentum contrast for unsupervised visual representation learning. IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9729-9738(2020).

    [24] T. Chen, S. Kornblith, M. Norouzi. A simple framework for contrastive learning of visual representations. International Conference on Machine Learning (PMLR), 1597-1607(2020).

    [25] C. Doersch, A. Gupta, A. A. Efros. Unsupervised visual representation learning by context prediction. IEEE International Conference on Computer Vision, 1422-1430(2015).

    [26] A. Dosovitskiy, J. T. Springenberg, M. Riedmiller. Discriminative unsupervised feature learning with convolutional neural networks. arXiv(2014).

    [27] J. Devlin, M.-W. Chang, K. Lee. BERT: pre-training of deep bidirectional transformers for language understanding. arXiv(2018).

    [28] T. Chen, S. Liu, S. Chang. Adversarial robustness: from self-supervised pre-training to fine-tuning. IEEE/CVF Conference on Computer Vision and Pattern Recognition, 699-708(2020).

    [29] X. Chen, W. Chen, T. Chen. Self-PU: self boosted and calibrated positive-unlabeled training. International Conference on Machine Learning (PMLR), 1510-1519(2020).

    [30] M. Chen, A. Radford, R. Child. Generative pretraining from pixels. International Conference on Machine Learning (PMLR), 1691-1703(2020).

    [31] O. Henaff. Data-efficient image recognition with contrastive predictive coding. International Conference on Machine Learning (PMLR), 4182-4192(2020).

    [32] D. Pathak, P. Krahenbuhl, J. Donahue. Context encoders: feature learning by inpainting. IEEE Conference on Computer Vision and Pattern Recognition, 2536-2544(2016).

    [33] T. H. Trinh, M.-T. Luong, Q. V. Le. Selfie: self-supervised pretraining for image embedding. arXiv(2019).

    [34] K. He, X. Chen, S. Xie. Masked autoencoders are scalable vision learners. arXiv(2021).

    [35] A. Dosovitskiy, L. Beyer, A. Kolesnikov. An image is worth 16 × 16 words: transformers for image recognition at scale. arXiv(2020).

    [36] D. Ulyanov, A. Vedaldi, V. Lempitsky. Deep image prior. IEEE Conference on Computer Vision and Pattern Recognition, 9446-9454(2018).

    [37] T.-Y. Lin, M. Maire, S. Belongie. Microsoft COCO: common objects in context. European Conference on Computer Vision, 740-755(2014).

    [38] L. Liu, H. Jiang, P. He. On the variance of the adaptive learning rate and beyond. 8th International Conference on Learning Representations (ICLR), 1-13(2020).

    [39] M. Eitz, J. Hays, M. Alexa. How do humans sketch objects?. ACM Trans. Graph., 31, 44(2012).

    [40] C. Qiao, D. Li, Y. Guo. Evaluation and development of deep neural networks for image super-resolution in optical microscopy. Nat. Methods, 18, 194-202(2021).

    [41] A. Makandar, D. Mulimani, M. Jevoor. Comparative study of different noise models and effective filtering techniques. Int. J. Sci. Res., 3, 458-464(2013).

    [42] M. Tsang, R. Nair, X.-M. Lu. Quantum theory of superresolution for two incoherent optical point sources. Phys. Rev. X, 6, 031033(2016).

    [43] K. Y. Han, K. I. Willig, E. Rittweger. Three-dimensional stimulated emission depletion microscopy of nitrogen-vacancy centers in diamond using continuous-wave light. Nano Lett., 9, 3323-3329(2009).

    [44] X.-D. Chen, E.-H. Wang, L.-K. Shan. Focusing the electromagnetic field to 10^−6 λ for ultra-high enhancement of field-matter interaction. Nat. Commun., 12, 6389(2021).

    [45] C. L. Degen, F. Reinhard, P. Cappellaro. Quantum sensing. Rev. Mod. Phys., 89, 035002(2017).

    [46] Z.-H. Wang. PiMAE(2022).

    [47] Y. Zhang, D. Zhou, S. Chen. Single-image crowd counting via multi-column convolutional neural network. IEEE Conference on Computer Vision and Pattern Recognition, 589-597(2016).

    [48] T. Xiao, P. Dollar, M. Singh. Early convolutions help transformers see better. arXiv(2021).

    [49] H. Zhao, O. Gallo, I. Frosio. Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging, 3, 47-57(2017).

    [50] B. Zhang, J. Zerubia, J.-C. Olivo-Marin. Gaussian approximations of fluorescence microscope point-spread function models. Appl. Opt., 46, 1819-1829(2007).

    [51] M. W. Beijersbergen, L. Allen, H. Van der Veen. Astigmatic laser mode converters and transfer of orbital angular momentum. Opt. Commun., 96, 123-132(1993).

    [52] Z. Wang, E. P. Simoncelli, A. C. Bovik. Multiscale structural similarity for image quality assessment. 37th Asilomar Conference on Signals, Systems & Computers, 2, 1398-1402(2003).

    [53] MATLAB, https://www.mathworks.com/products/matlab.html.

    [54] O. Ronneberger, P. Fischer, T. Brox. U-NET: convolutional networks for biomedical image segmentation. International Conference on Medical Image Computing and Computer-assisted Intervention, 234-241(2015).

    [55] S. K. Gaire, E. Flowerday, J. Frederick. Deep learning-based spectroscopic single-molecule localization microscopy for simultaneous multicolor imaging. Computational Optical Sensing and Imaging, CTu5F-4(2022).

    [56] T. J. Collins. ImageJ for microscopy. Biotechniques, 43, S25-S30(2007).

    [57] M. Ovesný, P. Křížek, J. Borkovec. ThunderSTORM: a comprehensive ImageJ plug-in for PALM and STORM data analysis and super-resolution imaging. Bioinformatics, 30, 2389-2390(2014).
