Photonics Research, Vol. 12, Issue 1, 7 (2024)
Ze-Hao Wang1,2,†, Long-Kun Shan1,2,†, Tong-Tian Weng1,2, Tian-Long Chen3, Xiang-Dong Chen1,2,4, Zhang-Yang Wang3, Guang-Can Guo1,2,4, and Fang-Wen Sun1,2,4,*
Author Affiliations
  • 1CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, China
  • 2CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
  • 3University of Texas at Austin, Austin, Texas 78705, USA
  • 4Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
    DOI: 10.1364/PRJ.488310
    Ze-Hao Wang, Long-Kun Shan, Tong-Tian Weng, Tian-Long Chen, Xiang-Dong Chen, Zhang-Yang Wang, Guang-Can Guo, Fang-Wen Sun. Learning the imaging mechanism directly from optical microscopy observations[J]. Photonics Research, 2024, 12(1): 7
    References

    [1] S.-H. Lee, J. Y. Shin, A. Lee. Counting single photoactivatable fluorescent molecules by photoactivated localization microscopy (PALM). Proc. Natl. Acad. Sci. USA, 109, 17436-17441(2012).

    [2] M. J. Rust, M. Bates, X. Zhuang. Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nat. Methods, 3, 793-796(2006).

    [3] K. K. Bearne, Y. Zhou, B. Braverman. Confocal super-resolution microscopy based on a spatial mode sorter. Opt. Express, 29, 11784-11792(2021).

    [4] S. W. Hell, J. Wichmann. Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy. Opt. Lett., 19, 780-782(1994).

    [5] X. Chen, C. Zou, Z. Gong. Subdiffraction optical manipulation of the charge state of nitrogen vacancy center in diamond. Light Sci. Appl., 4, e230(2015).

    [6] E. Nehme, L. E. Weiss, T. Michaeli. Deep-STORM: super-resolution single-molecule microscopy by deep learning. Optica, 5, 458-464(2018).

    [7] A. Speiser, L.-R. Müller, P. Hoess. Deep learning enables fast and dense single-molecule localization with high accuracy. Nat. Methods, 18, 1082-1090(2021).

    [8] D. S. Biggs, M. Andrews. Acceleration of iterative image restoration algorithms. Appl. Opt., 36, 1766-1775(1997).

    [9] T. F. Chan, C.-K. Wong. Total variation blind deconvolution. IEEE Trans. Image Process., 7, 370-375(1998).

    [10] D. Krishnan, T. Tay, R. Fergus. Blind deconvolution using a normalized sparsity measure. Conference on Computer Vision and Pattern Recognition (CVPR), 233-240(2011).

    [11] G. Liu, S. Chang, Y. Ma. Blind image deblurring using spectral properties of convolution operators. IEEE Trans. Image Process., 23, 5047-5056(2014).

    [12] T. Michaeli, M. Irani. Blind deblurring using internal patch recurrence. European Conference on Computer Vision, 783-798(2014).

    [13] J. Pan, D. Sun, H. Pfister. Deblurring images via dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell., 40, 2315-2328(2017).

    [14] J. Pan, Z. Hu, Z. Su. L0-regularized intensity and gradient prior for deblurring text images and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 39, 342-355(2016).

    [15] W. Ren, X. Cao, J. Pan. Image deblurring via enhanced low-rank prior. IEEE Trans. Image Process., 25, 3426-3437(2016).

    [16] L. Sun, S. Cho, J. Wang. Edge-based blur kernel estimation using patch priors. IEEE International Conference on Computational Photography (ICCP), 1-8(2013).

    [17] Y. Yan, W. Ren, Y. Guo. Image deblurring via extreme channels prior. IEEE Conference on Computer Vision and Pattern Recognition, 4003-4011(2017).

    [18] W. Zuo, D. Ren, D. Zhang. Learning iteration-wise generalized shrinkage–thresholding operators for blind deconvolution. IEEE Trans. Image Process., 25, 1751-1764(2016).

    [19] A. Shajkofci, M. Liebling. Spatially-variant CNN-based point spread function estimation for blind deconvolution and depth estimation in optical microscopy. IEEE Trans. Image Process., 29, 5848-5861(2020).

    [20] L. B. Lucy. An iterative technique for the rectification of observed distributions. Astron. J., 79, 745(1974).

    [21] A. van den Oord, Y. Li, O. Vinyals. Representation learning with contrastive predictive coding. arXiv(2018).

    [22] Z. Wu, Y. Xiong, S. X. Yu. Unsupervised feature learning via non-parametric instance discrimination. IEEE Conference on Computer Vision and Pattern Recognition, 3733-3742(2018).

    [23] K. He, H. Fan, Y. Wu. Momentum contrast for unsupervised visual representation learning. IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9729-9738(2020).

    [24] T. Chen, S. Kornblith, M. Norouzi. A simple framework for contrastive learning of visual representations. International Conference on Machine Learning (PMLR), 1597-1607(2020).

    [25] C. Doersch, A. Gupta, A. A. Efros. Unsupervised visual representation learning by context prediction. IEEE International Conference on Computer Vision, 1422-1430(2015).

    [26] A. Dosovitskiy, J. T. Springenberg, M. Riedmiller. Discriminative unsupervised feature learning with convolutional neural networks. arXiv(2014).

    [27] J. Devlin, M.-W. Chang, K. Lee. BERT: pre-training of deep bidirectional transformers for language understanding. arXiv(2018).

    [28] T. Chen, S. Liu, S. Chang. Adversarial robustness: from self-supervised pre-training to fine-tuning. IEEE/CVF Conference on Computer Vision and Pattern Recognition, 699-708(2020).

    [29] X. Chen, W. Chen, T. Chen. Self-PU: self boosted and calibrated positive-unlabeled training. International Conference on Machine Learning (PMLR), 1510-1519(2020).

    [30] M. Chen, A. Radford, R. Child. Generative pretraining from pixels. International Conference on Machine Learning (PMLR), 1691-1703(2020).

    [31] O. Henaff. Data-efficient image recognition with contrastive predictive coding. International Conference on Machine Learning (PMLR), 4182-4192(2020).

    [32] D. Pathak, P. Krahenbuhl, J. Donahue. Context encoders: feature learning by inpainting. IEEE Conference on Computer Vision and Pattern Recognition, 2536-2544(2016).

    [33] T. H. Trinh, M.-T. Luong, Q. V. Le. Selfie: self-supervised pretraining for image embedding. arXiv(2019).

    [34] K. He, X. Chen, S. Xie. Masked autoencoders are scalable vision learners. arXiv(2021).

    [35] A. Dosovitskiy, L. Beyer, A. Kolesnikov. An image is worth 16 × 16 words: transformers for image recognition at scale. arXiv(2020).

    [36] D. Ulyanov, A. Vedaldi, V. Lempitsky. Deep image prior. IEEE Conference on Computer Vision and Pattern Recognition, 9446-9454(2018).

    [37] T.-Y. Lin, M. Maire, S. Belongie. Microsoft COCO: common objects in context. European Conference on Computer Vision, 740-755(2014).

    [38] L. Liu, H. Jiang, P. He. On the variance of the adaptive learning rate and beyond. 8th International Conference on Learning Representations (ICLR), 1-13(2020).

    [39] M. Eitz, J. Hays, M. Alexa. How do humans sketch objects? ACM Trans. Graph., 31, 44(2012).

    [40] C. Qiao, D. Li, Y. Guo. Evaluation and development of deep neural networks for image super-resolution in optical microscopy. Nat. Methods, 18, 194-202(2021).

    [41] A. Makandar, D. Mulimani, M. Jevoor. Comparative study of different noise models and effective filtering techniques. Int. J. Sci. Res., 3, 458-464(2013).

    [42] M. Tsang, R. Nair, X.-M. Lu. Quantum theory of superresolution for two incoherent optical point sources. Phys. Rev. X, 6, 031033(2016).

    [43] K. Y. Han, K. I. Willig, E. Rittweger. Three-dimensional stimulated emission depletion microscopy of nitrogen-vacancy centers in diamond using continuous-wave light. Nano Lett., 9, 3323-3329(2009).

    [44] X.-D. Chen, E.-H. Wang, L.-K. Shan. Focusing the electromagnetic field to 10⁻⁶λ for ultra-high enhancement of field-matter interaction. Nat. Commun., 12, 6389(2021).

    [45] C. L. Degen, F. Reinhard, P. Cappellaro. Quantum sensing. Rev. Mod. Phys., 89, 035002(2017).

    [46] Z.-H. Wang. PiMAE(2022).

    [47] Y. Zhang, D. Zhou, S. Chen. Single-image crowd counting via multi-column convolutional neural network. IEEE Conference on Computer Vision and Pattern Recognition, 589-597(2016).

    [48] T. Xiao, P. Dollar, M. Singh. Early convolutions help transformers see better. arXiv(2021).

    [49] H. Zhao, O. Gallo, I. Frosio. Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging, 3, 47-57(2016).

    [50] B. Zhang, J. Zerubia, J.-C. Olivo-Marin. Gaussian approximations of fluorescence microscope point-spread function models. Appl. Opt., 46, 1819-1829(2007).

    [51] M. W. Beijersbergen, L. Allen, H. Van der Veen. Astigmatic laser mode converters and transfer of orbital angular momentum. Opt. Commun., 96, 123-132(1993).

    [52] Z. Wang, E. P. Simoncelli, A. C. Bovik. Multiscale structural similarity for image quality assessment. 37th Asilomar Conference on Signals, Systems & Computers, 2, 1398-1402(2003).

    [53] https://www.mathworks.com/products/matlab.html

    [54] O. Ronneberger, P. Fischer, T. Brox. U-Net: convolutional networks for biomedical image segmentation. International Conference on Medical Image Computing and Computer-Assisted Intervention, 234-241(2015).

    [55] S. K. Gaire, E. Flowerday, J. Frederick. Deep learning-based spectroscopic single-molecule localization microscopy for simultaneous multicolor imaging. Computational Optical Sensing and Imaging, CTu5F-4(2022).

    [56] T. J. Collins. ImageJ for microscopy. BioTechniques, 43, S25-S30(2007).

    [57] M. Ovesný, P. Křížek, J. Borkovec. ThunderSTORM: a comprehensive ImageJ plug-in for PALM and STORM data analysis and super-resolution imaging. Bioinformatics, 30, 2389-2390(2014).
