Deep Learning-based Optical Aberration Estimation Enables Offline Digital Adaptive Optics and Super-resolution Imaging

Ideal optical imaging relies on high-quality focusing of the excitation light and accurate detection of the emission light from the fluorescent sample. However, both the optics of the microscope and the biological sample under investigation can introduce aberrations, causing degraded resolution, loss of fluorescent photons, and a deteriorated signal-to-background ratio (SBR). Moreover, microscopes with high numerical apertures (NA), especially super-resolution microscopes, are more sensitive to aberrations, because high-NA objectives are more susceptible to high-order aberrations. To detect and correct these optical aberrations, a large number of adaptive optics (AO) technologies have been explored over the last two decades. Conventional AO uses dedicated devices such as the Shack-Hartmann wavefront sensor to measure optical aberrations, and then employs wavefront-corrective devices such as spatial light modulators (SLMs) to compensate for the measured aberrations by reshaping the wavefront. However, conventional AO complicates the optics, imaging procedures, and computation, which imposes many limitations in practical imaging.

 

Recently, a research team led by Dr. Dong Li from the Institute of Biophysics, Chinese Academy of Sciences, has made significant progress in adaptive-optics super-resolution imaging. They proposed an aberration-aware image super-resolution reconstruction method based on space-frequency encoding neural networks. This work was published in the third issue of 2024 of the journal Photonics Research and was selected by the editor-in-chief as the On the Cover article (Chang Qiao, Haoyu Chen, Run Wang, Tao Jiang, Yuwang Wang, Dong Li. Deep learning-based optical aberration estimation enables offline digital adaptive optics and super-resolution imaging[J]. Photonics Research, 2024, 12(3): 474).

 

In optical imaging systems, the image quality as well as the aberrations are typically characterized by the point spread function (PSF), which is implicitly encoded in any specimen patch of a microscope image. Inspired by this understanding, the researchers devised a space-frequency encoding network (SFE-Net), shown in Fig. 1, which is trained to directly extract the aberrated PSF from a patch of the microscope image, achieving fast, high-precision estimation of spatially non-uniform optical aberrations.
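The physical intuition behind this design can be sketched numerically: because image formation is a convolution with the PSF, every patch's Fourier spectrum is the object spectrum shaped by the optical transfer function (OTF), so the aberration signature is present in the frequency content of any patch. The toy object and anisotropic-Gaussian "aberrated" PSF below are illustrative stand-ins, not the paper's simulation model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy object: sparse point-like emitters (stand-in for a specimen patch).
obj = np.zeros((64, 64))
obj[rng.integers(0, 64, 40), rng.integers(0, 64, 40)] = 1.0

# Toy aberrated PSF: an anisotropic Gaussian (stand-in for a real aberrated PSF).
y, x = np.mgrid[-32:32, -32:32]
psf = np.exp(-(x**2 / (2 * 2.0**2) + y**2 / (2 * 5.0**2)))
psf /= psf.sum()

# Image formation: convolution in space = multiplication in frequency.
OTF = np.fft.fft2(np.fft.ifftshift(psf))
img = np.real(np.fft.ifft2(np.fft.fft2(obj) * OTF))

# The patch's spectrum is the object spectrum shaped by the OTF, so the
# aberration is implicitly encoded in every patch's frequency content.
assert np.allclose(np.fft.fft2(img), np.fft.fft2(obj) * OTF)
```

This space-frequency relationship is what a network operating jointly on spatial and Fourier features can exploit to invert the patch back to its PSF.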

 

Fig. 1. Network Architecture of Space-frequency Encoding Network. (a-e) Network architecture of the SFE-Net (a), residual group (b), double convolutional block (c), downscale block (d), and upscale block (e).

 

The SFE-Net outperformed other kernel estimation algorithms, including KernelGAN, IKC, and MANet, when evaluated on several simulated datasets generated with aberrated PSFs composed of different orders of Zernike polynomials. SFE-Net accurately estimates complex aberrated PSFs composed of up to 18 orders of Zernike polynomials with high fidelity. Additionally, unlike existing indirect wavefront sensing methods, which involve time-consuming iterative acquisition and optimization procedures, SFE-Net can estimate aberrations from a single frame in ~30 milliseconds. This makes it suitable for imaging long-term bioprocesses in which optical aberrations vary over time and need to be measured and corrected promptly.
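How such Zernike-parameterized aberrated PSFs can be simulated is sketched below under a simple scalar-diffraction pupil model: a phase composed of a few Zernike modes is applied to a circular pupil, and the PSF is the squared magnitude of its Fourier transform. The mode names, normalizations, and pupil parameters here are illustrative assumptions, not the paper's exact data-generation pipeline.

```python
import numpy as np

def aberrated_psf(coeffs, n=128, na_radius=0.4):
    """Toy scalar-diffraction PSF with Zernike phase aberrations.

    coeffs: dict {mode_name: amplitude in radians}; the modes and pupil
    model are illustrative, not the paper's exact simulation.
    """
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    rho = np.hypot(x, y) / na_radius          # normalized pupil radius
    theta = np.arctan2(y, x)
    pupil = (rho <= 1.0).astype(float)        # circular aperture

    # A few low-order Zernike modes (unnormalized polar forms).
    modes = {
        "defocus":   2 * rho**2 - 1,
        "coma_x":    (3 * rho**3 - 2 * rho) * np.cos(theta),
        "spherical": 6 * rho**4 - 6 * rho**2 + 1,
    }
    phase = sum(a * modes[name] for name, a in coeffs.items())

    # PSF = |FT of the aberrated pupil field|^2, normalized to unit energy.
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field))))**2
    return psf / psf.sum()

psf = aberrated_psf({"defocus": 1.0, "coma_x": 0.5})
```

Training pairs for a kernel estimator can then be built by convolving clean images with PSFs drawn from random Zernike coefficients of increasing order.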

 

Fig. 2. Optical Aberration Estimation via SFE-Net. (a) Representative aberrated PSFs estimated by KernelGAN, IKC, MANet, and SFE-Net from WF images of CCPs, ER, and MTs. Four groups of datasets with escalating aberration complexity were generated, corresponding to Zernike polynomials of orders 4-6, 4-8, 4-13, and 4-18. The top and bottom rows show the input WF images and ground-truth (GT) PSF images for reference. Scale bar, 1 μm. (b) Statistical comparisons (n=30) of KernelGAN, IKC, MANet, and SFE-Net (the proposed method) in terms of peak signal-to-noise ratio (PSNR) on different training and testing datasets.

 

Integrated with deconvolution algorithms, the accurate PSF estimation provided by SFE-Net offers a straightforward yet efficient way to numerically compensate optical aberrations and improve spatial resolution in an unsupervised manner. To further enhance resolution while removing optical aberrations from biological images, the researchers integrated PSF priors into the design of a deep-learning super-resolution (DLSR) network, devising the spatial feature transform-guided deep Fourier channel attention network (SFT-DFCAN). As shown in Fig. 3, principal component analysis (PCA) projects the PSF into a low-dimensional embedding, which is combined with the spatial feature maps in the spatial feature transform-guided Fourier channel attention block, enabling aberration-aware image super-resolution reconstruction.
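The deconvolution route can be sketched with the standard Richardson-Lucy algorithm, which iteratively inverts a known (here, estimated) PSF; this is a generic textbook implementation, not necessarily the exact deconvolution used in the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
    """Minimal Richardson-Lucy deconvolution with an estimated PSF."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1, ::-1]                 # adjoint of the blur operator
    estimate = np.full_like(image, image.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)        # data-fidelity ratio
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
    return estimate

# Toy check: blur a point source, then deconvolve with the known PSF.
y, x = np.mgrid[-15:16, -15:16]
psf = np.exp(-(x**2 + y**2) / (2 * 3.0**2)); psf /= psf.sum()
obj = np.zeros((63, 63)); obj[31, 31] = 1.0
blurred = fftconvolve(obj, psf, mode="same")
restored = richardson_lucy(blurred, psf)
# The restored image should be sharper (more peaked) than the blurred one.
assert restored.max() > blurred.max()
```

Feeding the SFE-Net-estimated PSF into such a deconvolution is what makes the aberration compensation "offline": no wavefront sensor or corrective hardware is involved.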

 

Fig. 3. Network Architecture of Spatial Feature Transform-guided Deep Fourier Channel Attention Network (SFT-DFCAN). (a-c) Network architecture of the SFT-DFCAN (a), spatial feature transform-guided Fourier channel attention block (FCAB) (b), and the Fourier channel attention (FCA) layer (c).
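The PSF-conditioning idea behind this architecture can be sketched as follows: PCA (via SVD of a PSF bank) yields a compact embedding of each PSF, and an SFT-style layer maps that embedding to scale and shift parameters that modulate the feature maps. All array sizes, weight matrices, and the per-channel (rather than per-pixel) modulation here are simplifying assumptions for illustration, not the SFT-DFCAN implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# A bank of example PSFs (flattened) from which PCA components are learned.
psf_bank = rng.random((200, 33 * 33))            # 200 PSFs of 33x33 pixels
mean = psf_bank.mean(axis=0)
k = 10                                           # embedding dimension
_, _, Vt = np.linalg.svd(psf_bank - mean, full_matrices=False)
components = Vt[:k]                              # top-k principal axes, (k, 33*33)

def psf_embedding(psf):
    """Project a 33x33 PSF onto the k principal components."""
    return components @ (psf.ravel() - mean)     # (k,)

def sft_modulation(features, embedding, W_gamma, W_beta):
    """SFT-style conditioning (simplified to per-channel): the PSF embedding
    is mapped to a scale (gamma) and shift (beta) that modulate features."""
    gamma = W_gamma @ embedding                  # (C,)
    beta = W_beta @ embedding                    # (C,)
    return features * gamma[:, None, None] + beta[:, None, None]

C = 16
features = rng.standard_normal((C, 32, 32))      # feature maps (C, H, W)
W_gamma = rng.standard_normal((C, k)) * 0.1      # hypothetical learned weights
W_beta = rng.standard_normal((C, k)) * 0.1
emb = psf_embedding(psf_bank[0].reshape(33, 33))
out = sft_modulation(features, emb, W_gamma, W_beta)
```

Because the modulation depends on the estimated PSF, the same reconstruction network can adapt its behavior to whatever aberration is present in the input frame.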

 

Compared with super-resolution methods that do not incorporate aberration information, such as DFCAN, the SFT-DFCAN model benefits from the prior knowledge of the estimated PSF and aberrations, and can therefore recover biological structures with higher fidelity. In this study, the effectiveness of the proposed SFE-Net and SFT-DFCAN for aberration detection and correction was further verified in live-cell imaging. As shown in Fig. 4, a combination of defocus and coma aberrations (a) and a combination of defocus and spherical aberrations (b) were manually introduced into the optical path during imaging, and the well-trained SFE-Net and SFT-DFCAN models were then employed to perform digital adaptive optics and super-resolution reconstruction on time-lapse experimental WF images.

 

Fig. 4. Digital Adaptive Optics and Super-resolution for Live-cell Imaging. Time-lapse WF images, PSFs estimated by SFE-Net, and corresponding SR images generated by SFE-Net-facilitated SFT-DFCAN for CCPs (a) and ER (b). During the imaging procedure, a combination of defocus and coma aberrations was manually added to the CCP data (a), while a combination of defocus and spherical aberrations was applied to the ER images (b). The PSFs estimated by SFE-Net, along with their corresponding profiles and FWHM values, are displayed in the top right corner of the SR images. Scale bar, 1 μm (a, b), and 0.2 μm (zoom-in regions of a-b).

 

The SFE-Net and SFT-DFCAN proposed in this study remove traditional adaptive optics' dependence on dedicated hardware, achieving high-sensitivity aberration detection and correction and improved image resolution entirely through computation. In the future, this technology is expected to be applied in a variety of adaptive optical imaging systems, from microscopic imaging to astronomical observation.