
- Advanced Photonics Nexus
- Vol. 4, Issue 4, 046001 (2025)
1 Introduction
Maximizing information extraction is the fundamental concept of computational imaging. Numerous methods have been introduced to realize this vision by exploiting information inaccessible to traditional methods, thereby enhancing imaging dimensions,1 speed,2 accuracy,3 and applicable scenarios,4,5 or reducing imaging costs.6 Many computational imaging methods, when initially proposed, were limited to using bright-field images for reconstruction. Recent efforts have therefore concentrated on optimizing physical models by incorporating dark-field images, which provide abundant additional information and improved imaging resolution.7,8
As a native computational microscopy approach capable of efficiently utilizing both bright-field and dark-field images, Fourier ptychographic microscopy (FPM) provides high resolution9,10 and large field of view (FOV)11 simultaneously by capturing angle diversity and expanding information within the Fourier space. By integrating its quantitative phase imaging capabilities,12,13 FPM demonstrates considerable potential for label-free imaging applications. With ongoing advancements, FPM gradually increases the acquisition speed13
FPM effectively extracts high-frequency information from images captured under high-angle illumination. However, acquiring this information is challenging because it often resides in the low-exposure dark-field regions,37 where the signal-to-noise ratio (SNR) deteriorates as the illumination angle increases. This degradation limits the achievable illumination range and reduces the available information, ultimately compromising reconstruction quality.38 As a result, expanding the illumination array beyond a certain limit provides diminishing returns, making it unnecessary to use an excessively large illumination array in experiments. Therefore, obtaining high-quality dark-field information is essential for achieving higher-resolution FPM reconstructions.
To address these challenges, high dynamic range (HDR) techniques have been widely adopted since the introduction of FPM, enabling enhanced processing of both bright-field and dark-field data by suppressing saturation errors, reducing noise, and achieving a more uniform background.11,39 However, determining the optimal exposure time for dark-field images remains a persistent challenge. Underexposed data lack sufficient high-frequency information, whereas overexposed data can introduce model mismatches that degrade reconstruction accuracy. Despite these improvements, HDR methods inherently rely on linear combinations of multi-exposure data, which oversimplify the intensity variations, particularly in dark-field regions. This often results in intensity inversion artifacts and loss of critical details in the reconstructed images, especially when handling overexposed data. Moreover, the effectiveness of HDR is highly dependent on fine-tuning exposure parameters for different samples and systems, increasing experimental complexity and reducing reproducibility.
In response, several alternative methods have been proposed, such as acquiring bright-field and dark-field images separately with varying exposure times, a strategy commonly employed with 8-bit cameras to extend the dynamic range. However, these approaches often rely on basic linear combinations or direct truncation of image information,40,41 which fail to fully utilize the complex information embedded in multi-exposure data. Although these methods can improve image quality to a certain extent, they lack the ability to efficiently capture and integrate high-frequency details, especially in dark-field regions. Furthermore, their simplistic operations may result in performance that is even inferior to HDR in certain scenarios, particularly in applications where high resolution and phase fidelity are critical. Consequently, these approaches are generally suitable only in scenarios where maximizing resolution is not the primary objective.
Despite the current limitations in utilizing dark-field data, it still provides valuable information and serves critical functions, particularly in complex scenarios such as biological specimens with intricate structural details. In these cases, the importance of dark-field information becomes even more pronounced. However, traditional methodologies face significant constraints in efficiently processing multiple sets of data, especially overexposed images, leading to suboptimal utilization of available information. This highlights the urgent need for innovative algorithms that can effectively leverage dark-field data to enhance reconstruction quality.
A promising solution involves incorporating image fusion techniques, particularly those designed to process dark-field images. Recent advancements in this field, especially the emergence of deep learning-based methods such as convolutional neural networks (CNNs), have enabled more efficient and versatile image fusion. These methods facilitate the seamless integration of information from multiple exposures, including dark-field regions, thereby enhancing the overall reconstruction performance.42,43 The synergy between image fusion techniques and computational microscopy underscores the considerable potential of these approaches in improving FPM reconstruction.
In this work, we propose a multi-exposure image fusion (MEIF) preprocessing framework based on fully connected CNNs and integrate it into the FPM pipeline. This approach effectively introduces CNNs into the FPM algorithm as a preprocessing stage, enabling efficient fusion of images captured at different exposure times and enhancing the information available for reconstruction. We validated the effectiveness of this framework using both traditional FPM and feature-domain FPM reconstruction algorithms, demonstrating notable improvements in intensity and phase information.
The MEIF framework excels in integrating multi-exposure data, particularly enhancing dark-field images and enriching the overall information for optimal reconstruction. Compared with conventional HDR methods based on linear truncation and combination, MEIF exhibits superior performance in extracting and utilizing dark-field information, especially from overexposed images. In addition, by leveraging feature-domain techniques and CNNs, MEIF extracts more complex and meaningful information, producing sharper images, richer textures, clearer details, and more accurate phase reconstructions. Quantitative analysis further highlights the superiority and versatility of MEIF over traditional HDR methods.
A notable advantage of MEIF is its exceptional generalization capability. Unlike many CNNs used in FPM, MEIF does not require additional training as its models are exclusively trained on publicly available datasets unrelated to microscopy imaging. This ensures robust generalization across diverse systems and samples, making it compatible with various microscopy imaging devices without retraining. This high level of generalization not only enhances reconstruction outcomes but also simplifies the complex preprocessing workflows typically required in HDR-based approaches.
We present the structure and function of MEIF and FPM reconstruction algorithms, and outline the complete MEIF-FPM pipeline in Sec. 2. In Sec. 3, we demonstrate the significant effectiveness of MEIF. Section 4 discusses the robust applicability of MEIF and compares it with HDR. Finally, in Sec. 5, we summarize the effects and functions of MEIF and discuss broader applications of image fusion.
2 Methods
2.1 Multi-exposure Image Fusion
Inspired by a general image fusion framework based on CNNs,44 we present a universal CNN-based image fusion model capable of handling various types and quantities of input images. The network simultaneously processes multiple sets of data with different exposure times and generates a single fused multi-exposure image as output. It comprises three modules: a feature extraction module, a feature fusion module, and a feature reconstruction module, as illustrated in Fig. 1.
Figure 1. MEIF network framework based on CNNs. During each processing step, images with different exposure times under the same illumination angle are input for fusion. The procedure iterates through all illumination angles to process all original images. The input accepts three or more grayscale images.
In the feature extraction module, we use two convolutional layers to extract low-level features from input images. The first layer (CONV 1) is from a pretrained ResNet 101 on ImageNet, with 64 convolutional kernels of size
The feature fusion module employs an element-wise fusion rule to fuse convolutional features from multiple inputs. To maximize information content in the fused multi-exposure image, we choose the element-wise maximum.
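The element-wise maximum rule is simple to state precisely; the following is a minimal numpy sketch (the function name is illustrative, not from the authors' code):

```python
import numpy as np

def fuse_feature_maps(feature_stacks):
    """Element-wise maximum fusion across exposures.

    feature_stacks: list of feature arrays of identical shape, one per
    exposure. Returns a single fused array of the same shape, keeping
    the strongest response at every position.
    """
    stacked = np.stack(feature_stacks, axis=0)
    return stacked.max(axis=0)
```

Taking the per-element maximum (rather than an average) preserves the strongest activation from any exposure, which is what allows weak dark-field responses from long exposures to survive into the fused result.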
In the feature reconstruction module, two additional convolutional layers (CONV 3 and CONV 4) reconstruct the fused image from the convolutional features. CONV 3 fine-tunes the features with parameters matching CONV 2, and CONV 4 reconstructs the feature maps into an output using element-wise weighted averaging with a kernel size and number of
Embedding the MEIF network into the FPM imaging pipeline finalizes the comprehensive imaging process illustrated in Fig. 2(a). This process incorporates MEIF as a crucial component to maximize information content, resulting in improved reconstruction performance. For more detailed information on the structure and parameters, please refer to Supplement 1 in the Supplementary Material.
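Schematically, embedding MEIF into the pipeline amounts to grouping the multi-exposure captures by illumination angle and fusing each group before reconstruction. A minimal sketch, with illustrative names rather than the released implementation:

```python
import numpy as np

def meif_preprocess(raw, fuse):
    """Fuse each per-angle exposure stack into one image for FPM.

    raw:  dict mapping LED index -> list of exposure images (H, W)
          captured under the same illumination angle.
    fuse: callable taking a list of images and returning one fused image
          (e.g., the MEIF network, or any fusion rule).
    Returns a dict of LED index -> fused image, ready for reconstruction.
    """
    return {led: fuse(stack) for led, stack in raw.items()}
```

Because the fusion is applied independently per illumination angle, the downstream FPM solver is unchanged; it simply consumes one (fused) image per LED instead of one single-exposure image per LED.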
Figure 2.MEIF-FPM full pipeline. (a) Overview of the entire pipeline, illustrating the process of capturing multi-exposure images, grouping them based on illumination angles, performing MEIF on sets of multi-exposure images with the same illumination angle, and finally obtaining the MEIF results for FPM reconstruction. (b) Model of raw data acquisition, where samples are illuminated at different angles by an LED array, and the imaging system collects multiple intensity images. For MEIF-FPM, multi-exposure image acquisition is crucial for MEIF. (c) Traditional FPM reconstruction approach incorporates modulus constraints and support constraints, and conducts Fourier space updates iteratively. (d) The reconstruction strategy of FD-FPM involves iteratively recovering information extracted after feature extraction, resembling the principles outlined in panel (c), for intensity and phase recovery during the iteration process. This iterative process comprises six steps, indicated as (i) to (vi) along the way.
2.2 Traditional FPM Reconstruction
FPM acquires images of samples illuminated at different angles provided by sequentially illuminating an LED array. Under the conditions of thin samples and the plane wave approximation, its imaging model can be described using Fourier optics.22 In this model, an image illuminated by an oblique plane wave with a wavevector
The camera exclusively captures low-resolution intensity measurements, whereas the phase and super-resolution outcomes are attained through reconstruction. Traditional reconstruction algorithms are typically carried out using a combination of an alternating projection algorithm45 and an embedded pupil function recovery algorithm.11,22 During the reconstruction process shown in Fig. 2(c), the complex transmittance of the sample is updated in the Fourier domain, where the measured intensity data in the spatial domain serves as modulus constraints, and the finite size of the pupil aperture serves as support constraints. It iteratively updates each subregion of the complex spectrum
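The alternating-projection update can be sketched as follows. This is a minimal numpy illustration assuming an idealized binary pupil and omitting the embedded pupil function recovery step; `fpm_update` and its arguments are illustrative names, not the authors' implementation:

```python
import numpy as np

def fpm_update(spectrum, pupil, measurements, centers, n_iter=10):
    """Simplified alternating-projection FPM loop (sketch).

    spectrum:     high-res complex Fourier spectrum estimate (H, W)
    pupil:        pupil mask (h, w), here assumed binary
    measurements: list of low-res intensity images (h, w), one per LED
    centers:      list of (row, col) top-left corners of each LED's
                  sub-spectrum in the high-res spectrum
    """
    h, w = pupil.shape
    for _ in range(n_iter):
        for I_meas, (r, c) in zip(measurements, centers):
            # support constraint: crop the sub-spectrum seen through the pupil
            sub = spectrum[r:r + h, c:c + w] * pupil
            low = np.fft.ifft2(np.fft.ifftshift(sub))
            # modulus constraint: keep the phase, impose the measured magnitude
            low = np.sqrt(I_meas) * np.exp(1j * np.angle(low))
            new_sub = np.fft.fftshift(np.fft.fft2(low))
            # write the updated values back inside the pupil support only
            region = spectrum[r:r + h, c:c + w]
            spectrum[r:r + h, c:c + w] = region * (1 - pupil) + new_sub * pupil
    return spectrum
```

Each LED's measurement thus constrains a different, overlapping sub-region of the Fourier spectrum, and iterating over all LEDs stitches these constraints into a synthetic aperture larger than the objective's own.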
2.3 Feature-Domain FPM Reconstruction
The recently proposed FD-FPM algorithm34,35 shares the same forward physical model and synthetic aperture concept as traditional FPM but markedly varies in the reconstruction procedure. The distinction lies in its incorporation of feature-domain information from images. By enhancing the forward model and fine-tuning the loss function, the framework enables a more nuanced utilization of data, leading to enhanced image reconstruction outcomes. As depicted in Fig. 2(d), the iterative process of FD-FPM has transitioned from the conventional FPM to a six-step iterative approach, leveraging feature extraction and optimization techniques.
- 1.The model generates a series of predicted images based on the current estimates of sample complex amplitude and pupil function parameters.
- 2.The feature extractor filters the predicted images and their corresponding observed images to generate feature mappings.
- 3.Compute the feature domain error between the model predictions and observed values.
- 4.Backpropagate the error to obtain the complex gradient.
- 5.The complex gradient is managed by an optimizer with the potential first and second-order moments.
- 6.Update model parameters.
In essence, FD-FPM minimizes the
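Steps 2 and 3 of the loop above can be illustrated with a toy feature extractor. FD-FPM's actual extractor is more sophisticated; the finite-difference gradients below are only a stand-in to show what "error in the feature domain" means:

```python
import numpy as np

def gradient_features(img):
    """First-order finite-difference features (a stand-in extractor)."""
    gx = np.diff(img, axis=1, prepend=img[:, :1])
    gy = np.diff(img, axis=0, prepend=img[:1, :])
    return gx, gy

def feature_domain_error(pred, obs):
    """L1 error between gradient features of predicted and observed images."""
    px, py = gradient_features(pred)
    ox, oy = gradient_features(obs)
    return np.abs(px - ox).sum() + np.abs(py - oy).sum()
```

Comparing gradients rather than raw intensities makes the loss insensitive to smooth background and brightness offsets between predicted and observed images, which is part of what gives feature-domain reconstruction its robustness.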
3 Results
3.1 Multi-exposure Image Fusion of Raw Data
The optimal performance of MEIF is achieved by merging five different exposures in this study. We designate the normal exposure as exposure value 0 (EV 0), with images acquired at EV −1, EV 0, EV +1, EV +2, and EV +4, which are subsequently fused. As different samples vary in thickness and color, their normal exposure times differ. To establish a universally applicable exposure metric across different samples, we defined this relative exposure parameter, where the normal exposure is determined through automatic exposure. This approach ensures consistent performance across all tested samples. The specific exposure times are provided in Table S2 in the Supplementary Material.
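Assuming the usual photographic convention that each EV step doubles the exposure, the bracket used here maps to absolute times as t = t0 · 2^EV, where t0 is the auto-exposure ("EV 0") time. A sketch (the actual times used per sample are those in Table S2):

```python
def exposure_times(t0_ms, evs=(-1, 0, 1, 2, 4)):
    """Map relative exposure values to absolute times: t = t0 * 2**EV.

    t0_ms: auto-exposure (EV 0) time in milliseconds.
    evs:   the bracket used in this work (EV -1, 0, +1, +2, +4).
    """
    return [t0_ms * 2 ** ev for ev in evs]
```

Defining the bracket relative to an auto-exposed baseline is what makes the same EV set reusable across samples of different thickness and color.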
When focusing on the raw data for EV 0, as depicted in Fig. 3(a), the image is distinctly divided into bright and dark fields, outlined by the numerical aperture (NA). In addition, due to restrictions imposed by the NA and the pupil, certain images exhibit nearly circular regions of semi-brightness and semi-darkness. The dark-field signal is clearly weak, indicating a lack of information in this region. In Fig. 3(b), MEIF demonstrates a significant enhancement in the dark field of the raw data. Figures 3(c1)–3(c3) show the differences between the normally exposed images and those processed by MEIF at three distinct positions selected from the center to the periphery. The enhancement of dark-field information is substantial: at the same locations, single-exposure images appear almost entirely black with scarce details, whereas MEIF reveals the high-frequency details at the image edges much more clearly. Moreover, the bright-field images remain unaffected by overexposure artifacts owing to the robustness of the network, confirming that our simple exposure strategy is both effective and safe.
Figure 3.Comparison between raw data from normal exposure and MEIF results. (a) Stitched image of raw data from normal exposure based on illumination angles. (b) Stitched image of MEIF results based on illumination angles. (c1)–(c3) Comparison of representative illumination angles between normal exposure (left) and MEIF images (right), with relative positions marked by colored rectangles in panels (a) and (b).
The lack of dark-field information in the normal-exposure data poses a considerable challenge for subsequent recovery. After MEIF, both the bright-field and dark-field images undergo certain changes. As mentioned earlier, the new bright-field images exhibit only adjustments in brightness and contrast, free from interference by overexposure artifacts, whereas the information in the dark field is substantially amplified and extracted. The improvement is especially apparent when the original data are amplified to the exposure level of the MEIF output, at which point their inherent noise and incompleteness become significantly more pronounced (see Fig. S5 in the Supplementary Material). Therefore, the reconstruction results obtained with MEIF demonstrate significant enhancements in both intensity and phase compared with the original data, as further illustrated by the USAF resolution target and biological sample analyses.
3.2 USAF Resolution Target
The USAF resolution target, a widely used sample, plays a crucial role in demonstrating the quantitative effectiveness of the algorithm. The raw data presented in Fig. 3 were collected from a
Figure 4(a1) presents the slicing-free recovery results obtained using FD-FPM and MEIF. Focusing on the region highlighted by the orange box, Fig. 4(a2) provides a zoomed-in view. The HDR method, however, fails to effectively utilize the intensity variations introduced by overexposure, resulting in detrimental effects on reconstruction. As shown in Fig. 4(a3), HDR suffers from intensity inversion due to its inability to properly handle overexposed data, leading to misalignment and visible artifacts, which are evident in Fig. 4(a4). By contrast, MEIF preserves the integrity of the reconstruction by avoiding the intensity inversion caused by overexposure, as further demonstrated by additional analysis in Supplement 1 in the Supplementary Material.
Figure 4.Reconstruction results of the USAF target. (a1) Whole slide imaging (WSI) reconstruction with MEIF; (a2) zoomed-in view of the MEIF reconstruction; (a3) zoomed-in view with HDR; (a4) quantitative distribution corresponding to the lines in (a2) and (a3). (b1)–(b4) Phase reconstruction results with MEIF algorithm, magnified views, and the quantitative distribution along the indicated lines. (c1)–(c4) Phase reconstruction results with HDR algorithm, magnified views, and the quantitative distribution along the indicated lines.
Although the intensity reconstruction results of MEIF may be slightly inferior to traditional HDR methods, with a minor difference in the visible groups in the central region, MEIF ensures the continuity and accuracy of the reconstruction. By preventing intensity inversion, MEIF maintains spectral continuity and accuracy, which is more critical than merely improving intensity resolution. Furthermore, by effectively utilizing the high-frequency information embedded in overexposed data, MEIF significantly enhances phase information.
The improvement in phase reconstruction is even more pronounced. MEIF provides a substantial enhancement in phase resolution, as demonstrated in the comparisons between Figs. 4(b1), 4(b2) and 4(c1), 4(c2), where the phase contrast is noticeably improved. This confirms that the MEIF algorithm more effectively leverages the information provided by high-angle illumination.
Comparisons between Figs. 4(b3), 4(b4) and 4(c3), 4(c4) further illustrate that the phase recovery achieved by MEIF is much clearer, delivering unparalleled phase details, particularly in more complex samples. This advantage becomes even more apparent in subsequent tests conducted with biological specimens.
3.3 Biological Tissues
The ultimate goal of computational microscopy is to enhance performance in biomedical research, delivering practical benefits for real-world applications. Consequently, a simple USAF resolution target is insufficient for comprehensive validation. To demonstrate the advantages of MEIF in realistic scenarios, we evaluated its performance using biological samples. Specifically, we used the onion epidermis as the test sample and compared the intensity and phase results obtained by MEIF with those from the traditional HDR method. The raw data were acquired using a low-magnification objective (4×, 0.1 NA, Nikon, Tokyo, Japan), and the detailed experimental parameters are provided in Table S2 in the Supplementary Material.
Figure 5(a) illustrates the slicing-free WSI reconstruction results of the onion epidermis with different preprocessing methods. Figure 5(b) presents the corresponding phase reconstruction of the same region. It is evident that the MEIF method provides the clearest details and contrast, both in intensity and phase images. To facilitate a closer examination, we carefully selected three information-rich regions of interest (ROIs) for comparison. By comparing the reconstructed results with the ground truth obtained using a high-magnification objective, we observed that the single HDR algorithm offers more detailed results than single-exposure reconstructions. However, MEIF produced significantly clearer intensity results with higher contrast, revealing more distinct shapes of cell nuclei and certain cellular textures, which closely resemble the ground truth. This demonstrates the efficiency and accuracy of the MEIF algorithm.
Figure 5.Reconstruction results of the onion epidermis. (a) WSI intensity reconstruction; (b) WSI phase reconstruction; (c1, d1, e1) Ground truth for ROIs 1 to 3; (c2 to c7, d2 to d7, e2 to e7) Amplitude and phase reconstruction results for ROIs 1 to 3; (e8 to e14) Quantitative distribution for the line-scan regions in ROI 3 (e1 to e7), where the horizontal coordinate is 0 to
To quantitatively validate the superiority of MEIF, we isolated a specific texture within ROI 3d for detailed comparison. Notably, only the MEIF algorithm was able to reveal the clear undulations in this region, demonstrating its ability to capture subtle biological textures, as shown in Figs. 5(e10) and 5(e3). Such fine details remain undetectable using both the HDR and single-exposure algorithms [see Figs. 5(e2), 5(e8), 5(e4), 5(e10)]. Furthermore, to confirm the effectiveness of these enhanced details, we compared the reconstructed results with images acquired using a high-magnification objective (20×, 0.4 NA), serving as ground truth. As evident from the quantitative comparison figures [Figs. 5(e8)–5(e11)], only the MEIF reconstruction successfully recovered the intricate details captured by the 20× objective, with even more pronounced contrast in some cases. This is attributed to the preservation of the 4× objective’s larger depth of field in MEIF reconstructions, which is significantly greater than that of the 20× objective. This inherent advantage of FPM imaging facilitates a more comprehensive and accurate observation of biological samples.
Phase reconstruction of biological samples is a critical aspect of our study, and MEIF demonstrates superior performance in this domain. Using the same color bar to compare phase images obtained from the three methods, it is evident that MEIF consistently delivers sharper details, both in the full image and the zoomed-in ROIs. Notably, MEIF distinctly captures the central cell nucleus and surrounding structures in ROI 2, as well as the longitudinal textures in ROI 3. By contrast, the HDR and single-exposure methods produce blurred and indistinct features, which are effectively mitigated by MEIF. These results underscore the substantial improvements that MEIF offers in phase recovery, enabling the accurate preservation of intricate biological structures.
4 Discussion
4.1 Multi-exposure Image Fusion and Other Neural Network-Assisted Algorithms
Neural networks have become widely used in computational microscopy; however, they face inherent limitations that hinder broad generalization. The two primary challenges are their lack of robust generalizability46,47 and their inability to efficiently process a large number of input images simultaneously.46
MEIF addresses these challenges by focusing on the processing stages rather than the reconstruction stage, thereby circumventing the need to input large datasets into neural networks. Many other studies have tackled this issue through alternative imaging algorithms or by entirely bypassing the reconstruction stage.48,49 Moreover, this approach eliminates the reliance on specific post-processing steps that often require sample-specific training, a practice that tends to cause overfitting to similar or even identical samples, ultimately limiting generalizability to diverse sample types.49
The use of two convolutional layers as feature extractors—one of the most versatile and widely adopted image fusion techniques44—combined with carefully selected processing stages, provides an effective and innovative approach for image fusion in computational microscopy. MEIF achieves remarkable generalization without requiring training on any microscopic imaging data, ensuring unbiased performance across diverse sample types.
As demonstrated earlier, MEIF produced outstanding results with both biological samples and the USAF resolution target. To further validate its robustness, Fig. 6 presents a more challenging scenario involving a
Figure 6.FD-FPM reconstruction results of animal connective tissue: (a1)–(a4) Stitching-free reconstruction results after MEIF processing, where (a1) and (a2) represent the whole block recovery of intensity and phase results, respectively. (a3), (a4) Zoomed-in results of the ROIs, which are circled in the images. Similarly, (b1), (b2), (b3), and (b4) are the stitching-free recovery results for intensity and phase, along with zoomed-in ROI results. (c) The results directly captured by a higher-resolution objective (20×/0.75 NA). The reconstruction data are acquired using a lower-resolution (4×/0.1 NA, Nikon) objective.
For further comparison, we applied conventional FPM reconstruction to the same dataset, and the results (Fig. S6 in the Supplementary Material) show that MEIF consistently outperforms traditional methods, reaffirming its strong adaptability and generalization capability.
By comparing Figs. 6(a3), 6(b3), and 6(c), it is evident that Fig. 6(a3) exhibits the sharpest and most detailed edges. This once again confirms the reliability of MEIF and its ability to significantly enhance information content. The phase results in Figs. 6(a4) and 6(b4) further support this observation: the subdued phase in Figs. 6(b2) and 6(b4) stands in stark contrast to Figs. 6(a2) and 6(a4), where details and textures within the tissue are clearly visible in the phase obtained through MEIF. Similarly, Figs. 6(a1), 6(a3), 6(b1), and 6(b3) highlight the edges of the sample, showcasing a nonstitched FOV; the combination of MEIF and FD-FPM provides sharper intensity and richer phase, representing significant improvements not observed in the other algorithms.
4.2 Validity and Effectiveness
The reconstruction in this paper is performed by merging the raw data from five different exposures: EV −1, EV 0, EV +1, EV +2, and EV +4. Experimental results have demonstrated the effectiveness of this exposure combination; however, the exposure values should be adjusted for each sample. At a minimum, the merge requires a set of normally exposed images, slightly overexposed images, and strongly overexposed images. This ensures strong continuity in the data after element-wise maximum fusion and an excellent dark-field signal. In other words, all operations must align with our original intention of enhancing dark-field signals, ensuring signal richness while maintaining signal continuity to the greatest extent possible; this is what allows the algorithm to maximize information content. A quantitative rule for exposure selection remains challenging, and the specific choice should be guided by the experimental environment and results.
4.3 Limitations and Challenges
In Sec. 3, we highlighted the advantages of MEIF as a preprocessing module, including its improvements in image intensity, phase recovery, and robust generalizability. However, these benefits come at a cost. Compared with conventional methods, MEIF requires acquiring more data and longer exposure times, which significantly reduces the temporal resolution of the FPM system. Specifically, although a single acquisition set can be completed in
Moreover, although MEIF demonstrates substantial improvements in intensity for biological samples, its advantages are less pronounced for simpler targets such as the USAF resolution chart. While MEIF effectively mitigates issues such as inversion and misalignment (as shown in Fig. S4 in the Supplementary Material), it offers limited enhancement in the uniformity of low-frequency information and does not extend the upper limit of high-frequency resolution. This suggests that MEIF may be less effective for samples dominated by low-frequency information, where traditional methods can achieve comparable results.
5 Conclusion and Outlook
Experimental results have shown that both traditional FPM (see Supplement 1 in the Supplementary Material) and FD-FPM methods are noticeably influenced by MEIF. MEIF significantly enhances both the intensity and phase information in the reconstructed images.
At the same time, MEIF also has some limitations and challenges, such as longer image acquisition times and less-than-ideal imaging performance for certain simple samples. We have observed that the effectiveness of MEIF lies not only in the efficient utilization of information but also in its capability to combine that information into a suitable imaging model. Consequently, future work on single-frame preprocessing and brightness allocation correction is highly anticipated as these improvements may help achieve better reconstruction results without increasing the acquisition time. In addition, investigating and addressing the fundamental causes of these imaging shortcomings represent a promising direction for further research.
However, the achievements of MEIF represent only an initial step. More importantly, it introduces, to our knowledge for the first time, a highly generalizable CNN-based image fusion module into computational microscopy, thereby extending the imaging pipeline and increasing overall data throughput. We anticipate breakthroughs in related microscopy fields in the future.
In the domain of image fusion, particularly with MEIF, specialized models may emerge. One promising direction is to develop a multi-exposure fusion framework tailored to specific samples, potentially enhancing both speed and resolution. Another avenue is to explore image fusion across different dimensions, such as extending depth of field by combining multiple focal planes, which may alleviate current limitations in depth-of-field extension. Similar approaches may also benefit ptychography in microscopy, crystal diffraction imaging, and remote sensing. Given these prospects, although our current focus is on improving FPM imaging quality with MEIF, we believe this universal method could pave the way for maximizing information content in future research.
Zhiping Wang is currently pursuing an MS degree in physics of life at the Biozentrum, University of Basel, Switzerland. He received his bachelor’s degree in physics from Lanzhou University, China, in 2024. His current research focuses on computational imaging and biophysics.
Tianci Feng is a PhD student in optics at the Xi’an Institute of Optics and Precision Mechanics (XIOPM), Chinese Academy of Sciences (CAS), China. He received his bachelor’s degree in mechanical engineering, Sichuan Agricultural University, China, in 2021. His current research focuses on Fourier ptychographic microscopy.
Aiye Wang is a PhD student in optics at the XIOPM, CAS, China. He received his bachelor’s degree in communication engineering, Soochow University, China, in 2020. His current research focuses on Fourier ptychographic microscopy.
Jinghao Xu is a PhD student in optics at the XIOPM, CAS, China. He received his bachelor’s degree in mechanical engineering and automation from Xi’an Jiaotong University, China, in 2021. His current research focuses on Fourier ptychographic microscopy.
An Pan is an associate professor and a principal investigator at the XIOPM, CAS, China, and the head of the Pioneering Interdiscipline Center of the State Key Laboratory of Transient Optics and Photonics. He received his bachelor's degree in electronic science and technology from Nanjing University of Science and Technology (NJUST), China, in 2014, and obtained his PhD in optical engineering at the XIOPM, CAS, China, in 2020. He was a visiting graduate at Bar-Ilan University, Israel, in 2016 and at the California Institute of Technology (Caltech), United States, from 2018 to 2019. His current research focuses on computational optical imaging and biophotonics, and he is among the first to work on Fourier ptychography. He was selected as a 2024 Optica Ambassador and was the winner of the 2021 Forbes China 30 Under 30 List, 2021 Excellent Doctoral Dissertation of CAS, 2020 Special President Award of CAS, 2019 OSA Boris P. Stoicheff Memorial Scholarship, the 1st Place Poster Award of the 69th Lindau Nobel Laureate Meetings in Germany (Lindau Scholar), and 2017 SPIE Optics and Photonics Education Scholarship. He has published 40 peer-reviewed journal papers and is a referee for more than 40 peer-reviewed journals. He is an early career member of Optica and SPIE.
References
[10] Z. F. Phillips, R. Eckert, and L. Waller, "Quasi-dome: a self-calibrated high-NA LED illuminator for Fourier ptychography," IW4E–5 (2017).
[31] A. Pan et al., "In situ correction of liquid meniscus in cell culture imaging system based on parallel Fourier ptychographic microscopy (96 eyes)" (2019).
[41] E. Reinhard et al., High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting (2010).
