Advanced Photonics Nexus, Vol. 4, Issue 4, 046001 (2025)
Zhiping Wang1,2,†, Tianci Feng1,3, Aiye Wang1,3, Jinghao Xu1,3, and An Pan1,3,*
Author Affiliations
  • 1Chinese Academy of Sciences, Xi’an Institute of Optics and Precision Mechanics, State Key Laboratory of Transient Optics and Photonics, Xi’an, China
  • 2Lanzhou University, School of Physical Science and Technology, Lanzhou, China
  • 3University of Chinese Academy of Sciences, Beijing, China
    DOI: 10.1117/1.APN.4.4.046001
    Zhiping Wang, Tianci Feng, Aiye Wang, Jinghao Xu, and An Pan, "Fusion-based enhancement of multi-exposure Fourier ptychographic microscopy," Adv. Photon. Nexus 4, 046001 (2025)

    Abstract

    Fourier ptychographic microscopy (FPM) is an innovative computational microscopy approach that enables high-throughput imaging with high resolution, wide field of view, and quantitative phase imaging (QPI) by capturing both bright-field and dark-field images. However, effectively utilizing dark-field intensity images, including both normally exposed and overexposed data, which contain valuable high-angle illumination information, remains a complex challenge. Successfully extracting and applying this information could significantly enhance phase reconstruction, benefiting processes such as virtual staining and QPI. To address this, we introduce a multi-exposure image fusion (MEIF) framework that optimizes dark-field information by incorporating it into the FPM preprocessing workflow. MEIF increases the data available for reconstruction without requiring changes to the optical setup. We evaluate the framework using both feature-domain and traditional FPM, demonstrating substantial improvements in intensity resolution and phase information for biological samples that exceed those of conventional high dynamic range (HDR) methods. This image preprocessing-based information-maximization strategy fully leverages existing datasets and offers promising potential to drive advancements in fields such as microscopy, remote sensing, and crystallography.

    1 Introduction

    Maximizing information extraction is the fundamental concept of computational imaging. Numerous methods have been introduced to realize this vision by exploiting information inaccessible to traditional approaches, further enhancing imaging dimensions,1 speed,2 accuracy,3 and applicable scenarios,4,5 or reducing imaging costs.6 Many computational imaging methods, when initially proposed, were limited to using bright-field images for reconstruction. Recent efforts have concentrated on optimizing physical models by incorporating dark-field images to obtain richer information and achieve improved imaging resolution.7,8

    As a computational microscopy approach natively capable of efficiently utilizing both bright-field and dark-field images, Fourier ptychographic microscopy (FPM) provides high resolution9,10 and a large field of view (FOV)11 simultaneously by capturing angle diversity and expanding information within the Fourier space. By integrating its quantitative phase imaging capabilities,12,13 FPM demonstrates considerable potential for label-free imaging applications. With ongoing advancements, FPM has gradually increased its acquisition speed13–15 and enabled high-speed imaging16,17 with the ability to correct system errors.18–22 It finds widespread application in the biomedical field and beyond,23 making noteworthy contributions to various areas, including digital pathology24–28 and drug screening.29–32 However, due to optical hardware limitations and the constraints of traditional algorithms, FPM still encounters some challenging issues. Traditional algorithms such as EPRY-FPM, constrained by wrinkle artifacts and phase curvature,19,33 were forced to reconstruct small sliced sections and then stitch them, introducing additional stitching artifacts into the full-FOV result. Fortunately, a groundbreaking algorithm has ingeniously tackled these challenges, igniting new avenues of possibility. This pioneering approach, termed feature-domain FPM (FD-FPM),34 refines the forward model, focuses on feature-domain information, introduces advanced optimizers, and redefines the loss function. Consequently, it not only overcomes the effects of nonideal factors such as vignetting but also elevates reconstruction quality. FD-FPM facilitates full-FOV reconstruction without slicing and stitching,35,36 thereby avoiding the stitching artifacts that traditional algorithms struggle with. Moreover, it demonstrates outstanding reconstruction performance under heightened noise, larger defocus distances, and more severe system errors.

    FPM effectively extracts high-frequency information from images captured under high-angle illumination. However, acquiring this information is challenging because it often resides in the low-exposure dark-field regions,37 where the signal-to-noise ratio (SNR) deteriorates as the illumination angle increases. This degradation limits the achievable illumination range and reduces the available information, ultimately compromising reconstruction quality.38 As a result, expanding the illumination array beyond a certain limit provides diminishing returns, making it unnecessary to use an excessively large illumination array in experiments. Therefore, obtaining high-quality dark-field information is essential for achieving higher-resolution FPM reconstructions.

    To address these challenges, high dynamic range (HDR) techniques have been widely adopted since the introduction of FPM, enabling enhanced processing of both bright-field and dark-field data by suppressing saturation errors, reducing noise, and achieving a more uniform background.11,39 However, determining the optimal exposure time for dark-field images remains a persistent challenge. Underexposed data lack sufficient high-frequency information, whereas overexposed data can introduce model mismatches that degrade reconstruction accuracy. Despite these improvements, HDR methods inherently rely on linear combinations of multi-exposure data, which oversimplify the intensity variations, particularly in dark-field regions. This often results in intensity inversion artifacts and loss of critical details in the reconstructed images, especially when handling overexposed data. Moreover, the effectiveness of HDR is highly dependent on fine-tuning exposure parameters for different samples and systems, increasing experimental complexity and reducing reproducibility.

    In response, several alternative methods have been proposed, such as acquiring bright-field and dark-field images separately with varying exposure times, a strategy commonly employed with 8-bit cameras to extend the dynamic range. However, these approaches often rely on basic linear combinations or direct truncation of image information,40,41 which fail to fully utilize the complex information embedded in multi-exposure data. Although these methods can improve image quality to a certain extent, they lack the ability to efficiently capture and integrate high-frequency details, especially in dark-field regions. Furthermore, their simplistic operations may result in performance that is even inferior to HDR in certain scenarios, particularly in applications where high resolution and phase fidelity are critical. Consequently, these approaches are generally suitable only in scenarios where maximizing resolution is not the primary objective.

    Despite the current limitations in utilizing dark-field data, it still provides valuable information and serves critical functions, particularly in complex scenarios such as biological specimens with intricate structural details. In these cases, the importance of dark-field information becomes even more pronounced. However, traditional methodologies face significant constraints in efficiently processing multiple sets of data, especially overexposed images, leading to suboptimal utilization of available information. This highlights the urgent need for innovative algorithms that can effectively leverage dark-field data to enhance reconstruction quality.

    A promising solution involves incorporating image fusion techniques, particularly those designed to process dark-field images. Recent advancements in this field, especially the emergence of deep learning-based methods such as convolutional neural networks (CNNs), have enabled more efficient and versatile image fusion. These methods facilitate the seamless integration of information from multiple exposures, including dark-field regions, thereby enhancing the overall reconstruction performance.42,43 The synergy between image fusion techniques and computational microscopy underscores the considerable potential of these approaches in improving FPM reconstruction.

    In this work, we propose a multi-exposure image fusion (MEIF) preprocessing framework based on fully connected CNNs and integrate it into the FPM pipeline. This approach effectively introduces CNNs into the FPM algorithm as a preprocessing stage, enabling efficient fusion of images captured at different exposure times and enhancing the information available for reconstruction. We validated the effectiveness of this framework using both traditional FPM and feature-domain FPM reconstruction algorithms, demonstrating notable improvements in intensity and phase information.

    The MEIF framework excels in integrating multi-exposure data, particularly enhancing dark-field images and enriching the overall information for optimal reconstruction. Compared with conventional HDR methods based on linear truncation and combination, MEIF exhibits superior performance in extracting and utilizing dark-field information, especially from overexposed images. In addition, by leveraging feature-domain techniques and CNNs, MEIF extracts more complex and meaningful information, producing sharper images, richer textures, clearer details, and more accurate phase reconstructions. Quantitative analysis further highlights the superiority and versatility of MEIF over traditional HDR methods.

    A notable advantage of MEIF is its exceptional generalization capability. Unlike many CNNs used in FPM, MEIF does not require additional training as its models are exclusively trained on publicly available datasets unrelated to microscopy imaging. This ensures robust generalization across diverse systems and samples, making it compatible with various microscopy imaging devices without retraining. This high level of generalization not only enhances reconstruction outcomes but also simplifies the complex preprocessing workflows typically required in HDR-based approaches.

    We present the structure and function of MEIF and FPM reconstruction algorithms, and outline the complete MEIF-FPM pipeline in Sec. 2. In Sec. 3, we demonstrate the significant effectiveness of MEIF. Section 4 discusses the robust applicability of MEIF and compares it with HDR. Finally, in Sec. 5, we summarize the effects and functions of MEIF and discuss broader applications of image fusion.

    2 Methods

    2.1 Multi-exposure Image Fusion

    Inspired by a general image fusion framework based on CNNs,44 we present a universal CNN-based image fusion model capable of handling various types and quantities of input images. The network simultaneously accepts multiple images of the same scene captured at different exposure times and outputs a single fused multi-exposure image. The network comprises three modules: a feature extraction module, a feature fusion module, and a feature reconstruction module, as illustrated in Fig. 1.


    Figure 1. MEIF network framework based on CNNs. During each processing step, images captured at different exposure times under the same illumination angle are input for fusion; the procedure iterates through all illumination angles to process all raw images. The input accepts three or more grayscale images (n×n) with multiple exposures. CONV 1 and CONV 2, each with 64 convolutional kernels, handle feature extraction and adjustment. Element-wise fusion follows, after which CONV 3 (64 kernels) and CONV 4 (1 kernel) reconstruct the image, resulting in a single-channel grayscale output. CONV 1 is pretrained and fixed during training. Because training uses only public datasets, no retraining is required for the MEIF task in computational microscopy systems.

    In the feature extraction module, we use two convolutional layers to extract low-level features from the input images. The first layer (CONV 1) is taken from a ResNet-101 pretrained on ImageNet, with 64 convolutional kernels of size 7×7; the parameters of CONV 1 are fixed during training. However, as the features from CONV 1 are not directly suitable for image fusion, we introduce a second convolutional layer (CONV 2) to adjust them. The kernel number and size of CONV 2 are set to 64 and 3×3, respectively, to match CONV 1.

    The feature fusion module employs an element-wise fusion rule to fuse convolutional features from multiple inputs. To maximize information content in the fused multi-exposure image, we choose the element-wise maximum.

    In the feature reconstruction module, two additional convolutional layers (CONV 3 and CONV 4) reconstruct the fused image from the convolutional features. CONV 3 fine-tunes the features with parameters matching CONV 2, and CONV 4 reconstructs the feature maps into the output by element-wise weighted averaging, using a single kernel of size 1×1. To mitigate overfitting and stabilize training, the intermediate layers (CONV 2 and CONV 3) use ReLU activation and batch normalization.
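    To make the architecture concrete, the following is a minimal PyTorch sketch of the network as we read it from the description above and Fig. 1. The layer names and sizes follow the text; details such as the stride-1 convolution, padding, and the grayscale-to-RGB replication feeding the pretrained CONV 1 are our assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

class MEIFNet(nn.Module):
    """Sketch of the MEIF fusion network (CONV 1-4, element-wise max)."""
    def __init__(self):
        super().__init__()
        # CONV 1: 7x7, 64 kernels copied from an ImageNet-pretrained
        # ResNet-101 and frozen; stride 1 (assumed) keeps the image size.
        resnet = models.resnet101(weights=models.ResNet101_Weights.DEFAULT)
        self.conv1 = nn.Conv2d(3, 64, 7, stride=1, padding=3, bias=False)
        self.conv1.weight.data.copy_(resnet.conv1.weight.data)
        self.conv1.weight.requires_grad = False
        # CONV 2 and CONV 3: 64 kernels of size 3x3, with batch
        # normalization and ReLU on these intermediate layers.
        self.conv2 = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1),
                                   nn.BatchNorm2d(64), nn.ReLU())
        self.conv3 = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1),
                                   nn.BatchNorm2d(64), nn.ReLU())
        # CONV 4: a single 1x1 kernel maps the features to one channel.
        self.conv4 = nn.Conv2d(64, 1, 1)

    def forward(self, exposures):
        # exposures: (B, E, n, n), E co-registered exposures of one angle.
        feats = [self.conv2(self.conv1(exposures[:, e:e + 1].repeat(1, 3, 1, 1)))
                 for e in range(exposures.shape[1])]
        fused = torch.stack(feats).max(dim=0).values  # element-wise maximum
        return self.conv4(self.conv3(fused))
```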

    Embedding the MEIF network into the FPM imaging pipeline finalizes the comprehensive imaging process illustrated in Fig. 2(a). This process incorporates MEIF as a crucial component to maximize information content, resulting in improved reconstruction performance. For more detailed information on the structure and parameters, please refer to Supplement 1 in the Supplementary Material.
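    As a hedged sketch of how such a preprocessing stage slots into the pipeline of Fig. 2(a), the loop below fuses the exposure set of each illumination angle before reconstruction; the function and variable names (e.g., meif_preprocess, raw_stack) are illustrative, not from the released code.

```python
import torch

def meif_preprocess(raw_stack, net):
    """Fuse the multi-exposure captures of every illumination angle.

    raw_stack: (num_leds, E, n, n) tensor holding E exposures per LED
               (e.g., EV -1, 0, +1, +2, +4).
    Returns a (num_leds, n, n) stack of fused images for FPM recovery.
    """
    net.eval()
    with torch.no_grad():
        fused = [net(group[None]).squeeze(0).squeeze(0) for group in raw_stack]
    return torch.stack(fused)
```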


    Figure 2. MEIF-FPM full pipeline. (a) Overview of the entire pipeline, illustrating the process of capturing multi-exposure images, grouping them based on illumination angles, performing MEIF on sets of multi-exposure images with the same illumination angle, and finally obtaining the MEIF results for FPM reconstruction. (b) Model of raw data acquisition, where samples are illuminated at different angles by an LED array, and the imaging system collects multiple intensity images. For MEIF-FPM, multi-exposure image acquisition is crucial for MEIF. (c) Traditional FPM reconstruction approach incorporates modulus constraints and support constraints, and conducts Fourier space updates iteratively. (d) The reconstruction strategy of FD-FPM involves iteratively recovering information extracted after feature extraction, resembling the principles outlined in panel (c), for intensity and phase recovery during the iteration process. This iterative process comprises six steps, indicated as (i) to (vi) along the way.

    2.2 Traditional FPM Reconstruction

    FPM acquires images of the sample under different illumination angles provided by sequentially lighting an LED array. Under the thin-sample and plane-wave approximations, its imaging model can be described using Fourier optics.22 In this model, the image formed under an oblique plane wave with wavevector $U_n = (k_{xn}, k_{yn})$ is

$$I_{U_n} = \left| \mathcal{F}^{-1}\{ \mathcal{F}[s(r)\exp(iU_n \cdot r)] \cdot \mathcal{F}[p(r)] \} \right|^2 = \left| \mathcal{F}^{-1}\{ S(u - U_n) \cdot P(u) \} \right|^2,$$

where $s(r)$ is the exit wave from a thin sample, $u = (k_x, k_y)$ is the coordinate in the spatial-frequency domain, $S(u) = \mathcal{F}\{s(r)\}$ is the Fourier spectrum of the sample, and $P(u) = \mathcal{F}\{p(r)\}$ is the pupil function of the imaging system.
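    This forward model maps directly to a few lines of NumPy. The sketch below simulates one low-resolution capture; conventions such as fftshift centering and the pixel position of $U_n$ are our assumptions.

```python
import numpy as np

def fpm_forward(S, P, center, m):
    """Simulate one low-resolution intensity image I_{U_n}.

    S:      centered high-resolution sample spectrum S(u).
    P:      (m, m) pupil function P(u), nonzero inside the NA circle.
    center: (row, col) pixel where the n'th LED shifts the DC term, i.e., U_n.
    m:      side length of the low-resolution image.
    """
    r0, c0 = center[0] - m // 2, center[1] - m // 2
    sub = S[r0:r0 + m, c0:c0 + m]                    # S(u - U_n)
    field = np.fft.ifft2(np.fft.ifftshift(sub * P))  # F^-1{S(u - U_n) P(u)}
    return np.abs(field) ** 2                        # I_{U_n}
```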

    The camera captures only low-resolution intensity measurements, whereas the phase and super-resolution results are attained through reconstruction. Traditional reconstruction algorithms typically combine an alternating projection algorithm45 with an embedded pupil function recovery algorithm.11,22 During the reconstruction process shown in Fig. 2(c), the complex transmittance of the sample is updated in the Fourier domain, where the measured intensity data in the spatial domain serve as modulus constraints and the finite size of the pupil aperture serves as a support constraint. The algorithm iteratively updates each subregion of the complex spectrum $S(u)$ under these constraints by comparing the computed image based on the current estimate of $S$ with the measurement. The loss function is generally expressed as

$$\mathcal{L}_{\mathrm{Traditional}}(S, P) = \sum_n \left\| \sqrt{I_n} - \left| \mathcal{F}^{-1}\{ S(u - U_n) \cdot P(u) \} \right| \right\|_2^2,$$

where $I_n$ is the $n$'th captured low-resolution (LR) image illuminated by the corresponding LED. The iterative process of traditional algorithms amounts to minimizing this loss function.
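    One spectrum update in the spirit of this alternating-projection scheme might look as follows; the PIE-style step size, the regularization constants, and the EPRY pupil update (omitted here) vary between implementations, so this is a sketch rather than the authors' exact algorithm.

```python
import numpy as np

def ap_update(S, P, I_n, center, m, alpha=1.0):
    """One modulus/support-constrained update of a sub-spectrum of S."""
    r0, c0 = center[0] - m // 2, center[1] - m // 2
    sub = S[r0:r0 + m, c0:c0 + m]
    psi_f = sub * P                              # pupil-filtered sub-spectrum
    psi = np.fft.ifft2(np.fft.ifftshift(psi_f))  # predicted low-res field
    # Modulus constraint: keep the phase, impose the measured amplitude.
    psi = np.sqrt(I_n) * psi / (np.abs(psi) + 1e-12)
    psi_f_new = np.fft.fftshift(np.fft.fft2(psi))
    # Support constraint: update only within the pupil aperture.
    step = alpha * np.conj(P) / (np.abs(P).max() ** 2 + 1e-12)
    S[r0:r0 + m, c0:c0 + m] = sub + step * (psi_f_new - psi_f)
    return S
```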

    2.3 Feature-Domain FPM Reconstruction

    The recently proposed FD-FPM algorithm34,35 shares the same forward physical model and synthetic aperture concept as traditional FPM but markedly varies in the reconstruction procedure. The distinction lies in its incorporation of feature-domain information from images. By enhancing the forward model and fine-tuning the loss function, the framework enables a more nuanced utilization of data, leading to enhanced image reconstruction outcomes. As depicted in Fig. 2(d), the iterative process of FD-FPM has transitioned from the conventional FPM to a six-step iterative approach, leveraging feature extraction and optimization techniques.

    1. The model generates a series of predicted images based on the current estimates of the sample's complex amplitude and the pupil function parameters.
    2. The feature extractor filters the predicted images and their corresponding observed images to generate feature maps.
    3. The feature-domain error between the model predictions and the observed values is computed.
    4. The error is backpropagated to obtain the complex gradient.
    5. The complex gradient is handled by an optimizer, potentially using first- and second-order moments.
    6. The model parameters are updated.

    In essence, FD-FPM minimizes the L1 distance (Manhattan distance) in the image feature domain, with the loss function given by

$$\mathcal{L}_{\mathrm{FD}}(S, P) = \sum_{n=1}^{N} \left\| K * \sqrt{I_n} - K * \left| \mathcal{F}^{-1}\{ S(u - U_n) \cdot P(u) \} \right| \right\|_1,$$

where $K$ is the invertible convolution kernel used for feature extraction and $*$ denotes convolution. This loss function has been verified to work well for various features. In our work, we use first-order image gradients (edges) as the features, with $K = [\partial_x, \partial_y]^{\mathrm{T}}$.
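    A minimal PyTorch sketch of this feature-domain L1 loss with finite-difference gradient kernels is given below; the exact kernels, normalization, and optimizer of FD-FPM are detailed in Refs. 34 and 35, so this is an illustration only. Backpropagating such a loss through the forward model yields the complex gradient of step 4, which an Adam-like optimizer with first- and second-order moments can then consume.

```python
import torch
import torch.nn.functional as F

def fd_loss(pred_amp, meas_amp):
    """L1 distance between gradient features, K = [d/dx, d/dy]^T.

    pred_amp, meas_amp: (N, 1, m, m) predicted and measured amplitude
    stacks over the N illumination angles.
    """
    kx = torch.tensor([[[[-1.0, 1.0]]]])    # horizontal finite difference
    ky = torch.tensor([[[[-1.0], [1.0]]]])  # vertical finite difference
    loss = 0.0
    for k in (kx, ky):
        loss = loss + (F.conv2d(pred_amp, k) - F.conv2d(meas_amp, k)).abs().sum()
    return loss
```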

    3 Results

    3.1 Multi-exposure Image Fusion of Raw Data

    The optimal performance of MEIF in this study is achieved by merging five different exposures. We designate the normal exposure as exposure value 0 (EV 0), with images acquired at EV −1, EV 0, EV +1, EV +2, and EV +4, which are subsequently fused. As different samples vary in thickness and color, their normal exposure times differ. To establish a universally applicable exposure metric across samples, we define this relative exposure parameter with the normal exposure determined through automatic exposure. This approach ensures consistent performance across all tested samples. The specific exposure times are provided in Table S2 in the Supplementary Material.
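    Assuming the standard photographic convention that one EV step doubles the exposure time relative to the EV 0 auto-exposure baseline, the ladder used here maps to exposure times as in the sketch below; the actual per-sample times are those listed in Table S2.

```python
def exposure_times(base_ms, evs=(-1, 0, 1, 2, 4)):
    # One EV step corresponds to a factor of 2 in exposure time.
    return [base_ms * 2.0 ** ev for ev in evs]

print(exposure_times(50.0))  # 50 ms baseline -> 25, 50, 100, 200, 800 ms
```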

    When focusing on the raw data at EV 0, as depicted in Fig. 3(a), the image is distinctly divided into bright and dark fields, outlined by the numerical aperture (NA). In addition, due to restrictions imposed by the NA and the pupil, certain images exhibit nearly circular regions of semi-brightness and semi-darkness. The signal in the dark field is clearly weak, indicating a lack of information in this region. In Fig. 3(b), MEIF demonstrates a significant enhancement of the dark field of the raw data. Figures 3(c1)–3(c3) show the differences between the normally exposed images and those processed by MEIF. We selected three distinct positions from the center to the periphery for illustration. The enhancement of information in the dark field is substantial: at the same locations, single-exposure images appear almost entirely black with scarce details, whereas MEIF reveals the high-frequency details at the image edges much more clearly. Moreover, the bright-field images remain unaffected by overexposure artifacts due to the robustness of the network, confirming that our simple exposure strategy is both effective and safe.


    Figure 3. Comparison between raw data from normal exposure and MEIF results. (a) Stitched image of raw data from normal exposure based on illumination angles. (b) Stitched image of MEIF results based on illumination angles. (c1)–(c3) Comparison of representative illumination angles between normal exposure (left) and MEIF images (right), with relative positions marked by colored rectangles in panels (a) and (b).

    The lack of dark-field information in the normally exposed data poses a considerable challenge for subsequent recovery. After MEIF, both the bright-field and dark-field images undergo certain changes. As mentioned earlier, the new bright-field images exhibit only adjustments in brightness and contrast, free from interference by overexposure artifacts, whereas the information in the dark field is substantially amplified and extracted. The improvement is especially apparent when the original data are amplified to the exposure level of the MEIF result, where their inherent noise and data incompleteness become significantly more pronounced (see Fig. S5 in the Supplementary Material). Therefore, the reconstruction results obtained with MEIF demonstrate significant enhancements in both intensity and phase compared with the original data, as further illustrated by the USAF resolution target and biological sample analyses.

    3.2 USAF Resolution Target

    The USAF resolution target, a widely used sample, plays a crucial role in demonstrating the quantitative effectiveness of the algorithm. The raw data presented in Fig. 3 were collected from a 4f system, with the objective lens set to 4× magnification and an NA of 0.1. Illumination was provided by a 17×17 LED array positioned 70 mm from the sample, with a 4 mm spacing between individual LEDs (see Supplement 1, Equipment 1 in the Supplementary Material).
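    From these stated parameters, one can bound the synthetic aperture with a small worked calculation, assuming the corner LED of the 17×17 array sets the largest illumination angle:

```python
import math

na_obj = 0.1             # 4x objective NA
pitch, dist = 4.0, 70.0  # LED pitch and array-to-sample distance (mm)
half = (17 - 1) // 2     # LEDs from the center to the array edge

offset = half * pitch * math.sqrt(2)         # corner-LED lateral offset (mm)
na_illu = offset / math.hypot(offset, dist)  # max illumination NA, ~0.54
print(f"synthetic NA <= {na_obj + na_illu:.2f}")  # ~0.64 (upper bound)
```

As noted in Sec. 1, the SNR of the outermost dark-field images limits how much of this nominal aperture is usable in practice.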

    Figure 4(a1) presents the slicing-free recovery results obtained using FD-FPM and MEIF. Focusing on the region highlighted by the orange box, Fig. 4(a2) provides a zoomed-in view. The HDR method, however, fails to effectively utilize the intensity variations introduced by overexposure, resulting in detrimental effects on reconstruction. As shown in Fig. 4(a3), HDR suffers from intensity inversion due to its inability to properly handle overexposed data, leading to misalignment and visible artifacts, which are evident in Fig. 4(a4). By contrast, MEIF preserves the integrity of the reconstruction by avoiding the intensity inversion caused by overexposure, as further demonstrated by additional analysis in Supplement 1 in the Supplementary Material.


    Figure 4. Reconstruction results of the USAF target. (a1) Whole slide imaging (WSI) reconstruction with MEIF; (a2) zoomed-in view of the MEIF reconstruction; (a3) zoomed-in view with HDR; (a4) quantitative distribution corresponding to the lines in (a2) and (a3). (b1)–(b4) Phase reconstruction results with the MEIF algorithm, magnified views, and the quantitative distribution along the indicated lines. (c1)–(c4) Phase reconstruction results with the HDR algorithm, magnified views, and the quantitative distribution along the indicated lines.

    Although the intensity reconstruction results of MEIF may be slightly inferior to traditional HDR methods, with a minor difference in the visible groups in the central region, MEIF ensures the continuity and accuracy of the reconstruction. By preventing intensity inversion, MEIF maintains spectral continuity and accuracy, which is more critical than merely improving intensity resolution. Furthermore, by effectively utilizing the high-frequency information embedded in overexposed data, MEIF significantly enhances phase information.

    The improvement in phase reconstruction is even more pronounced. MEIF provides a substantial enhancement in phase resolution, as demonstrated in the comparisons between Figs. 4(b1), 4(b2) and 4(c1), 4(c2), where the phase contrast is noticeably improved. This confirms that the MEIF algorithm more effectively leverages the information provided by high-angle illumination.

    Comparisons between Figs. 4(b3), 4(b4) and 4(c3), 4(c4) further illustrate that the phase recovery achieved by MEIF is much clearer, delivering unparalleled phase details, particularly in more complex samples. This advantage becomes even more apparent in subsequent tests conducted with biological specimens.

    3.3 Biological Tissues

    The ultimate goal of computational microscopy is to enhance performance in biomedical research, delivering practical benefits for real-world applications. Consequently, a simple USAF resolution target is insufficient for comprehensive validation. To demonstrate the advantages of MEIF in realistic scenarios, we evaluated its performance using biological samples. Specifically, we used the onion epidermis as the test sample and compared the intensity and phase results obtained by MEIF with those from the traditional HDR method. The raw data were acquired using a low-magnification objective (4×, 0.1 NA, Nikon, Tokyo, Japan), and the detailed experimental parameters are provided in Table S2 in the Supplementary Material.

    Figure 5(a) illustrates the slicing-free WSI reconstruction results of the onion epidermis with different preprocessing methods. Figure 5(b) presents the corresponding phase reconstruction of the same region. It is evident that the MEIF method provides the clearest details and contrast, both in intensity and phase images. To facilitate a closer examination, we carefully selected three information-rich regions of interest (ROIs) for comparison. By comparing the reconstructed results with the ground truth obtained using a high-magnification objective, we observed that the single HDR algorithm offers more detailed results than single-exposure reconstructions. However, MEIF produced significantly clearer intensity results with higher contrast, revealing more distinct shapes of cell nuclei and certain cellular textures, which closely resemble the ground truth. This demonstrates the efficiency and accuracy of the MEIF algorithm.


    Figure 5. Reconstruction results of the onion epidermis. (a) WSI intensity reconstruction; (b) WSI phase reconstruction; (c1), (d1), (e1) ground truth for ROIs 1 to 3; (c2)–(c7), (d2)–(d7), (e2)–(e7) amplitude and phase reconstruction results for ROIs 1 to 3; (e8)–(e14) quantitative distribution for the line-scan regions in ROI 3 (e1)–(e7), where the horizontal coordinate spans 0 to 25 μm.

    To quantitatively validate the superiority of MEIF, we isolated a specific texture within ROI 3 for detailed comparison. Notably, only the MEIF algorithm was able to reveal the clear undulations in this region, demonstrating its ability to capture subtle biological textures, as shown in Figs. 5(e10) and 5(e3). Such fine details remain undetectable with both the HDR and single-exposure algorithms [see Figs. 5(e2), 5(e8), 5(e4), and 5(e10)]. Furthermore, to confirm the validity of these enhanced details, we compared the reconstructed results with images acquired using a high-magnification objective (20×, 0.4 NA), serving as ground truth. As evident from the quantitative comparisons [Figs. 5(e8)–5(e11)], only the MEIF reconstruction successfully recovered the intricate details captured by the 20× objective, with even more pronounced contrast in some cases. This is attributed to the preservation of the 4× objective's larger depth of field in MEIF reconstructions, which is significantly greater than that of the 20× objective. This inherent advantage of FPM imaging facilitates a more comprehensive and accurate observation of biological samples.

    Phase reconstruction of biological samples is a critical aspect of our study, and MEIF demonstrates superior performance in this domain. Using the same color bar to compare phase images obtained from the three methods, it is evident that MEIF consistently delivers sharper details, both in the full image and the zoomed-in ROIs. Notably, MEIF distinctly captures the central cell nucleus and surrounding structures in ROI 2, as well as the longitudinal textures in ROI 3. By contrast, the HDR and single-exposure methods produce blurred and indistinct features, which are effectively mitigated by MEIF. These results underscore the substantial improvements that MEIF offers in phase recovery, enabling the accurate preservation of intricate biological structures.

    4 Discussion

    4.1 Multi-exposure Image Fusion and Other Neural Network-Assisted Algorithms

    Neural networks have become widely used in computational microscopy; however, they face inherent limitations that hinder broad generalization. The two primary challenges are their lack of robust generalizability46,47 and their inability to efficiently process a large number of input images simultaneously.46

    MEIF addresses these challenges by focusing on the processing stages rather than the reconstruction stage, thereby circumventing the need to input large datasets into neural networks. Many other studies have tackled this issue through alternative imaging algorithms or by entirely bypassing the reconstruction stage.48,49 Moreover, this approach eliminates the reliance on specific post-processing steps that often require sample-specific training, a practice that tends to cause overfitting to similar or even identical samples, ultimately limiting generalizability to diverse sample types.49

    The use of two convolutional layers as feature extractors—one of the most versatile and widely adopted image fusion techniques44—combined with carefully selected processing stages, provides an effective and innovative approach for image fusion in computational microscopy. MEIF achieves remarkable generalization without requiring training on any microscopic imaging data, ensuring unbiased performance across diverse sample types.

    As demonstrated earlier, MEIF produced outstanding results with both biological samples and the USAF resolution target. To further validate its robustness, Fig. 6 presents a more challenging scenario involving a 19×19 LED array, a 456.7-nm blue light source, and a completely different sample: animal connective tissue. Despite these variations, MEIF maintains exceptional performance.


    Figure 6. FD-FPM reconstruction results of animal connective tissue: (a1)–(a4) stitching-free reconstruction results after MEIF processing, where (a1) and (a2) represent the whole-block recovery of intensity and phase, respectively, and (a3), (a4) are zoomed-in results of the ROIs circled in the images. Similarly, (b1)–(b4) are the stitching-free recovery results for intensity and phase without MEIF, along with zoomed-in ROI results. (c) The results directly captured by a higher-resolution objective (20×/0.75 NA). The reconstruction data are acquired using a lower-resolution (4×/0.1 NA, Nikon) objective.

    For further comparison, we applied conventional FPM reconstruction to the same dataset, and the results (Fig. S6 in the Supplementary Material) show that MEIF consistently outperforms traditional methods, reaffirming its strong adaptability and generalization capability.

    By comparing Figs. 6(a3), 6(b3), and 6(c), it is evident that Fig. 6(a3) exhibits the most detailed and crispest edges. This once again confirms the reliability of MEIF and its ability to significantly enhance information content. The phase results in Figs. 6(a4) and 6(b4) further support this observation. The subdued phase in Figs. 6(b2) and 6(b4) stands in stark contrast to Figs. 6(a2) and 6(a4), where details and textures within the tissue are clearly visible in the phase obtained through MEIF. Similarly, Figs. 6(a1), 6(a3), 6(b1), and 6(b3) highlight the edges of the sample, showcasing a nonstitched FOV achieved through the combination of MEIF and FD-FPM. This combination provides sharper intensity and richer phase, representing significant breakthroughs not observed in other algorithms.

    4.2 Validity and Effectiveness

    The reconstruction in this paper is performed by merging the raw data from five different exposures: EV −1, EV 0, EV +1, EV +2, and EV +4. Experimental results have demonstrated the effectiveness of this exposure combination. However, it is essential to adjust the exposure values to the sample. At a minimum, the merge requires a set of normally exposed images, slightly overexposed images, and more strongly overexposed images. This ensures strong continuity in the data after element-wise maximum fusion and an excellent dark-field signal. In other words, all operations must align with our original intention of enhancing dark-field signals, ensuring signal richness while maintaining signal continuity to the greatest extent possible. This underpins the information-content maximization of our algorithm. A quantitative treatment of exposure selection remains challenging, and the specifics should be guided by the experimental environment and results.

    4.3 Limitations and Challenges

    In Sec. 3, we highlighted the advantages of MEIF as a preprocessing module, including its improvements in image intensity, phase recovery, and robust generalizability. However, these benefits come at a cost. Compared with conventional methods, MEIF requires acquiring more data and longer exposure times, which significantly reduces the temporal resolution of the FPM system. Specifically, although a single acquisition set can be completed in 2 min, MEIF typically extends this to 6 min or more, posing challenges for applications involving rapidly moving live-cell samples.

    Moreover, although MEIF demonstrates substantial improvements in intensity for biological samples, its advantages are less pronounced for simpler targets such as the USAF resolution chart. Although MEIF effectively mitigates issues such as inversion and misalignment (as shown in Fig. S4 in the Supplementary Material), it offers limited enhancement in the uniformity of low-frequency information and does not extend the upper limit of high-frequency resolution. This suggests that MEIF may still be less effective for samples dominated by low-frequency information, where traditional methods can achieve comparable results.

    5 Conclusion and Outlook

    Experimental results have shown that both traditional FPM (see Supplement 1 in the Supplementary Material) and FD-FPM benefit noticeably from MEIF, which significantly enhances both the intensity and phase information in the reconstructed images.

    At the same time, MEIF also has some limitations and challenges, such as longer image acquisition times and less-than-ideal imaging performance for certain simple samples. We have observed that the effectiveness of MEIF lies not only in the efficient utilization of information but also in its capability to combine that information into a suitable imaging model. Consequently, future work on single-frame preprocessing and brightness allocation correction is highly anticipated as these improvements may help achieve better reconstruction results without increasing the acquisition time. In addition, investigating and addressing the fundamental causes of these imaging shortcomings represent a promising direction for further research.

    However, the achievements of MEIF represent only an initial step. More importantly, to our knowledge, it is the first highly generalizable CNN-based image fusion module to be integrated into computational microscopy, thereby extending the imaging pipeline and increasing overall data throughput. We anticipate breakthroughs in related microscopy fields in the future.

    In the domain of image fusion, particularly with MEIF, specialized models may emerge. One promising direction is to develop a multi-exposure fusion framework tailored to specific samples, potentially enhancing both speed and resolution. Another avenue is to explore image fusion across different dimensions, such as extending depth of field by combining multiple focal planes, which may alleviate current limitations in depth-of-field extension. In addition, the breadth of ptychography's applications in microscopy, crystal diffraction imaging, and remote sensing underscores the opportunity for analogous methods. Given these prospects, although our current focus is on improving FPM imaging quality with MEIF, we believe that this universal method could pave the way for maximizing information content in future research.

    Zhiping Wang is currently pursuing an MS degree in physics of life at the Biozentrum, University of Basel, Switzerland. He received his bachelor’s degree in physics from Lanzhou University, China, in 2024. His current research focuses on computational imaging and biophysics.

    Tianci Feng is a PhD student in optics at the Xi’an Institute of Optics and Precision Mechanics (XIOPM), Chinese Academy of Sciences (CAS), China. He received his bachelor’s degree in mechanical engineering from Sichuan Agricultural University, China, in 2021. His current research focuses on Fourier ptychographic microscopy.

    Aiye Wang is a PhD student in optics at the XIOPM, CAS, China. He received his bachelor’s degree in communication engineering from Soochow University, China, in 2020. His current research focuses on Fourier ptychographic microscopy.

    Jinghao Xu is a PhD student in optics at the XIOPM, CAS, China. He received his bachelor’s degree in mechanical engineering and automation from Xi’an Jiaotong University, China, in 2021. His current research focuses on Fourier ptychographic microscopy.

    An Pan is an associate professor and a principal investigator at the XIOPM, CAS, China, and the head of the Pioneering Interdiscipline Center of the State Key Laboratory of Transient Optics and Photonics. He received his bachelor’s degree in electronic science and technology from Nanjing University of Science and Technology (NJUST), China, in 2014, and his PhD in optical engineering from the XIOPM, CAS, China, in 2020. He was a visiting graduate at Bar-Ilan University, Israel, in 2016 and at the California Institute of Technology (Caltech), United States, from 2018 to 2019. His current research focuses on computational optical imaging and biophotonics; he is among the first to have worked on Fourier ptychography. He was selected as a 2024 Optica Ambassador and was the winner of the 2021 Forbes China 30 Under 30 List, the 2021 Excellent Doctoral Dissertation of CAS, the 2020 Special President Award of CAS, the 2019 OSA Boris P. Stoicheff Memorial Scholarship, the 1st Place Poster Award of the 69th Lindau Nobel Laureate Meetings in Germany (Lindau Scholar), and the 2017 SPIE Optics and Photonics Education Scholarship. He has published 40 peer-reviewed journal papers and is a referee for more than 40 peer-reviewed journals. He is an early career member of Optica and SPIE.

    References

    [1] B. Sun et al. 3D computational imaging with single-pixel detectors. Science, 340, 844-847(2013). https://doi.org/10.1126/science.1234454

    [2] Z. Liu et al. All-fiber high-speed image detection enabled by deep learning. Nat. Commun., 13, 1433(2022). https://doi.org/10.1038/s41467-022-29178-8

    [3] X. Li et al. A multi-frame image super-resolution method. Signal Process., 90, 405-414(2010). https://doi.org/10.1016/j.sigpro.2009.05.028

    [4] X. Liu et al. Non-line-of-sight imaging using phasor-field virtual wave optics. Nature, 572, 620-623(2019). https://doi.org/10.1038/s41586-019-1461-3

    [5] S. Li et al. Lensless camera: unraveling the breakthroughs and prospects. Fundam. Res.(2024). https://doi.org/10.1016/j.fmre.2024.03.019

    [6] A. F. Coskun, A. Ozcan. Computational imaging, sensing and diagnostics for global health applications. Curr. Opin. Biotechnol., 25, 8-16(2014). https://doi.org/10.1016/j.copbio.2013.08.008

    [7] L. Lu et al. Hybrid brightfield and darkfield transport of intensity approach for high-throughput quantitative phase microscopy. Adv. Photonics, 4, 056002(2022). https://doi.org/10.1117/1.AP.4.5.056002

    [8] P. F. Gao, G. Lei, C. Z. Huang. Dark-field microscopy: recent advances in accurate analysis and emerging applications. Anal. Chem., 93, 4707-4726(2021). https://doi.org/10.1021/acs.analchem.0c04390

    [9] A. Pan et al. Subwavelength resolution Fourier ptychography with hemispherical digital condensers. Opt. Express, 26, 23119-23131(2018). https://doi.org/10.1364/OE.26.023119

    [10] Z. F. Phillips, R. Eckert, L. Waller. Quasi-dome: a self-calibrated high-NA LED illuminator for Fourier ptychography, IW4E–5(2017).

    [11] G. Zheng, R. Horstmeyer, C. Yang. Wide-field, high-resolution Fourier ptychographic microscopy. Nat. Photon., 7, 739-745(2013). https://doi.org/10.1038/nphoton.2013.187

    [12] X. Ou et al. Quantitative phase imaging via Fourier ptychographic microscopy. Opt. Lett., 38, 4845-4848(2013). https://doi.org/10.1364/OL.38.004845

    [13] J. Sun et al. Single-shot quantitative phase microscopy based on color-multiplexed Fourier ptychography. Opt. Lett., 43, 3365-3368(2018). https://doi.org/10.1364/OL.43.003365

    [14] L. Tian et al. Computational illumination for high-speed in vitro Fourier ptychographic microscopy. Optica, 2, 904-911(2015). https://doi.org/10.1364/OPTICA.2.000904

    [15] L. Bian et al. Content adaptive illumination for Fourier ptychography. Opt. Lett., 39, 6648-6651(2014). https://doi.org/10.1364/OL.39.006648

    [16] J. Sun et al. High-speed Fourier ptychographic microscopy based on programmable annular illuminations. Sci. Rep., 8, 7669(2018). https://doi.org/10.1038/s41598-018-25797-8

    [17] Y. Xiao et al. High-speed Fourier ptychographic microscopy for quantitative phase imaging. Opt. Lett., 46, 4785-4788(2021). https://doi.org/10.1364/OL.428731

    [18] Z. Bian, S. Dong, G. Zheng. Adaptive system correction for robust Fourier ptychographic imaging. Opt. Express, 21, 32400-32410(2013). https://doi.org/10.1364/OE.21.032400

    [19] A. Pan et al. Vignetting effect in Fourier ptychographic microscopy. Opt. Lasers Eng., 120, 40-48(2019). https://doi.org/10.1016/j.optlaseng.2019.02.015

    [20] A. Pan et al. System calibration method for Fourier ptychographic microscopy. J. Biomed. Opt., 22, 096005(2017). https://doi.org/10.1117/1.JBO.22.9.096005

    [21] P. Song et al. Full-field Fourier ptychography (FFP): spatially varying pupil modeling and its application for rapid field-dependent aberration metrology. APL Photonics, 4, 050802(2019). https://doi.org/10.1063/1.5090552

    [22] X. Ou, G. Zheng, C. Yang. Embedded pupil function recovery for Fourier ptychographic microscopy. Opt. Express, 22, 4960-4972(2014). https://doi.org/10.1364/OE.22.004960

    [23] Z. Tian et al. Optical remote imaging via Fourier ptychography. Photon. Res., 11, 2072-2083(2023). https://doi.org/10.1364/PRJ.493938

    [24] R. Claveau et al. Digital refocusing and extended depth of field reconstruction in Fourier ptychographic microscopy. Biomed. Opt. Express, 11, 215-226(2020). https://doi.org/10.1364/BOE.11.000215

    [25] F. Xu et al. Fourier ptychographic microscopy 10 years on: a review. Cells, 13, 324(2024). https://doi.org/10.3390/cells13040324

    [26] M. Liang et al. All-in-focus fine needle aspiration biopsy imaging based on Fourier ptychographic microscopy. J. Pathol. Inform., 13, 100119(2022). https://doi.org/10.1016/j.jpi.2022.100119

    [27] A. Williams et al. Fourier ptychographic microscopy for filtration-based circulating tumor cell enumeration and analysis. J. Biomed. Opt., 19, 066007(2014). https://doi.org/10.1117/1.JBO.19.6.066007

    [28] M. Valentino et al. Beyond conventional microscopy: observing kidney tissues by means of Fourier ptychography. Front. Physiol., 14, 206(2023). https://doi.org/10.3389/fphys.2023.1120099

    [29] J. Kim et al. Incubator embedded cell culture imaging system (EMSight) based on Fourier ptychographic microscopy. Biomed. Opt. Express, 7, 3097-3110(2016). https://doi.org/10.1364/BOE.7.003097

    [30] A. C. Chan et al. Parallel Fourier ptychographic microscopy for high-throughput screening with 96 cameras (96 eyes). Sci. Rep., 9, 11114(2019). https://doi.org/10.1038/s41598-019-47146-z

    [31] A. Pan et al. In situ correction of liquid meniscus in cell culture imaging system based on parallel Fourier ptychographic microscopy (96 eyes)(2019).

    [32] O. Akcakr et al. Automated wide-field malaria parasite infection detection using Fourier ptychography on stain-free thin-smears. Biomed. Opt. Express, 13, 3904-3921(2022). https://doi.org/10.1364/BOE.448099

    [33] T. Aidukas, L. Loetgering, A. R. Harvey. Addressing phase-curvature in Fourier ptychography. Opt. Express, 30, 22421-22434(2022). https://doi.org/10.1364/OE.458657

    [34] S. Zhang et al. FPM-WSI: Fourier ptychographic whole slide imaging via feature-domain backdiffraction. Optica, 11, 634-646(2024). https://doi.org/10.1364/OPTICA.517277

    [35] S. Zhang, T. T. Berendschot, J. Zhou. ELFPIE: an error-laxity Fourier ptychographic iterative engine. Signal Process., 210, 109088(2023). https://doi.org/10.1016/j.sigpro.2023.109088

    [36] Y. Zhang, X. Bai, T. Wang. Boundary finding based multi-focus image fusion through multi-scale morphological focus-measure. Inf. Fusion, 35, 81-101(2017). https://doi.org/10.1016/j.inffus.2016.09.006

    [37] J. Zhang et al. Efficient colorful Fourier ptychographic microscopy reconstruction with wavelet fusion. IEEE Access, 6, 31729-31739(2018). https://doi.org/10.1109/ACCESS.2018.2841854

    [38] H. Gao et al. Redundant information model for Fourier ptychographic microscopy. Opt. Express, 31, 42822-42837(2023). https://doi.org/10.1364/OE.505407

    [39] C. Zuo, J. Sun, Q. Chen. Adaptive step-size strategy for noise-robust Fourier ptychographic microscopy. Opt. Express, 24, 20724-20744(2016). https://doi.org/10.1364/OE.24.020724

    [40] S. Jiang et al. Spatial- and Fourier-domain ptychography for high-throughput bio-imaging. Nat. Protoc., 18, 2051-2083(2023). https://doi.org/10.1038/s41596-023-00829-4

    [41] E. Reinhard et al. High Dynamic Range Imaging: Acquisition, Display, and Image-based Lighting(2010).

    [42] H. Zhang et al. Image fusion meets deep learning: a survey and perspective. Inf. Fusion, 76, 323-336(2021). https://doi.org/10.1016/j.inffus.2021.06.008

    [43] F. Xu et al. Multi-exposure image fusion techniques: a comprehensive review. Remote Sens., 14, 771(2022). https://doi.org/10.3390/rs14030771

    [44] Y. Zhang et al. IFCNN: a general image fusion framework based on convolutional neural network. Inf. Fusion, 54, 99-118(2020). https://doi.org/10.1016/j.inffus.2019.07.011

    [45] J. R. Fienup. Reconstruction of an object from the modulus of its Fourier transform. Opt. Lett., 3, 27-29(1978). https://doi.org/10.1364/OL.3.000027

    [46] T. Nguyen et al. Deep learning approach for Fourier ptychography microscopy. Opt. Express, 26, 26470-26484(2018). https://doi.org/10.1364/OE.26.026470

    [47] A. Saha et al. LWGNet-learned Wirtinger gradients for Fourier ptychographic phase retrieval. Lect. Notes Comput. Sci., 13667, 522-537(2022). https://doi.org/10.1007/978-3-031-20071-7_31

    [48] Y. Xue et al. Reliable deep-learning-based phase imaging with uncertainty quantification. Optica, 6, 618-629(2019). https://doi.org/10.1364/OPTICA.6.000618

    [49] X. Wang et al. Fourier ptychographic microscopy reconstruction method based on residual transfer networks. J. Phys. Conf. Ser., 2400, 012015(2022). https://doi.org/10.1088/1742-6596/2400/1/012015