1. INTRODUCTION
Single-photon avalanche diodes (SPADs) are highly sensitive optical detectors that can detect and count individual photons. SPAD arrays, which contain thousands to millions of individual detectors, can now be manufactured at scale with current CMOS technology and are entering the broader commercial market [1]. Coupled with a pulsed light source, SPAD array cameras are often operated in “time-correlated single-photon counting” (TCSPC) schemes that offer picosecond-scale detection resolution for photon time-of-arrival. This has enabled impressive 3D imaging capabilities for autonomous vehicles [2], fluorescence lifetime imaging experiments in microscopy [3], as well as computational imaging tasks such as three-dimensional imaging and ranging [4,5], high-speed burst photography [6], imaging/sensing through scattering media such as fog or tissue [7–9], transient image sensing [10], and forming images of objects hidden by occlusions or around corners [11]. One current limitation of existing SPAD array cameras, as compared to traditional charge-coupled device (CCD) or CMOS camera sensors, is a limited total pixel count. While recent academic work has demonstrated SPAD arrays with up to one million pixels [12,13], commercially available SPAD arrays typically contain hundreds to thousands of pixels. CCD and CMOS sensors, on the other hand, are now commonly available with tens of megapixels. If the goal is to form a high-resolution image over a large area with an SPAD array camera, one typically must resort to some form of mechanical scanning, either of the image sensor, the imaging assembly, or the object of interest, to produce a high-pixel-count final image [14]. Such mechanical scanning is slow and lengthens the imaging process.
One alternative method that we study here, termed Fourier ptychography (FP) [15,16], can improve image resolution without requiring any mechanical scanning. FP is a synthetic aperture imaging technique that acquires standard intensity-only images of an object under illumination from different angles, as shown in Figs. 1(a) and 1(b). An iterative phase retrieval algorithm [17] is then used to convert the sequence of captured low-resolution images into a high-resolution image reconstruction, while simultaneously recovering the missing phase of the scattered light. In the past, FP has primarily been implemented with an LED array for illumination [see Fig. 1(a)] to improve the space–bandwidth product [15] of microscopes, leading to gigapixel-scale high-resolution image reconstructions over large fields of view. FP also recovers the sample’s phase and provides the capability to correct for lens aberrations [18], which has led to low-cost yet high-performance digital microscopes [19,20], and also extends to macroscopic imaging of large objects [21,22]. Over the past several years, FP has been implemented with several types of imaging sensors, including standard CMOS [15], scientific CMOS (sCMOS) [23], intensified CCD (ICCD) cameras [24], and low-cost mobile phone sensors [20]. As SPAD arrays offer a limited pixel count and are also commonly used with rapid active illumination schemes, FP offers a useful means to improve SPAD array imaging performance in a procedure that is compatible with many use cases.
Figure 1. (a) Schematic of an FP imaging system. Time-sequential illumination from different angles is provided by an illumination array. Each LED shifts a unique part of the sample’s Fourier spectrum [shown by circles in (b)] into the imaging lens, producing corresponding bandpass-filtered sample images [example images in (d)]. The three LEDs are chosen as examples in (a), (b), (d), and (e) with a common color scheme. (c) Both operating modes of SPAD array cameras can generate binary images. (d) Experimental FP data and reconstructions using 12-bit image data. (e) Quantized FP performed with binary image measurements provides reconstruction image quality closely matching 12-bit data in (d).
In this work, we aim to study the feasibility of performing FP with SPAD array cameras. To begin down this path, we must consider and account for a distinctive characteristic of SPAD array cameras—a significantly limited image bit depth—which is the primary focus of this investigation. There are two general operating modes for SPAD array cameras, as outlined in Fig. 1(c): (1) the photon-counting mode, which simply counts photon-triggered events, and (2) the TCSPC mode, which records and digitizes time-of-arrival information about photon-triggered events. Both of these modes effectively report low-dynamic-range or binary spatial intensity information.
There has been prior work considering the dynamic range of images in such computational imaging setups. For example, work within compressive sensing [25–27] has examined the dynamic range problem at the extreme of binary image measurements. In addition, several recent works have considered the general problem of phase retrieval from binary measurements in a mathematical context [28–30]. Likewise, work has explored phase retrieval from measured speckle intensities quantized to 1 bit [31]. However, none of these prior studies consider synthetic aperture imaging-type scenarios such as FP, which uses multiple low-resolution binary images to generate a reconstructed image that simultaneously has a higher resolution and a higher dynamic range. Prior work with FP has indirectly accounted for dynamic range challenges with a number of strategies, for example, by varying the image exposure time for each LED [32], ensuring high-dynamic-range capture with an sCMOS camera [23], or employing a high-dynamic-range (HDR) image capture method with multiple exposures per LED [15].
Here, we present a new “quantized” FP image reconstruction procedure that maps low-dynamic-range image data to high-resolution complex-valued image reconstructions. By accounting for each acquired image’s bit depth and exposure duration, we verify in both simulation and experiment that FP can effectively produce a high-dynamic-range reconstruction at the expected resolution from raw binary intensity images. We also experimentally demonstrate FP using SPAD array image data, pointing the way toward future implementations with novel 3D ranging and time-gated imaging applications. Our findings may also be used to improve general FP imaging setups on standard image sensors, as sensor dynamic range is an important factor that has received limited attention to date.
2. METHODS
A. Quantized FP Forward Model
In FP, we assume that we are imaging a thin sample represented by a complex function $o(x,y) = A(x,y)e^{i\phi(x,y)}$, with amplitude $A(x,y)$ describing the sample’s absorption, and phase $\phi(x,y)$ describing its imparted phase shift due to the sample’s thickness variation. We treat the illumination generated by each LED at the sample plane as a quasi-monochromatic plane wave of uniform intensity that travels at a particular angle $\theta_n$: $e^{i2\pi(u_n x + v_n y)}$, where $n$ denotes light from the $n$th LED and $(u_n, v_n)$ is the corresponding spatial frequency offset. Since we have a thin sample, the resultant field emerging from the sample surface is given by the elementwise product $o(x,y)\,e^{i2\pi(u_n x + v_n y)}$. This field then propagates through the lens pupil plane (i.e., back focal plane) and then onto the image sensor. Following principles of Fourier optics, we can represent this field directly before the pupil plane as the Fourier transform of the above product: $\hat{O}(u - u_n, v - v_n)$, where $\hat{O} = \mathcal{F}\{o\}$. Assuming the pupil has a transmission function $P(u,v)$, the field immediately after the pupil plane is given by the product of $\hat{O}(u - u_n, v - v_n)$ and $P(u,v)$. Finally, by modeling propagation to the image sensor with an inverse Fourier transform, we can denote the entire expression for formation of the $n$th image as
$$I_n(x,y) = \left|\mathcal{F}^{-1}\{\hat{O}(u - u_n, v - v_n)\,P(u,v)\}\right|^2. \tag{2}$$
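As a concrete reference, the forward model above can be sketched numerically. This is a minimal illustration rather than the authors' implementation: the function name `fp_forward`, the use of NumPy FFTs, and the integer-pixel Fourier-domain shift are all assumptions of this sketch.

```python
import numpy as np

def fp_forward(obj, pupil, shift_uv):
    """Simulate one low-resolution FP intensity image.

    obj      : complex high-resolution sample function o(x, y)
    pupil    : pupil transmission function P(u, v), same shape as obj
               (a full-field model for simplicity; a real implementation
               typically crops a smaller sub-spectrum instead)
    shift_uv : (du, dv) integer Fourier-domain shift set by the nth
               LED's illumination angle
    """
    # Fourier spectrum of the sample, shifted by the illumination angle
    spectrum = np.fft.fftshift(np.fft.fft2(obj))
    shifted = np.roll(spectrum, shift=shift_uv, axis=(0, 1))
    # Low-pass filter with the lens pupil, then propagate to the sensor
    field = np.fft.ifft2(np.fft.ifftshift(shifted * pupil))
    return np.abs(field) ** 2  # detected intensity I_n(x, y)
```

With an all-pass pupil and no shift, the sketch simply returns the sample's intensity, which is a quick sanity check on the FFT conventions.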
Here, the product of $I_n$ and the exposure time $t_n$ follows from the reciprocity property [33], and $\alpha$ is a constant scalar that accounts for factors involved in the mapping of radiant energy per pixel to an electronic signal (pixel fill factor, quantum efficiency, etc.), yielding the quantized measurement
$$\tilde{I}_n(x,y) = Q\!\left[\alpha\, t_n\, I_n(x,y)\right]. \tag{5}$$
In the above two equations, we assume that both shot noise and sensor read noise are negligible so that the two scenarios (standard FP and quantized FP models) can be easily compared. $Q(\cdot)$ is a quantization function that maps a continuous quantity to a discrete set of positive integer values within the range $[0, 2^b - 1]$, which corresponds to $b$ bits of information:
$$Q(s) = \min\!\left(\lfloor s \rfloor,\, 2^b - 1\right), \qquad s \ge 0.$$
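A minimal sketch of this quantization step, assuming a floor-and-clip form of Q(·) consistent with the description above; the names `quantize` and `capture` are illustrative and not from the original work.

```python
import numpy as np

def quantize(signal, bits):
    """Map a continuous non-negative signal to integers in [0, 2**bits - 1],
    clipping saturated values (a floor-and-clip reading of Q(.))."""
    levels = 2 ** bits - 1
    return np.clip(np.floor(signal), 0, levels).astype(np.int64)

def capture(intensity, t_exp, alpha=1.0, bits=1):
    """Quantized forward model: detected counts Q(alpha * t_exp * I)."""
    return quantize(alpha * t_exp * intensity, bits)
```

For `bits=1` this reduces to a binary detector: any pixel whose scaled intensity reaches 1 saturates to a count of 1, mirroring the SPAD photon-counting case discussed below.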
B. Adaptive Threshold Method
The above forward model provides a straightforward expression for discretized images captured on limited-dynamic-range sensors. However, it assumes that an appropriate relative exposure time $t_n$ has been specified for each captured image.
There are three general issues surrounding the experimental specification of $t_n$. First, the maximum intensity reaching the image sensor for each LED, $I_n^{\max}$, is typically an unknown quantity and must be estimated. Second, this quantity will clearly vary as a function of LED illumination angle, as LEDs near the optical axis will often generate brighter images than those illuminating the sample from higher angles. Third, in many imaging scenarios, image saturation can actually be beneficial. An extreme example is the binary image detection case (e.g., with an SPAD array). In this extreme case, from an information collection perspective, it is typically desirable to select a relative exposure time that ensures approximately half of the pixels output a quantized measurement of 1, while the remaining pixels output 0. Satisfying this condition will typically require overexposure (i.e., saturation) of a large fraction of pixels. To explore the above issues, we consider four different methods of selecting the relative exposure time for each captured image (see Fig. 2).
Figure 2. Example images from a simulation dataset demonstrating the four exposure strategies considered in this work, based on a 32 × 32 LED array. LED positions for the selected images (a), (b), and (c) are shown in (e). The full high-dynamic-range nature of the dataset can be observed in (1).
This “median” exposure time method could be implemented with a variety of binary imaging schemes that utilize a separate photodetector to trigger the exposure on/off period, for example.
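The half-on/half-off condition described above suggests a simple rule: scale the exposure so that the median pixel intensity lands at the binarization threshold. The sketch below is one plausible reading of the "median" strategy; the function name and the unit-threshold convention are assumptions of this illustration.

```python
import numpy as np

def median_exposure(intensity, alpha=1.0, threshold=1.0):
    """Pick the relative exposure t_n that places the median pixel
    intensity at the binarization threshold, so roughly half of the
    pixels quantize to 1 and half to 0."""
    med = np.median(intensity)
    if med == 0:
        raise ValueError("median intensity is zero; cannot set exposure")
    return threshold / (alpha * med)
```

For example, with pixel intensities [1, 2, 3, 4] the rule gives t = 0.4, and floor-quantizing 0.4 × [1, 2, 3, 4] to 1 bit yields [0, 0, 1, 1]: half the pixels on, half off.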
As shown in Fig. 2, each of the strategies above expands or contracts the histogram of measured intensities in a unique way.
C. Quantized FP Reconstruction Algorithm
Figure 3. Flowchart of the proposed quantized FP reconstruction algorithm. Each step is detailed in Section 2.C.
In stage 1, the Fourier spectrum estimate from the previous iteration, $\hat{O}^{(k)}(u,v)$, is shifted according to the illumination angle of the $n$th LED as $\hat{O}^{(k)}(u - u_n, v - v_n)$. It is then low-pass filtered by the pupil function $P(u,v)$ and inverse Fourier transformed to generate an estimate of the complex field at the image plane, $\psi_n(x,y)$. The intensity of this complex field can then be calculated as $I_n^{\mathrm{est}}(x,y) = |\psi_n(x,y)|^2$, which corresponds to the forward model shown in Eq. (2) and is an estimate of the high-dynamic-range ground truth image (at the full precision of the reconstruction software). However, this dynamic range rarely matches that of the image sensor used in experiments, where a quantized low-dynamic-range image $\tilde{I}_n$, of the ground truth image intensity $I_n$, is recorded instead. To account for this discrepancy, our proposed algorithm instead computes the quantized image that would be produced by the assumed detector, $\tilde{I}_n^{\mathrm{est}} = Q[\alpha\, t_n\, I_n^{\mathrm{est}}]$, which corresponds to the forward model in Eq. (5). We hypothesize that this quantized image estimate provides a useful estimate of the experimentally captured and quantized image, $\tilde{I}_n$. The goal of the quantized FP algorithm is to have $\tilde{I}_n^{\mathrm{est}}$ match $\tilde{I}_n$ upon its computation.
In stage 2, the amplitude of the estimated complex field at the image plane, $\psi_n$, is updated using experimentally captured data. In standard FP, $\psi_n$ is updated by directly replacing the estimated amplitudes with measured amplitudes via the following equation:
$$\psi_n'(x,y) = \sqrt{\frac{\tilde{I}_n(x,y)}{\alpha\, t_n}}\; \frac{\psi_n(x,y)}{|\psi_n(x,y)|}. \tag{8}$$
Here, $\psi_n'$ is the updated complex field in the image plane and $\psi_n/|\psi_n|$ carries the phase of $\psi_n$. Note that Eq. (8) does not consider image quantization—that is, the measurement $\tilde{I}_n$ is directly employed for the reconstruction update and is not compared to the image estimate in quantized form. In our proposed model, we modify this update step to ensure effective and accurate algorithm convergence with, for example, binary SPAD array data. Specifically, we account for the discrepancy between the current algorithm's quantized image estimate $\tilde{I}_n^{\mathrm{est}}$ and the experimentally captured and quantized image $\tilde{I}_n$ to promote an accurate image update and account for update inconsistencies [34]. We introduce an indicator, termed the error map, between the actual and estimated discretized images as follows:
$$M_n(x,y) = \tilde{I}_n(x,y) - \tilde{I}_n^{\mathrm{est}}(x,y). \tag{9}$$
If $M_n$ is zero at a particular pixel, the corresponding estimate $\psi_n$'s detected and discretized intensity value at that pixel has accurately converged to the correct quantized value. Hence, we assume that this particular value of $\psi_n$ no longer requires an intensity update. Otherwise, an update is still necessary. To incorporate this observation into ptychography's iterative phase retrieval cycle, quantized FP replaces Eq. (8) with the following update function:
$$\psi_n'(x,y) = \begin{cases} \psi_n(x,y), & M_n(x,y) = 0, \\[4pt] \sqrt{\dfrac{\tilde{I}_n(x,y)}{\alpha\, t_n}}\; \dfrac{\psi_n(x,y)}{|\psi_n(x,y)|}, & M_n(x,y) \neq 0. \end{cases} \tag{10}$$
In the binary case, this modification ensures that at every iteration the estimated image's amplitude is either updated with the appropriate measured value or retains its estimated value, the latter occurring when the estimate's quantized pixel intensity agrees with the experimental measurement. In stage 3, $\psi_n$ and $\psi_n'$ are used to update the Fourier spectrum estimate of the sample via the standard ePIE-type update:
$$\hat{O}^{(k+1)}(u - u_n, v - v_n) = \hat{O}^{(k)}(u - u_n, v - v_n) + \frac{P^*(u,v)}{|P(u,v)|^2_{\max}}\left[\Psi_n'(u,v) - \Psi_n(u,v)\right],$$
where $\Psi_n = \mathcal{F}\{\psi_n\}$ and $\Psi_n' = \mathcal{F}\{\psi_n'\}$ are the image-plane fields transformed back to the Fourier domain.
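The error-map-gated amplitude update, which is the key novelty of stage 2, can be sketched as follows. This is an illustrative reimplementation under the notational assumptions above (floor-and-clip quantizer, scalar α and exposure t), not the authors' code; the function name is hypothetical.

```python
import numpy as np

def quantized_amplitude_update(psi, i_meas_q, t_exp, alpha=1.0, bits=1):
    """Stage-2 amplitude update gated by the error map.

    psi      : current complex field estimate at the image plane
    i_meas_q : experimentally captured quantized image (integer counts)
    """
    levels = 2 ** bits - 1
    # Quantized image the assumed detector would produce from the estimate
    i_est_q = np.clip(np.floor(alpha * t_exp * np.abs(psi) ** 2), 0, levels)
    error_map = i_meas_q - i_est_q           # zero where quantized values agree
    phase = psi / (np.abs(psi) + 1e-12)      # e^{i angle(psi)}, regularized
    replaced = np.sqrt(i_meas_q / (alpha * t_exp)) * phase
    # Keep the current estimate wherever it already quantizes correctly
    return np.where(error_map == 0, psi, replaced)
```

Pixels whose quantized estimate already matches the measurement pass through untouched; only disagreeing pixels have their amplitude replaced by the (rescaled) measured amplitude.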
3. VALIDATION
A. Simulation
To validate our proposed quantized FP reconstruction algorithm and the adaptive exposure strategy for low-dynamic-range and binary image datasets, we performed a series of simulations. First, we simulated imaging of standard resolution test targets and natural samples with both 8-bit and 1-bit image sensors to demonstrate the resolution improvement and reconstruction quality. Second, we used several open-source pathology images as samples to compare the average performance of our proposed reconstruction method to standard methods, and to examine the performance of each proposed exposure strategy. Finally, we studied the influence of the overlap ratio of Fourier sub-spectra on reconstruction quality.
Simulations assumed an imaging NA of 0.1 and a 2.4 μm detector pixel size. We assumed a 0.205 illumination NA provided by a 15 × 15 LED illumination array (225 low-resolution images captured in total). These parameters were used for all simulations except the overlap-influence simulations, where the illumination NA was kept constant but the total number of LEDs and their pitch were varied to provide different overlap values. Poisson noise was added to the low-resolution images generated in the simulation before discretization. To evaluate the quality of the complex reconstruction results, we used the following normalized mean square error [36] metric:
$$\mathrm{NMSE} = \min_{c} \frac{\left\| o_{\mathrm{ref}} - c\, o_{\mathrm{rec}} \right\|^2}{\left\| o_{\mathrm{ref}} \right\|^2}.$$
Here, $o_{\mathrm{ref}}$ is the normalized reference complex image, $o_{\mathrm{rec}}$ is the normalized reconstructed complex image, and $c$ is a complex scale factor that accounts for the arbitrary constant phase offset of the reconstruction [36].
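This phase-offset-invariant error metric can be sketched as below, assuming the standard least-squares choice of the complex scale factor; `nmse` is an illustrative name.

```python
import numpy as np

def nmse(ref, rec):
    """Normalized mean square error between complex images, minimized over
    a global complex scale factor c, which absorbs the arbitrary constant
    phase offset of a phase-retrieval reconstruction."""
    ref = ref / np.linalg.norm(ref)
    rec = rec / np.linalg.norm(rec)
    # Least-squares optimal scale: c = <rec, ref> / ||rec||^2 (= <rec, ref>
    # here, since rec is normalized); np.vdot conjugates its first argument.
    c = np.vdot(rec, ref)
    return np.linalg.norm(ref - c * rec) ** 2
```

A reconstruction that differs from the reference only by a global phase (e.g., multiplied by e^{0.7i}) therefore scores an NMSE of zero, as it should.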
Simulation results in Fig. 4 demonstrate the resolution improvement and image quality for cases using 1-bit images captured with the median exposure strategy and 8-bit images captured with the max exposure strategy. An amplitude-only USAF target and a phase-only Siemens star target were used to verify the resolution improvement, where the expected resolution cutoff is 2.04 μm per line pair (lp). In this simulation, group 1 element 1 in the USAF target corresponds to approximately 2 μm per line pair, while group 1 element 2 is 1.78 μm per line pair. Both standard FP and our proposed quantized FP reconstruction using binary data can resolve the former element but not the latter, achieving the maximum expected resolution performance. The contrast ratios between the binary and 8-bit results in Fig. 4(a), columns (1) and (2), are 0.998 and 0.995, respectively. A contrast-varying resolution target [Fig. 4(a3)] highlights how the complex reconstruction achieved with an 8-bit image dataset maintains higher resolution in areas of low object contrast (yellow line, 19% contrast relative to the maximum sinusoid contrast at right) than the 1-bit image dataset reconstruction. However, the 1-bit dataset reconstruction still offers the full resolution improvement to the expected cutoff at a slightly higher contrast (34%, orange line). Finally, the natural sample reconstruction quality looks nearly identical in the 8-bit and binary imaging cases, with similar normalized mean square error (NMSE) values (0.010 and 0.042, respectively). These results demonstrate that FP is possible with binary measurements, despite the significantly altered and deteriorated captured image data [see Fig. 4, column (4)]. Somewhat surprisingly, the quality of the high-resolution reconstruction using quantized FP appears comparable to the standard FP case with 8-bit images.
Figure 4.Simulation results with quantized FP reconstruction algorithm. (a) Test targets used to assess reconstruction performance with 8-bit imaging (left) and binary 1-bit imaging (right). (b) Example natural scene with unique amplitude and phase. For all cases, reconstruction of high-resolution amplitude and phase from binary image measurements closely matches that from 8-bit image measurement, despite significant decrease in detected information.
A comparison between the standard FP reconstruction algorithm [Eq. (8)] and the proposed quantized FP algorithm [Eq. (10)] is performed in Fig. 5(a).
Figure 5. NMSE performance of the quantized FP algorithm as a function of raw image bit depth, evaluated using a pathology dataset consisting of 30 unique samples. Each point in plots (a) and (b) is the average NMSE across all 30 samples. An example low-resolution image and reconstruction pair is shown for 1-, 2-, 3-, and 8-bit raw image datasets.
As mentioned in the previous section, FP algorithm performance for high-resolution reconstruction depends upon various parameters. A crucial parameter is the redundancy in the captured data. As a phase retrieval method, FP must sample more pixels than it reconstructs. While increasing data redundancy is not usually ideal (e.g., it increases data capture time and memory requirements), a minimum redundancy criterion must be met for ptychographic image reconstruction strategies [37]. To investigate the impact of this parameter on quantized FP, we varied the number of captured images by varying the number of assumed illumination sources in the illumination array. This allowed us to keep the illumination numerical aperture constant while varying the amount of image dataset redundancy. For each illumination configuration, we then computed the FP reconstruction error (NMSE) for a number of different assumed image sensor bit depths. We summarize data redundancy by computing the overlap percentage between adjacent captured sub-spectra, which are shifted by $\Delta u$ in the Fourier spectrum domain, where $\Delta u$ corresponds to the angular shift between adjacent illumination sources. The results of this analysis are plotted in Fig. 6, where each point is an average of the NMSE reconstruction error across a set of 12 simulated complex samples (from the open-source Cell Image Library [38]). Here, it can be seen that low-bit images (1–2 bits) require a much higher overlap, and thus greater data redundancy, for accurate reconstruction. All bit depths higher than 4 exhibit relatively similar performance. This is partially attributed to our quantized FP algorithm and our use of the proposed adaptive threshold method.
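The overlap percentage between adjacent circular pupil sub-spectra follows from the classic circle-circle intersection ("lens") area formula. Below is a sketch, assuming circular pupils of radius NA/λ and treating the LED-to-LED angular shift as a Fourier-domain center separation; the function name and normalization to unit wavelength are conveniences of this illustration.

```python
import numpy as np

def overlap_ratio(na_obj, delta_u, wavelength=1.0):
    """Fractional area overlap of two adjacent pupil sub-spectra.

    Two circles of radius r = NA_obj / lambda whose centers are separated
    by delta_u in the Fourier domain; returns overlap area / circle area.
    """
    r = na_obj / wavelength
    d = delta_u
    if d >= 2 * r:
        return 0.0  # circles no longer intersect: zero redundancy
    # Area of the lens-shaped intersection of two equal circles
    lens = 2 * r**2 * np.arccos(d / (2 * r)) - (d / 2) * np.sqrt(4 * r**2 - d**2)
    return lens / (np.pi * r**2)
```

For example, a center separation equal to the pupil radius gives roughly 39% overlap, a commonly quoted operating point for ptychographic redundancy.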
Figure 6. Simulation results comparing the influence of the FP overlap ratio on reconstruction quality.
B. Hybrid Experiment–Simulations
Before testing quantized FP with an SPAD array, we first performed a series of preliminary experiments with a standard CMOS camera (Basler acA4024-29um, 12-bit CMOS sensor) to capture FP datasets, whose image data was manipulated post-capture to exhibit a variety of different bit depths per pixel. The FP imaging system in these experiments used a 0.1 NA objective lens with a 0.42 illumination NA, using red LEDs for illumination.
Figure 7. Experimental results from an FP setup using a 12-bit CMOS image sensor, whose image data was binarized post-capture before reconstruction. Quantized FP algorithm reconstructions using binarized image data.
4. FP WITH SPAD ARRAY CAMERAS
A. Setup
To demonstrate quantized FP with an SPAD array camera, we utilized the PF32 SPAD array camera from Photon Force Ltd., which has 32 × 32 pixels with a pixel pitch of 50 μm and a 1.5% fill factor. A 0.1 NA Olympus plan achromatic objective with a 600 mm focal length tube lens was used for imaging and to provide sufficient magnification for Nyquist sampling. The field of view (FOV) of the SPAD array camera is extremely small due to the limited number of pixels available, so a regular camera (Basler acA4024-29um) was also used to help with system alignment. A 90:10 beam splitter was inserted after the tube lens, and the regular camera was mounted on the 10% reflection side. An Adafruit P4 LED array placed 84 mm behind the sample provided illumination; its 632 nm red LEDs provided a 0.18 illumination NA.
SPAD array cameras are typically used with high-powered pulsed optical sources, for example in LIDAR. In our case, we chose LED array illumination for demonstration simplicity. These LEDs are effectively continuous-wave and do not provide the high power desirable for optimal SPAD array camera operation, especially at the large source–sample separation that our setup necessitated. Even in a standard FP setup, exposure times of up to 1 s are required to achieve a sufficient signal-to-noise ratio (SNR) when using such LED arrays [15]. Despite the single-photon sensitivity of the SPAD array camera, its low fill factor (1.5%) still required relatively lengthy exposure times (up to 1 s or more for dark-field images). This was only possible in the photon-counting mode, so we chose this mode for our initial demonstrations in the following experiments.
B. Results
Before we performed FP with the SPAD array camera, we executed a series of operations to account for the SPAD array's read noise, which are detailed in Appendix A. First, we captured a sequence of short-exposure images (3 μs each) per illumination angle, as opposed to a single long exposure, which allowed us to accumulate more photons and achieve a higher SNR. Second, we captured a matching dataset without a sample present, which we used for hot-pixel noise subtraction.
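These two preprocessing steps can be sketched as follows. The details (integer accumulation, clipping the background-subtracted result at zero) are assumptions of this illustration, and the function name is hypothetical.

```python
import numpy as np

def accumulate_frames(frames, background_frames):
    """Sum a burst of short-exposure SPAD frames and subtract the matching
    no-sample background burst to suppress hot-pixel counts.

    frames            : sequence of 2D photon-count frames with the sample
    background_frames : matching sequence captured without a sample
    """
    signal = np.sum(np.asarray(frames, dtype=np.int64), axis=0)
    background = np.sum(np.asarray(background_frames, dtype=np.int64), axis=0)
    # Clip at zero: a pixel cannot have a negative photon count
    return np.clip(signal - background, 0, None)
```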
Completing these two steps per image capture session resulted in relatively high-quality SPAD array image datasets, which we subsequently processed with our quantized FP algorithm for several different targets (see Fig. 8). First, we imaged a USAF resolution test target to validate the quantitative improvement in SPAD array imaging resolution. The low-resolution HDR images were able to resolve group 7 element 3 (6.2 μm full-pitch resolution). Applying the standard FP reconstruction algorithm to this image data led to image reconstructions in which we could resolve group 8 element 6 (2.2 μm full-pitch resolution), an approximately 2.8× improvement in resolution. This closely matches the expected resolution gain for this FP setup. There are several primarily low-frequency artifacts in the reconstruction, due in part to large detector read noise. The quantized FP reconstruction formed by processing binary SPAD array image data is also able to resolve group 8 element 6, albeit at a lower contrast and with larger noise-induced low-frequency artifacts, as expected from our simulation results.
Figure 8. Experimental results from the SPAD array camera. Comparison between standard FP on the HDR dataset and quantized FP on the binarized dataset.
We also imaged a Siemens star resolution target and performed FP reconstructions in a similar manner.
5. DISCUSSION AND CONCLUSION
SPAD array cameras are an exciting new class of single-photon-sensitive sensors with high-speed time-gating capabilities. In this work, we studied the feasibility of performing Fourier ptychography with such sensors to jointly improve their imaging resolution and retrieve the unknown phase of the imaged field. To achieve this effectively, we presented an FP forward model that accounts for the finite bit depth of an image sensor and for image exposure selection, along with a new quantized FP reconstruction algorithm that accounts for this finite bit depth during iterative reconstruction. The result is the surprising ability to form high-quality complex-valued image reconstructions from binary-valued image data. However, we stopped short of presenting an implementation that utilizes the time-of-flight capability of these sensors. This extension would most likely require alternative illumination sources (as opposed to LEDs), which would also help to address the imaging noise issues that our demonstration faced and potentially improve resolution enhancement performance. As an alternative to the SPAD array used in this work, large-format quanta image sensors [43] could also be adopted for future implementations of quantized FP. CMOS SPAD quanta image sensors can collect single-photon data with a 61% fill factor at 100,000 frames per second [44], for example, which is a significantly higher fill factor than that of the SPAD array utilized in our experiments. Such alternative sensors, as well as large-format SPAD arrays with small pixel pitch and high fill factor [45], are beginning to enter the commercial market and can also be used to collect highly quantized or single-photon image frames for quantized FP. As the pixel count and fill factor of such detectors continue to improve, we are hopeful that future implementations of quantized FP can improve upon the preliminary experiments in this work.
There are several direct ways to extend our current demonstration to utilize time-of-flight measurements. Most directly, photons from a particular time range may be selected to form a set of binary images for depth-selective resolution improvement and phase measurement, which may prove particularly helpful when imaging within a scattering environment. Alternatively, one may utilize the time-of-flight image data to improve high-resolution 3D image reconstruction. For this latter goal, one may adopt a 3D imaging forward model for FP, which has been considered in prior work within the context of microscopy [46–48] but not yet within the more macroscopic regime with current SPAD technology’s millimeter-scale depth resolution. Given that FP can currently reconstruct 3D depth information both via phase measurement and by tomographic principles, however, adding in time-of-flight information offers an interesting possibility for future development.
Acknowledgment
The authors would like to thank Kernel Inc. for their kind assistance.
APPENDIX A: SPAD ARRAY CAMERA READ NOISE
We performed a series of tests examining the SPAD array read noise in the photon-counting mode to settle on the final image capture strategy for this work's preliminary experiments. Our illumination strategy required relatively long exposure times for the results shown in Section 4.
Figure 9. (a1), (a2) Example raw SPAD array data and (b1), (b2) the corresponding denoised versions. (c1), (c2) SNR for selected bright-field and dark-field images.
References
[1] C. Bruschini, H. Homulle, I. Antolovic, S. Burri, E. Charbon. Single-photon avalanche diode imagers in biophotonics: review and outlook. Light Sci. Appl., 8, 87(2019).
[2] D. Bronzi, Y. Zou, F. Villa, S. Tisa, A. Tosi, F. Zappa. Automotive three-dimensional vision through a single-photon counting SPAD camera. IEEE Trans. Intell. Transp. Syst., 17, 782-795(2015).
[3] L. Parmesan, N. Dutton, N. J. Calder, N. Krstajic, A. J. Holmes, L. A. Grant, R. K. Henderson. A 256 × 256 SPAD array with in-pixel time to amplitude conversion for fluorescence lifetime imaging microscopy. International Image Sensor Workshop, 900, M5(2015).
[4] A. Gupta, A. Ingle, A. Velten, M. Gupta. Photon-flooded single-photon 3D cameras. IEEE Conference on Computer Vision and Pattern Recognition, 6770-6779(2019).
[5] S. Chan, A. Halimi, F. Zhu, I. Gyongy, R. K. Henderson, R. Bowman, S. McLaughlin, G. S. Buller, J. Leach. Long-range depth imaging using a single-photon detector array and non-local data fusion. Sci. Rep., 9, 8075(2019).
[6] S. Ma, S. Gupta, A. C. Ulku, C. Bruschini, E. Charbon, M. Gupta. Quanta burst photography. ACM Trans. Graph., 39, 79(2020).
[7] G. Satat, B. Heshmat, D. Raviv, R. Raskar. All photons imaging through volumetric scattering. Sci. Rep., 6, 33946(2016).
[8] A. Lyons, F. Tonolini, A. Boccolini, A. Repetti, R. Henderson, Y. Wiaux, D. Faccio. Computational time-of-flight diffuse optical tomography. Nat. Photonics, 13, 575-579(2019).
[9] W. Liu, R. Qian, S. Xu, P. Chandra Konda, J. Jönsson, M. Harfouche, D. Borycki, C. Cooke, E. Berrocal, Q. Dai, H. Wang, R. Horstmeyer. Fast and sensitive diffuse correlation spectroscopy with highly parallelized single photon detection. APL Photonics, 6, 026106(2021).
[10] M. O’Toole, F. Heide, D. B. Lindell, K. Zang, S. Diamond, G. Wetzstein. Reconstructing transient images from single-photon sensors. IEEE Conference on Computer Vision and Pattern Recognition, 1539-1547(2017).
[11] G. Gariepy, F. Tonolini, R. Henderson, J. Leach, D. Faccio. Detection and tracking of moving objects hidden from view. Nat. Photonics, 10, 23-26(2016).
[12] K. Morimoto, A. Ardelean, M.-L. Wu, A. C. Ulku, I. M. Antolovic, C. Bruschini, E. Charbon. Megapixel time-gated SPAD image sensor for 2D and 3D imaging applications. Optica, 7, 346-354(2020).
[13] V. Zickus, M.-L. Wu, K. Morimoto, V. Kapitany, A. Fatima, A. Turpin, R. Insall, J. Whitelaw, L. Machesky, C. Bruschini, D. Faccio, E. Charbon. Fluorescence lifetime imaging with a megapixel SPAD camera and neural network lifetime estimation. Sci. Rep., 10, 20986(2020).
[14] M. Buttafava, F. Villa, M. Castello, G. Tortarolo, E. Conca, M. Sanzaro, S. Piazza, P. Bianchini, A. Diaspro, F. Zappa, G. Vicidomini, A. Tosi. SPAD-based asynchronous-readout array detectors for image-scanning microscopy. Optica, 7, 755-765(2020).
[15] G. Zheng, R. Horstmeyer, C. Yang. Wide-field, high-resolution Fourier ptychographic microscopy. Nat. Photonics, 7, 739-745(2013).
[16] P. C. Konda, L. Loetgering, K. C. Zhou, S. Xu, A. R. Harvey, R. Horstmeyer. Fourier ptychography: current applications and future promises. Opt. Express, 28, 9603-9630(2020).
[17] L.-H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, L. Waller. Experimental robustness of Fourier ptychography phase retrieval algorithms. Opt. Express, 23, 33214-33240(2015).
[18] X. Ou, G. Zheng, C. Yang. Embedded pupil function recovery for Fourier ptychographic microscopy. Opt. Express, 22, 4960-4972(2014).
[19] S. Dong, K. Guo, P. Nanda, R. Shiradkar, G. Zheng. FPscope: a field-portable high-resolution microscope using a cellphone lens. Biomed. Opt. Express, 5, 3305-3310(2014).
[20] T. Aidukas, R. Eckert, A. R. Harvey, L. Waller, P. C. Konda. Low-cost, sub-micron resolution, wide-field computational microscopy using opensource hardware. Sci. Rep., 9, 7457(2019).
[21] S. Dong, R. Horstmeyer, R. Shiradkar, K. Guo, X. Ou, Z. Bian, H. Xin, G. Zheng. Aperture-scanning Fourier ptychography for 3D refocusing and super-resolution macroscopic imaging. Opt. Express, 22, 13586-13599(2014).
[22] J. Holloway, M. S. Asif, M. K. Sharma, N. Matsuda, R. Horstmeyer, O. Cossairt, A. Veeraraghavan. Toward long-distance subdiffraction imaging using coherent camera arrays. IEEE Trans. Comput. Imaging, 2, 251-265(2016).
[23] L. Tian, Z. Liu, L.-H. Yeh, M. Chen, J. Zhong, L. Waller. Computational illumination for high-speed in vitro Fourier ptychographic microscopy. Optica, 2, 904-911(2015).
[24] T. Aidukas, P. C. Konda, A. R. Harvey, M. J. Padgett, P.-A. Moreau. Phase and amplitude imaging with quantum correlations through Fourier ptychography. Sci. Rep., 9, 10445(2019).
[25] M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, R. G. Baraniuk. Single-pixel imaging via compressive sampling. IEEE Signal Process. Mag., 25, 83-91(2008).
[26] U. S. Kamilov, A. Bourquard, A. Amini, M. Unser. One-bit measurements with adaptive thresholds. IEEE Signal Process. Lett., 19, 607-610(2012).
[27] R. A. Rojas, W. Luo, V. Murray, Y. M. Lu. Learning optimal parameters for binary sensing image reconstruction algorithms. IEEE International Conference on Image Processing (ICIP), 2791-2795(2017).
[28] Y. Mroueh, L. Rosasco. Quantization and greed are good: one bit phase retrieval, robustness and greedy refinements(2013).
[29] S. Mukherjee, C. S. Seelamantula. Phase retrieval from binary measurements. IEEE Signal Process. Lett., 25, 348-352(2018).
[30] J. Zhu, Q. Yuan, C. Song, Z. Xu. Phase retrieval from quantized measurements via approximate message passing. IEEE Signal Process. Lett., 26, 986-990(2019).
[31] A. M. S. Maallo, P. F. Almoro, S. G. Hanson. Quantization analysis of speckle intensity measurements for phase retrieval. Appl. Opt., 49, 5087-5094(2010).
[32] P. C. Konda, J. M. Taylor, A. R. Harvey. Multi-aperture Fourier ptychographic microscopy, theory and validation. Opt. Lasers Eng., 138, 106410(2020).
[33] P. E. Debevec, G. S. Owen, J. Malik, T. Whitted, B. Mones-Hattal. Recovering high dynamic range radiance maps from photographs. SIGGRAPH, 369-378(1997).
[34] S. Yang, H. Takajo. Quantization error reduction in the measurement of Fourier intensity for phase retrieval. Jpn. J. Appl. Phys., 43, 5747-5751(2004).
[35] A. Maiden, D. Johnson, P. Li. Further improvements to the ptychographical iterative engine. Optica, 4, 736-745(2017).
[36] A. M. Maiden, J. M. Rodenburg. An improved ptychographical phase retrieval algorithm for diffractive imaging. Ultramicroscopy, 109, 1256-1262(2009).
[37] S. Dong, Z. Bian, R. Shiradkar, G. Zheng. Sparsely sampled Fourier ptychography. Opt. Express, 22, 5455-5464(2014).
[38] http://www.cellimagelibrary.org/pages/datasets
[39] X. Yang. Dataset 1(2021).
[40] X. Yang. Quantized Fourier ptychography with binary images from SPAD cameras. Code 1(2021).
[41] Z. Bian, S. Dong, G. Zheng. Adaptive system correction for robust Fourier ptychographic imaging. Opt. Express, 21, 32400-32410(2013).
[42] A. Pan, Y. Zhang, T. Zhao, Z. Wang, D. Dan, M. Lei, B. Yao. System calibration method for Fourier ptychographic microscopy. J. Biomed. Opt., 22, 096005(2017).
[43] E. R. Fossum, J. Ma, S. Masoodian, L. Anzagira, R. Zizza. The quanta image sensor: every photon counts. Sensors, 16, 1260(2016).
[44] X. Ren, P. W. Connolly, A. Halimi, Y. Altmann, S. McLaughlin, I. Gyongy, R. K. Henderson, G. S. Buller. High-resolution depth profiling using a range-gated CMOS SPAD quanta image sensor. Opt. Express, 26, 5541-5557(2018).
[45] Canon successfully develops the world's first 1-megapixel SPAD sensor.
[46] L. Tian, L. Waller. 3D intensity and phase imaging from light field measurements in an LED array microscope. Optica, 2, 104-111(2015).
[47] R. Horstmeyer, J. Chung, X. Ou, G. Zheng, C. Yang. Diffraction tomography with Fourier ptychography. Optica, 3, 827-835(2016).
[48] K. C. Zhou, R. Horstmeyer. Diffraction tomography with a deep image prior. Opt. Express, 28, 12872-12896(2020).
[49] C. Kuang, Y. Ma, R. Zhou, J. Lee, G. Barbastathis, R. R. Dasari, Z. Yaqoob, P. T. C. So. Digital micromirror device-based laser-illumination Fourier ptychographic microscopy. Opt. Express, 23, 26999-27010(2015).
[50] R. Eckert, Z. F. Phillips, L. Waller. Efficient illumination angle self-calibration in Fourier ptychography. Appl. Opt., 57, 5434-5442(2018).
[51] J. Chung, H. Lu, X. Ou, H. Zhou, C. Yang. Wide-field Fourier ptychographic microscopy using laser illumination source. Biomed. Opt. Express, 7, 4787-4802(2016).