1. INTRODUCTION
Optical remote imaging is one of the most important means of remote sensing, and improving the resolution of space cameras has been a long-standing pursuit in high-resolution remote sensing. The angular resolution is defined as θ = 1.22λ/D, where λ is the center wavelength and D denotes the diameter of the imaging aperture [1]. According to this formula, increasing the aperture of a space camera, especially that of the primary mirror, is the most direct way to improve spatial resolution and extend the field of view (FOV). However, a larger aperture introduces more geometric aberrations, so additional optical surfaces are required to correct them, which in turn brings a series of problems such as a larger camera volume, increased cost, and higher launch risk [2]. Therefore, the aperture of a single mirror in a space camera based on the traditional incoherent imaging system cannot be increased indefinitely, and is further limited by manufacturing techniques.
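As a quick numerical illustration of this relation, the sketch below evaluates the diffraction-limited angular and ground resolution; the 532 nm wavelength, 5 mm aperture, and 10 m distance are illustrative values taken from the experiments later in the paper.

```python
# Diffraction-limited resolution of a circular aperture (Rayleigh criterion).
def angular_resolution(wavelength, aperture):
    """Angular resolution in radians for a given center wavelength and aperture (m)."""
    return 1.22 * wavelength / aperture

def ground_resolution(wavelength, aperture, distance):
    """Smallest resolvable feature (m) on a target at the given distance."""
    return angular_resolution(wavelength, aperture) * distance

# Illustrative values: 532 nm laser, 5 mm lens aperture, 10 m standoff.
theta = angular_resolution(532e-9, 5e-3)         # ~1.3e-4 rad
feature = ground_resolution(532e-9, 5e-3, 10.0)  # ~1.3 mm on the target
```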
A passive optical synthetic aperture camera can bypass the manufacturing of large-aperture mirrors, but it must keep the sub-apertures confocal and cophased. For example, the six sub-apertures of Golay6 must be kept strictly confocal and cophased at sub-wavelength accuracy to meet the imaging requirements [3,4], which places extremely high demands on phase detection and posture control. Moreover, the required platform stability is demanding, making the approach difficult to apply widely in engineering.
Synthetic aperture radar (SAR) is an active high-resolution radar technology: an antenna with picosecond-level temporal resolution directly measures the complex amplitude wave field of each sub-aperture, and this sub-aperture information is then stitched in the frequency domain to obtain a virtual large aperture with high resolution [5]. Efforts have been made to extend the synthetic aperture concept to the near-infrared band [6–8]. According to the spatial resolution formula, the imaging resolution could be further improved by realizing synthetic apertures in the visible band. However, optical frequencies are four to five orders of magnitude higher than microwave frequencies; compared with an antenna, an optical detector would need to record the complex amplitude wave field on a femtosecond timescale, which is far beyond the capability of modern imaging devices.
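The timescale gap can be made concrete by comparing the oscillation period of a microwave carrier with that of visible light; a minimal sketch, where the 3 cm carrier wavelength is an illustrative value, not taken from the text.

```python
# Period of one field oscillation: T = wavelength / c. A detector that records
# the complex wave field must resolve a fraction of this period.
C = 2.998e8  # speed of light, m/s

def wave_period(wavelength):
    """Oscillation period (s) of a wave with the given wavelength (m)."""
    return wavelength / C

microwave = wave_period(0.03)   # 3 cm microwave carrier -> ~1e-10 s (0.1 ns)
optical = wave_period(532e-9)   # 532 nm visible light   -> ~1.8e-15 s (fs)
```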
Fourier ptychographic microscopy (FPM) is a promising computational imaging technique invented by Zheng et al.
In 2014, Dong et al. reported aperture-scanning Fourier ptychography, extending the technique toward macroscopic imaging.
In this paper, we report an FP imaging scheme for long-range, large-FOV reflective imaging that employs a divergent spherical wave for illumination and camera scanning. The divergent spherical wave increases the illumination FOV and breaks the FOV limitation of convergent-wave schemes. We provide a rigorous theoretical analysis explaining the generation of speckles, which differs from traditional FPM implementations with quasi-plane wave illumination: in our reflective FP scheme, the random phase of the object, arising from its microscopically rough surface, is mixed with a spherical wave, so the captured images manifest as speckle patterns. As both resolution and speckle size are inversely proportional to aperture size, we demonstrate that by creating a synthetic aperture, our FP imaging scheme is able to reduce speckle size and improve resolution. We analyze the resolution limit of coherent imaging with speckles based on the Rayleigh criterion and quantitatively validate the conclusion on our experimental platform, which can be vital for predicting and evaluating the performance of practical coherent imaging systems. To further remove speckles in reconstruction, a despeckle algorithm based on the negative exponential distribution is presented, achieving PSNR values of around 25–30 dB in simulations. We experimentally performed 10 m stand-off FP imaging of a large object with a synthetic aperture of 31 mm; the imaging distance and FOV are increased by orders of magnitude compared with the SAVI method, and the proposed scheme can be further scaled to longer ranges.
2. MATERIALS AND METHODS
The configuration of our proposed FP imaging system is shown in Fig. 1(a). A divergent spherical wavefront produced by a laser source illuminates the diffuse object, and the reflected light from the object is then captured by a camera. The laser is moved to illuminate the object from different angles, and a sequence of raw images is recorded accordingly as shown in Fig. 1(b). The raw captured image is blurred and degraded due to the influence of large speckles. An image with a synthetic aperture can be reconstructed from these raw images via FP reconstruction. By creating a synthetic aperture, the FP reconstructed image greatly reduces the speckle size and improves the resolution as shown in Fig. 1(c). A speckle denoising algorithm is further operated to improve the quality of the reconstruction as shown in Fig. 1(d).
Figure 1.(a) Proposed scheme. The object is illuminated by a divergent laser beam to increase the FOV. The scattering from the object is recorded by the sensor via an imaging lens. As the numerical aperture of the imaging system is fixed, a limited resolution image is obtained on the sensor plane. (b) FP raw images. By shifting the laser source with an
Figure 2.Comparisons between two typical remote images via FP. (a), (b) Camera scanning and its FOV reduction. (c), (d) Laser scanning and its FOV reduction.
Figure 3.Comparisons among three kinds of illumination schemes for remote imaging via FP. (a) Convergent light illumination. (b) Quasi-plane wave illumination. (c) Divergent light illumination. For simplicity, the coordinate of the object’s transverse plane is
Figure 4. Proposed despeckle algorithm with flow chart (on the left) and despeckle simulation (on the right). (b) The speckle image is simulated from the (a) ground truth based on the negative exponential distribution. (d) The proposed method presents a more favorable result than the (c) BM3D algorithm in terms of both the quantitative SSIM and PSNR metrics and the visual quality.
A. Comparison of Different Illumination Schemes
There are generally two ways to realize relative spectrum shifting in FP imaging. The camera scanning scheme [Fig. 2(a)] relies on the fact that far-field diffraction corresponds mathematically to a Fourier transform. Unfortunately, in order to satisfy the far-field condition, the camera lens must be extremely small: for a 10 m imaging distance, the maximum diameter of the camera lens is only 1.6 mm, far smaller than commonly used lenses. In addition, scanning the camera produces a varying FOV and limits the effective FOV, as shown in Fig. 2(b).
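The 1.6 mm figure can be reproduced from the Fraunhofer far-field condition z ≥ 2D²/λ; this is a common form of the criterion, though the exact prefactor varies between references.

```python
import math

def max_farfield_aperture(wavelength, distance):
    """Largest lens diameter D (m) such that the Fraunhofer condition
    z >= 2 * D**2 / wavelength still holds at distance z (m)."""
    return math.sqrt(wavelength * distance / 2.0)

d_max = max_farfield_aperture(532e-9, 10.0)  # ~1.63e-3 m, i.e. about 1.6 mm
```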
Laser scanning is another approach widely employed in FP imaging, and it can be classified into three types, as shown in Fig. 3. Convergent light illumination [Fig. 3(a)] has been utilized in SAVI, and quasi-plane wave illumination [Fig. 3(b)] is common in FPM. Both schemes suffer from a limited FOV because converging or collimating optics are used in the optical path. Moreover, the shifting light source must illuminate the object from different angles, which implies that the generated light wave should cover the whole object area. These schemes are therefore impractical for macroscopic imaging at long distances (e.g., in remote sensing). With divergent light illumination, as shown in Fig. 3(c), both the imaging FOV and distance can be greatly increased since no extra optics are required; the achievable imaging distance is limited only by the illumination intensity and environmental disturbance. We adopt this illumination pattern, and the corresponding forward model is established in Section 2.B.
B. FP Forward Model and Reconstruction
As shown in Fig. 3(c), the laser source provides a divergent beam to illuminate the object. We can model the wavefront in front of the object as a spherical wave which, under the paraxial approximation, is given by S(x, y) = exp[jk(x² + y²)/(2d)], where k = 2π/λ is the wavenumber and d is the distance from the source to the object plane.
Using phase retrieval algorithms, FP recovers the missing phase under the constraint provided by the overlaps between sub-aperture spectra. Although extensive work has been devoted to the FP phase retrieval problem [13,27–30], the alternating projection (AP) method remains the most stable and widely used option. The concept of AP originated from the Gerchberg–Saxton (GS) algorithm, in which the magnitude constraints from the measured images are imposed on the field estimate in the image domain.
In the kth iteration, the current estimates of the object spectrum and the pupil function are denoted as O_k and P_k, respectively. For the ith image I_i, the corresponding sub-spectrum estimate is inversely Fourier transformed to form an estimate of the field, denoted as φ_i. We impose the magnitude constraint on φ_i by replacing its modulus with the measured magnitude √I_i while retaining its phase.
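The AP update loop described above can be sketched as follows; this is a minimal NumPy illustration under simplifying assumptions (known pupil, integer sub-spectrum positions), not the authors' implementation, and pupil recovery is omitted for brevity.

```python
import numpy as np

def fp_reconstruct(raw_images, centers, pupil_radius, hr_shape, n_iter=10):
    """Alternating-projection Fourier ptychography (GS-style magnitude constraint).

    raw_images : list of low-resolution intensity images, all the same shape
    centers    : (row, col) center of each sub-spectrum in the HR spectrum
    """
    h, w = raw_images[0].shape
    # Initialize the HR spectrum from the mean of the raw captures.
    O = np.fft.fftshift(np.fft.fft2(np.sqrt(np.mean(raw_images, axis=0)), s=hr_shape))
    # Circular pupil (coherent transfer function) on the low-resolution grid.
    yy, xx = np.mgrid[:h, :w]
    P = (np.hypot(yy - h // 2, xx - w // 2) <= pupil_radius).astype(complex)
    for _ in range(n_iter):
        for img, (cy, cx) in zip(raw_images, centers):
            sl = (slice(cy - h // 2, cy + (h + 1) // 2),
                  slice(cx - w // 2, cx + (w + 1) // 2))
            sub = O[sl] * P                       # crop sub-spectrum through pupil
            field = np.fft.ifft2(np.fft.ifftshift(sub))
            # Magnitude constraint: keep the phase, replace modulus by sqrt(I).
            field = np.sqrt(img) * np.exp(1j * np.angle(field))
            new_sub = np.fft.fftshift(np.fft.fft2(field))
            # Update the spectrum only inside the pupil support.
            O[sl] = O[sl] * (1 - P) + new_sub * P
    return np.fft.ifft2(np.fft.ifftshift(O))      # complex HR field estimate
```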
C. Speckle Denoising Algorithm
Speckle is not noise in the conventional sense: the random phase distorts the intensity field, and the seeming randomness of the speckle intensity manifests as "noise." The speckle intensity follows the negative exponential distribution [31], p(I) = (1/Ī)exp(−I/Ī), where Ī is the uncorrupted intensity, represented by the mean intensity. It is straightforward to establish a multiplicative noise model I = Ī·n, where n is the multiplicative speckle noise and satisfies E[n] = 1. Taking the logarithm of the multiplicative noise model transforms it into an additive noise model.
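The statistics behind this model are easy to verify numerically: unit-mean exponential multiplicative noise becomes additive in the log domain, with a fixed bias equal to minus the Euler–Mascheroni constant; a minimal sketch.

```python
import math
import random

random.seed(0)
# Fully developed speckle: I = Ibar * n with n ~ Exp(1), so E[n] = 1.
samples = [random.expovariate(1.0) for _ in range(200000)]
mean_n = sum(samples) / len(samples)                  # ~1.0 (unit-mean noise)

# In the log domain the noise is additive with a constant bias:
# E[ln n] = -gamma, the Euler-Mascheroni constant (~0.5772).
log_bias = sum(math.log(v) for v in samples) / len(samples)
```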
Here, we propose a novel speckle denoising algorithm; refer to Appendix B for the detailed derivation. The flowchart of the algorithm is shown in Fig. 4. We first take the logarithm of the speckle image. The logarithmic speckle noise is then estimated by subtracting the BM3D-denoised [32] logarithmic image from the noisy logarithmic image. The estimated noise is remapped by a piecewise transformation function (defined in Appendix B), and a new noisy image is constructed by adding the transformed noise back to the denoised logarithmic image. Denoising this new image yields the denoised logarithmic image. Finally, FP reconstruction is performed, and a denoised reconstructed image is obtained.
Figures 4(a)–4(d) show the simulation results of our speckle denoising algorithm and its comparison with the BM3D algorithm. The cameraman picture is chosen as the ground truth, and the speckles are produced based on the negative exponential distribution. We select the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as criteria to evaluate despeckling performance. Our algorithm yields a much cleaner background, demonstrating a stronger ability to suppress speckle.
3. RESULTS
A. Simulations
First, we conducted simulations to visualize and explain the FP reconstruction of the object, where a resolution target is used as the amplitude and three kinds of phase terms are considered. When illuminated by plane waves, the object contains no phase term. As shown in Fig. 5(a-1), the spectrum follows a nicely structured pattern with a peak at the DC component and decaying magnitudes at high spatial frequencies. The raw images comprise brightfield and darkfield images, corresponding to the DC component and high frequencies, respectively. Stitching the spectra from these raw images finally produces a high-resolution image. The quadratic phase term corresponds to the case where a spherical wave illuminates a transmissive object. One obvious difference from the no-phase case is that each raw image contains both a brightfield part and a darkfield part, as shown in Fig. 5(b-2), since the quadratic phase term introduces optical waves whose incident angles exceed the numerical aperture. Also, the FOV expands along with the improved resolution [Fig. 5(b-3)].
Figure 5.Comparison of FP-based synthetic aperture imaging with different phase terms. The resolution target is used as the amplitude. (a) Results when the phase term is zero. (b) Results when the phase term is quadratic phase. (c) Results when the phase term is a mixture of quadratic phase and random phase. (a-1)–(c-1) Fourier spectrum. (a-2)–(c-2) Raw image with small aperture. (a-3)–(c-3) Synthetic aperture.
Figure 5(c) shows our case, where the phase term is a mixture of random phase and quadratic phase. As seen from Eq. (5), the phase of the dummy object contains two parts: a quadratic phase from the optical path and imaging lens, and a random phase from the rough surface. The spectrum does not exhibit any meaningful structure since the random phase dominates the quadratic phase. As shown in Fig. 5(c-2), the raw images of the dummy object are covered with speckles, which degrade the image quality. By creating a large synthetic aperture, we can reduce the speckle size and recover a high-resolution image of the dummy object, as shown in Fig. 5(c-3). In conclusion, FP reconstruction enables high-fidelity recovery of phase information, even when the phase fluctuates as strongly as random phase. Although the amplitude is of most concern in coherent imaging, the phase term has a profound impact on the image when the aperture is limited.
B. Feasibility and Performance
We performed a 10 m standoff-distance experiment to demonstrate the performance of our scheme; the imaging distance is around 10 times that of the state-of-the-art SAVI method [21]. The experimental setup is shown in Fig. 6(a). The divergent beam is produced by a 532 nm single-mode laser source and a plano–concave lens. We employ a lens with 75 mm focal length and 5 mm aperture for imaging, and record the raw images with an image sensor (IMX178, 2.4 μm pixel pitch). In addition, a linear polarizer is placed before the camera to filter out the noninterfering light. We selected a large landscape painting as the object, part of which is imaged by our system. A 2D translation stage (Zolix, PSA050-11-X) shifts the laser source with a step size of 0.875 mm, resulting in an overlap rate of 82.5%. Here, the camera faces the object directly, and the scanning plane of the light source is perpendicular to the camera's normal direction. A grid of low-resolution images is collected, and the maximum synthetic aperture is 31 mm, about six times the lens aperture.
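The quoted overlap rate follows from simple sub-aperture geometry; the sketch below assumes the spectrum shift equals the source shift and a linear (1D) overlap definition, and the 27-position example is an illustrative scan length that reproduces the 27.75 mm aperture reported for the resolution target.

```python
def overlap_ratio(aperture, shift):
    """Linear overlap between adjacent sub-apertures of size `aperture`
    whose centers are separated by `shift` (same units)."""
    return 1.0 - shift / aperture

def synthetic_aperture(aperture, shift, n_positions):
    """Total extent spanned by n_positions sub-apertures spaced by `shift`."""
    return aperture + (n_positions - 1) * shift

ov = overlap_ratio(5.0, 0.875)           # 0.825 -> the quoted 82.5%
sa = synthetic_aperture(5.0, 0.875, 27)  # 27.75 mm for a 27-position scan
```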
Figure 6.(a) Experimental setup: 10 m standoff-distance super-resolution coherent imaging over the landscape painting. The target and the imaging setup are shown in (a-1) and (a-2), respectively. Note that the experiment is performed in the dark environment at night with the light off. The imaging area is about
A single capture from the imaging system is shown in Fig. 6(b). Due to the limited aperture, the raw image exhibits significant blur and diffraction. As the surface of the painting is rough, severe speckles can be observed, further degrading the image quality. As seen in Fig. 6(c), the reconstructed image provides much higher resolution, and the speckle size is decreased compared with the raw data. For example, we can clearly observe the fine structure of the building in Fig. 6(c1) and the leaf patterns in Fig. 6(c2), which are not resolvable in the raw captures. After applying the proposed speckle denoising algorithm, the speckles are smoothed out, resulting in better visual quality as shown in Fig. 6(d).
Next, we performed the experiment on a self-designed resolution target using the same experimental setup. The target contains several commonly used resolution charts, as shown in Fig. 7(a). A grid of low-resolution images is collected, and the maximum synthetic aperture is 27.75 mm. The raw image captured by the imaging system, shown in Fig. 7(b), is corrupted by large speckles. After FP reconstruction, the image resolution is greatly improved and the speckle size is reduced. The denoising algorithm effectively removes the speckles, improving the image contrast while retaining the resolution improvement of the standard FP algorithm. The results above constitute a qualitative analysis of resolution; the related quantitative performance is provided in the following section.
Figure 7.10 m standoff-distance super-resolution coherent imaging over the resolution target. (a) Ground truth of self-designed resolution target containing different resolution charts, where the bottom right target will further be used for qualitative analysis of resolution in Fig.
C. Resolution Analysis of Coherent Imaging with Speckles
The Rayleigh criterion has been widely used for resolution estimation in incoherent imaging: two incoherent point sources separated by the Rayleigh limit can just be resolved. The situation is more complex for coherent imaging because, although the complex field superposes linearly, the interference intensity is phase dependent [33,34]. In this case, two coherent point sources with the same phase will not be resolved at the Rayleigh separation, whereas they become fully resolved when the two sources have a phase difference of π.
For reflective coherent imaging, the random phase of a rough surface produces speckles, which degrade image quality and resolution. A conservative way to define the resolution limit is to consider the worst case: two point sources with the same phase, for which the profile of the superimposed intensity is easy to obtain. It can be calculated from the superimposed image that the two points are just resolved when the central dip's intensity is 81% of the maximum intensity on either side (see Appendix A); the corresponding angular separation defines the Rayleigh limit for coherent imaging under speckle.
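The two-point behavior described here can be checked numerically with the coherent (jinc) point spread function of a circular aperture; a sketch in normalized units, with J1 evaluated by numerical integration and the peak approximated by the intensity at one source location.

```python
import math

def j1(x, n=2000):
    """First-order Bessel function J1 via its integral representation,
    J1(x) = (1/pi) * integral_0^pi cos(t - x*sin(t)) dt (trapezoidal rule)."""
    h = math.pi / n
    s = 0.5 * (math.cos(0.0) + math.cos(math.pi))   # endpoint terms
    for k in range(1, n):
        t = k * h
        s += math.cos(t - x * math.sin(t))
    return s * h / math.pi

def coherent_psf(x):
    """Amplitude (jinc) PSF of a circular aperture in normalized units."""
    return 1.0 if abs(x) < 1e-12 else 2.0 * j1(x) / x

def dip_to_peak(sep, dphase):
    """Midpoint-to-source intensity ratio for two coherent points separated
    by `sep` (normalized units) with relative phase `dphase`."""
    rel = complex(math.cos(dphase), math.sin(dphase))
    i_mid = abs(coherent_psf(sep / 2) + coherent_psf(sep / 2) * rel) ** 2
    i_src = abs(coherent_psf(0.0) + coherent_psf(sep) * rel) ** 2
    return i_mid / i_src

RAYLEIGH = 3.8317  # first zero of 2*J1(x)/x
r_inphase = dip_to_peak(RAYLEIGH, 0.0)        # > 1: no dip, not resolved
r_antiphase = dip_to_peak(RAYLEIGH, math.pi)  # ~0: fully resolved
```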
To investigate the resolution performance of our method, we imaged a negative USAF target with white paint sprayed onto the chrome surface, as in the work of SAVI [21]. The imaging distance is reduced to 1 m due to the small size of the target. The target is imaged through the back of a glass plate to retain the high-resolution features. The laser source is translated with a step size of 1 mm, resulting in an overlap rate of 80%. A grid of low-resolution images is collected, and the maximum synthetic aperture is 35 mm.
The reconstruction results with SAs of 10 mm, 18 mm, and 35 mm are presented in Fig. 8(a). An increased synthetic aperture leads to higher resolution and smaller speckle size. We then process the reconstructed results with the speckle denoising algorithm; the imaging quality further improves with the removal of speckle patterns. Figure 8(b) shows close-ups of four typical patterns in the target. The blue dashed line demarcates resolvable features: features below this line can be clearly identified. The contrast metric is selected as the criterion to evaluate resolution performance, given by C = (Imax − Imin)/(Imax + Imin), where Imax and Imin denote the peak and dip intensities across a bar group.
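Assuming the Michelson form (Imax − Imin)/(Imax + Imin) for this metric, the 81% dip-to-peak ratio derived in Appendix A yields a contrast of about 0.1, consistent with the threshold used in the next paragraph; a minimal sketch.

```python
def michelson_contrast(i_max, i_min):
    """Michelson contrast between peak and dip intensities of a bar group."""
    return (i_max - i_min) / (i_max + i_min)

# At the coherent resolution limit of Appendix A the dip is 81% of the peak:
c_limit = michelson_contrast(1.0, 0.81)  # ~0.105, i.e. roughly 0.1
```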
Figure 8.USAF resolution target is used to characterize the resolution under different synthetic apertures with FP. (a) Raw image and examples of reconstructed images of SA 10 mm, SA 18 mm, and SA 35 mm. (b) Magnified regions of various bar groups in (a). The blue dashed line demarcates resolvable features. Features below this line are resolvable. (c) Contrast plots for the reconstructed images with
Moreover, we set a contrast threshold of 0.1 to determine the limit resolution of the reconstructed images, since the value of the contrast metric is around 0.1 at the Rayleigh criterion. The minimum resolvable line widths for the various SAs are then determined and marked by red stars in Fig. 8(d). The experimental resolution limits roughly agree with the curve of theoretical values.
Using the resolution bars at the bottom right of Fig. 7, we quantitatively demonstrate the resolution performance of our scheme; the results are shown in Fig. 9. The theoretical minimum resolvable line width is 1.7 mm. As seen in the raw image of Fig. 9(b), the bars are resolvable at a line width of 2 mm. FP reconstruction improves the resolution to a line width of 0.357 mm, at least 4.7 times finer than the raw image.
Figure 9.Resolution analysis of 10 m super-resolution coherent imaging. (a) Zoomed images of resolution bars from the right bottom of Figs.
D. Comparison of Different Illumination Schemes
We experimentally duplicated a 1 m standoff-distance SAVI setup to validate the FOV expansion of our method over the camera scanning scheme of SAVI. A convex lens (diameter: 2 in.; focal length: 300 mm) is inserted between the laser source and the target to compensate for the quadratic phase. As illustrated in Fig. 3(a), the illumination converges after the lens, so a light spot appears on the target plane. The spot size is confined by the physical size of the lens, which leads to the limited FOV of SAVI. The camera is mounted on the translation stage and moved to realize spectrum shifting; the distance between adjacent camera positions is 0.5 mm, ensuring an overlap rate of 78%. A grid of images is captured to produce a synthetic aperture of 14.84 mm. The experimental results of SAVI are shown in Figs. 10(d)–10(f). For a fair comparison, we keep the setup configuration exactly the same as SAVI, replacing only the compensation lens with a plano–concave lens (diameter: 6 mm) to produce a divergent beam. The experimental results of our method are shown in Figs. 10(a)–10(c). Our method demonstrates a significantly larger FOV (around six times) than SAVI.
Figure 10.Comparison of the SAVI and the proposed scheme. To make a fair comparison, similar parameters are used for the setups of both schemes. The camera with 2.34 mm aperture is placed 1 m away from the object. A grid of
4. DISCUSSION AND CONCLUSION
We demonstrated a reflective long-range FP imaging scheme that enables synthetic aperture imaging with a large FOV. Existing methods that image diffuse reflective objects with optically rough surfaces lack an underlying physical explanation of their principles. We established a forward model with a rigorous derivation to prove that FP applies to diffuse reflective objects, rather than simply suppressing speckle. Our 10 m standoff-distance experiment realized a 31 mm theoretical synthetic aperture with a large FOV. The imaging distance and FOV are increased by orders of magnitude compared with the SAVI method. We applied the proposed speckle denoising algorithm to simulated data, improving the PSNR to around 25–30 dB. In addition, we analyzed the resolution limit of coherent imaging with speckles based on the Rayleigh criterion and quantitatively validated the conclusion on our experimental platform, which can be vital for predicting and evaluating the performance of practical coherent imaging systems (e.g., laser imaging, laser display).
An important question that follows naturally is how our scheme can be extended to more complex application scenarios. When imaging at a much longer distance (e.g., 1 km), one main challenge lies in atmospheric turbulence [35], which may introduce unstable and fluctuating distortion to the illumination and reflection wavefronts. Distortion of the illumination wavefront should have a limited effect, since it is overwhelmed by the random phase, as discussed in the forward model of Section 2.B. Distortion of the reflection wavefront, on the other hand, can be problematic for long-range imaging such as astronomical imaging. The turbulence can be regarded as a wavefront error in the pupil function [36]; since FP reconstruction allows correction of aberrations, it becomes possible to reconstruct the pupil function with a turbulent wavefront. To tackle space-variant wavefront distortion, FP reconstruction can be performed on small image patches, within which the distortion can be approximated as space-invariant. In addition, the capture time can be minimized to freeze the distorted wavefront by using a camera array or laser array, i.e., reducing the impact of atmospheric disturbance by improving data collection efficiency. The effect of stray light is another problem well worthy of study and discussion in future work; possible solutions include filtering out light of certain wavelengths or using a lens hood to suppress stray light when necessary.
APPENDIX A: DERIVATION OF RESOLUTION LIMIT FOR COHERENT IMAGING WITH SPECKLE
In coherent scenarios, a single point A imaged through a circular aperture of diameter D produces the coherent point spread function h(x) = 2J₁(x)/x, where J₁ is the first-order Bessel function of the first kind and x = πDθ/λ is the normalized angular coordinate.
Another point B, with an angular displacement relative to point A, generates a shifted coherent point spread function that interferes with that of point A. The intensity of the two points' superimposed field is then given by
Figure 11.Superimposed intensity of two coherent points in the case of phase differences
Figure 12.Plot of central-to-peak ratio versus angular displacement
Figure 13.Simulated speckle image of double splits under various angular displacements between two splits. (a)
APPENDIX B: SPECKLE DENOISING ALGORITHM
According to the literature [31], the speckle intensity follows the negative exponential distribution.
We design a transformation function as follows:
Figure 14.Gaussian fitting for the probability distribution of
After the logarithmic noise is estimated, it is remapped by the transformation function. Then, by adding the transformed noise to the denoised logarithmic image, we construct a new noisy image:
References
[1] J. W. Goodman, Introduction to Fourier Optics (2005).
[31] J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications (2007).
[35] M. C. Roggemann, B. M. Welsh, and B. R. Hunt, Imaging through Turbulence (1996).