Abstract
Prior-free imaging beyond the memory effect (ME) is critical to seeing through scattering media. However, existing methods that exceed the ME range rely on the availability of prior information about the imaging targets. Here, we propose blind target position detection for large field-of-view scattering imaging. Exploiting only two multi-target near-field speckles captured at different imaging distances, the unknown number and locations of the isolated imaging targets are blindly recovered via the proposed scaling-vector-based detection. Autocorrelations are then calculated for the speckle regions centered at the derived positions via a low-cross-talk region allocation strategy. Working with the modified phase retrieval algorithm, the complete scene of the multiple targets exceeding the ME range can be reconstructed without any prior information. The effectiveness of the proposed algorithm is verified by tests on a real scattering imaging system.
1. INTRODUCTION
Scattering media are ubiquitous in daily life and possess non-uniform refractive index distributions. They disturb the light rays coming from imaging targets, which hinders direct analysis of the object information behind them using traditional optical systems. To overcome this challenge, novel methods based on the memory effect (ME) [1–4] now achieve non-invasive scattering imaging via speckle correlation [5–7]. Compared with other scattering imaging methods, such as ballistic-light-based approaches [8–13], wavefront shaping [14,15], and transmission matrix measurement [16–21], the speckle correlation technique realizes prior-free imaging with only conventional instruments and enables quick imaging in otherwise inaccessible scenarios. However, the field-of-view (FOV) of this method is limited by the ME range. Prior-free imaging beyond the ME range is critical to seeing through scattering media.
To exceed the ME range, several techniques have been proposed that introduce prior information about the isolated imaging targets. Li et al. introduced a position prior for each imaging target during point spread function (PSF) calibration [22]; Sahoo et al. used a wavelength prior for each imaging target during PSF calibration [23]; and Guo et al. exploited a shape prior or a PSF close to the imaging target to exceed the ME range [24]. Wang et al. proposed a dual-target non-invasive scattering imaging method via Fourier spectrum guessing and iterative energy-constrained compensation [25]. However, this method can only be used for dual-target separation, and the number of targets must be known prior to reconstruction. Boniface et al. achieved non-invasive target localization beyond the ME range by analyzing the speckle envelope of each target [26], but this localization method is unavailable when an unknown number of targets are illuminated simultaneously.
Here, we put forward a multi-target large FOV scattering imaging method based on blind target position detection. It blindly detects the unknown number and positions of the isolated targets using only two multi-target near-field speckles captured at different imaging distances. The theoretical scaling relationship between the two speckles is derived and demonstrated, and its scaling centers correspond to the positions of the imaging targets. Based on this derivation, scaling-vector-based target position detection is proposed, which recognizes the target positions using the length and direction information of the scaling vectors. After that, autocorrelations are calculated for speckle regions centered at the derived positions via the low-cross-talk region allocation strategy. Working with the modified phase retrieval algorithm [27] to select the optimal recovery without prior information, especially for autocorrelations with interference, the complete scene of multiple isolated targets exceeding the ME range is reconstructed. Experiments on a real imaging system demonstrate the effectiveness of the proposed algorithm for multi-target blind reconstruction through scattering media. Visually distinguishable reconstructions are experimentally achieved for whole scenes of multiple isolated targets exceeding the ME range. In principle, the degree to which the ME range is exceeded can be increased as long as the acquisition equipment can fully collect the scattered speckle over a larger FOV. Finally, we verify the accuracy of multi-target positioning and the capability of multi-target blind reconstruction, and experimentally analyze the limitations of the proposed method.
2. PRINCIPLE
The principle of the method is depicted in Fig. 1. Multiple isolated targets, $O_i$, hidden behind a scattering medium are excited with spatially incoherent illumination. Each imaging target falls within the ME range, while the spacing between any two targets exceeds it. The large FOV speckle image at Distance 1, $I_1$, which consists of the speckles $I_{1,i}$ produced by each target, is captured by a 2D camera array. The proposed imaging system addresses the situation in which the number and locations of the imaging targets cannot be read directly from the captured speckle; this position information is hidden in $I_1$. To extract it, a second speckle, $I_2$, is captured at another imaging distance, Distance 2.

The ME range, $\Delta\theta_{\mathrm{ME}}$, corresponds to the angular FOV of the diffuser within which points on the object plane produce random speckles with high correlation. For a spatially incoherent imaging system via speckle correlation, the ME range is constrained by $\Delta\theta_{\mathrm{ME}} \approx \lambda/(\pi L)$ [5], where $\lambda$ denotes the wavelength of the spatially incoherent light source and $L$ is the effective thickness of the scattering medium. Since the speckle patterns generated within the ME range are translationally invariant, the large FOV speckle of multiple targets recorded by the camera array at Distance 1 in Fig. 1 can be formulated as

$$I_1(\mathbf{u}) = \sum_{i=1}^{N} I_{1,i}(\mathbf{u}) = \sum_{i=1}^{N} \left[O_i * S_{1,i}\right](\mathbf{u}), \qquad (1)$$

and similarly for $I_2$, where $*$ denotes the convolution operation; $N$ denotes the total number of imaging targets and $O_i$ ($i = 1, \dots, N$) are the imaging targets; $S_{1,i}$ represents the translationally invariant point spread function at Distance 1 corresponding to the point where $O_i$ is located in the object plane; and $I_{1,i}$ denotes the speckle pattern generated by $O_i$ at imaging Distance 1.

The translational invariance of the PSF can be further characterized by the envelope of each PSF, which is normally removed during image processing and regarded as an obstacle to reconstruction [5]. The envelope varies with the distance away from the corresponding point of each PSF. Especially for multi-target scattering imaging, multiple envelopes couple the multi-target information with the hidden target position information, which increases the difficulty of imaging beyond the ME range. The intensity distribution of each PSF can be divided into two parts by

$$S_{1,i}(\mathbf{u}) = W_{1,i}(\mathbf{u}) \cdot \tilde{S}_{1,i}(\mathbf{u}), \qquad (2)$$

where $\mathbf{u}$ denotes the 2D coordinates in the sensor plane; $\mathbf{r}$ denotes the 2D coordinates in the object plane; and $\mathbf{r}_i$ is the location of $O_i$ in the object plane. $W_{1,i}$ denotes the envelope property (or energy distribution) of the PSF, which takes a high value when $\mathbf{u}$ is close to the sensor-plane projection of $\mathbf{r}_i$ and a relatively low value when $\mathbf{u}$ is far from it [1]; the same property holds for the PSF produced by a point-like source and for the speckle generated by a small object. $\tilde{S}_{1,i}$ denotes the system response at Distance 1 corresponding to $O_i$ after removing the envelope, and the autocorrelation of $\tilde{S}_{1,i}$ equals a sharp peak function [5]. Substituting Eq. (2) into Eq. (1), we have

$$I_1(\mathbf{u}) = \sum_{i=1}^{N} \left[O_i * \left(W_{1,i} \cdot \tilde{S}_{1,i}\right)\right](\mathbf{u}). \qquad (3)$$
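As an illustration only (not part of the original experiments), the forward model of Eqs. (1)–(3) can be sketched numerically. The following minimal Python sketch builds a two-target speckle as a sum of convolutions of small targets with envelope-weighted random speckle responses; the grid size, target shapes, envelope widths, and target locations are assumptions chosen purely for demonstration.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
H = W = 512                                     # sensor size in pixels (illustrative)

def speckle_psf(center, corr_px=3, env_sigma=120):
    """Synthetic PSF: a normalized speckle-like response (S~) times a Gaussian envelope (W)."""
    s = gaussian_filter(rng.standard_normal((H, W)), corr_px) ** 2   # crude speckle-like intensity
    s /= s.mean()                                                    # normalized response, Eq. (2)
    yy, xx = np.mgrid[0:H, 0:W]
    env = np.exp(-((yy - center[0]) ** 2 + (xx - center[1]) ** 2) / (2 * env_sigma ** 2))
    return env * s

def small_target(center, size=9):
    """A small bright square standing in for one isolated target O_i."""
    o = np.zeros((H, W))
    o[center[0] - size:center[0] + size, center[1] - size:center[1] + size] = 1.0
    return o

centers = [(150, 150), (360, 380)]               # assumed target locations (sensor-plane projections)
I1 = sum(fftconvolve(small_target(c), speckle_psf(c), mode="same") for c in centers)  # Eqs. (1)+(3)
print(I1.shape, I1.max())
```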
Figure 1. Schematic of our multi-target large FOV scattering imaging system via blind target position detection. Multiple isolated targets, $O_i$, behind the diffuser form a large FOV scene.
From Eq. (3), the number, the shapes, and the locations of the imaging targets are coupled together in the captured speckle. Reconstructing all the target shapes only from $I_1$ is therefore a problem with an infinite number of solutions, since there are so many unknown variables in Eq. (3). One effective way to simplify this problem and reduce the solution space is to detect the total number of targets and the location of each target without any prior information. Thus, the blind target position detection algorithm is proposed and introduced in the following.
We exploit a second near-field speckle, $I_2$, captured at another imaging distance for position detection. The theoretical scaling relationship between the two near-field speckles derived here indicates that certain areas (called scaling centers) in the near-field speckles do not scale with the imaging distance, and these scaling-invariant areas correspond to the locations of the imaging targets. To the best of our knowledge, no previous work has investigated the relationship between two near-field speckles at different imaging distances. Inspired by the existing statistical analysis of far-field speckles in 3D space [18,28,29] and the technique of using speckle correlation to improve axial sectioning [30], without loss of generality, we first analyze the scaling relationship between two near-field on-axis PSFs. With an ideal pinhole on the optical axis of the object plane as the corresponding point where one imaging target (taken as $O_1$) is located, using the Fresnel diffraction formula, the field on the front surface of the scattering medium, $U_f(\mathbf{s})$, is expressed by

$$U_f(\mathbf{s}) = \frac{e^{jku}}{j\lambda u}\exp\!\left(\frac{jk}{2u}\,|\mathbf{s}|^2\right), \qquad (4)$$

where we assume that the on-axis pinhole is located at $\mathbf{r}_1 = (0, 0)$; $\mathbf{s}$ denotes the 2D coordinates in the diffuser plane; $u$ is the object distance; and $k = 2\pi/\lambda$. Here, the scattering medium, regarded as an unknown random 2D phase disturbance $\exp[j\varphi(\mathbf{s})]$, is introduced into the propagation model. Using near-field Huygens diffraction theory, the light field on the first sensor plane, $U_1(\mathbf{u})$, can be expressed by

$$U_1(\mathbf{u}) = \frac{v_1}{j\lambda}\iint U_b(\mathbf{s})\,\frac{e^{jk\rho}}{\rho^{2}}\,\mathrm{d}\mathbf{s}, \qquad \rho = \sqrt{v_1^{2} + |\mathbf{u}-\mathbf{s}|^{2}}, \qquad (5)$$

where $U_b(\mathbf{s})$ equals $U_f(\mathbf{s})\exp[j\varphi(\mathbf{s})]$ and $v_1$ is imaging Distance 1. The scattered PSF shown in Eq. (2) at the Distance 1 plane, $S_1(\mathbf{u})$ (and likewise $S_2(\mathbf{u})$ at the Distance 2 plane), is the square of the magnitude of $U_1(\mathbf{u})$:

$$S_1(\mathbf{u}) = |U_1(\mathbf{u})|^{2}. \qquad (6)$$
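A minimal numerical sketch of Eqs. (4)–(6) is given below. It replaces the direct Huygens integral with an FFT-based angular spectrum propagator (a standard substitute, not the paper's stated implementation) and propagates an on-axis point source through a random phase screen to two sensor planes; the wavelength, pixel pitch, grid size, and distances are illustrative assumptions.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z using the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)            # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative parameters (not the paper's): 532 nm source, 8 um pixels, 512x512 grid, u = 120 mm.
wl, dx, n, u = 532e-9, 8e-6, 512, 0.12
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)

# Eq. (4): paraxial spherical wave from an on-axis pinhole reaching the diffuser plane.
U_f = np.exp(1j * np.pi / (wl * u) * (X ** 2 + Y ** 2))
phi = np.exp(1j * 2 * np.pi * np.random.rand(n, n))   # random 2D phase disturbance of the diffuser
U_b = U_f * phi

# Eqs. (5)-(6): propagate to two imaging distances and take intensities -> S1, S2.
S1 = np.abs(angular_spectrum(U_b, wl, dx, 0.10)) ** 2
S2 = np.abs(angular_spectrum(U_b, wl, dx, 0.11)) ** 2
```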
To describe the scaling relationship, we introduce the correlation function between the on-axis $S_1$ and $S_2$,

$$C(\alpha) = \mathrm{corr}\!\left[S_1(\mathbf{u}),\, S_2(\alpha\,\mathbf{u})\right], \qquad (7)$$

where $\alpha$ is a constant that represents the factor by which the spatial coordinates of $S_2$ are scaled. In theory, the optimal value $\alpha^{\ast}$ corresponds to the peak of the correlation function, i.e., the value at which $S_1$ and the scaled $S_2$ are the most relevant. However, based on the above formulas, multiple variables are coupled to each other in the near-field PSFs, and we cannot derive an analytical solution for $\alpha^{\ast}$ from the peak of the correlation function between $S_1$ and $S_2$. In this case, we simulate the PSFs via Eq. (6) and search for the most relevant areas by block matching to explore the statistical optimal solution of $\alpha$.
The simulated PSFs are generated as shown in Fig. 2, where the scattering process is created from a random phase mask based on the projection model [31,32]. A point light source is set at the on-axis position in the object plane as the corresponding point where $O_1$ is located. The simulated PSFs (normalized by removing the envelope) at the two imaging distances through the modeled scattering layer are shown in Figs. 2(a) and 2(b). Visually, the intensity distributions of the two PSFs are strongly correlated, as highlighted by the green rectangles. Inspired by the wavefront slopes used in adaptive optics [33,34], we introduce the scaling vector to explore the detailed relationship between the two imaging-distance PSFs, estimated by discrete block matching. We traverse $S_2$ to search for the area most relevant to a selected block in $S_1$, and the translational relation from the selected block in $S_1$ to the optimal block in $S_2$ is equivalent to the scaling vector from the center point of the selected block in $S_1$ to $S_2$. The block matching strategy can be expressed as

$$\mathbf{u}_2^{\ast} = \arg\max_{\mathbf{u}_2}\; \mathrm{corr}\!\left[B_1(\mathbf{u}_1),\, B_2(\mathbf{u}_2)\right], \qquad (8)$$

where $B_1(\mathbf{u}_1)$ denotes the selected block with $\mathbf{u}_1$ as its center in $S_1$; $B_2(\mathbf{u}_2)$ denotes the searching block with $\mathbf{u}_2$ as its center in $S_2$ to match $B_1(\mathbf{u}_1)$; and the matched block in $S_2$ is centered at $\mathbf{u}_2^{\ast}$. $\mathrm{corr}[\cdot,\cdot]$ calculates the correlation between two pixel blocks, and we use the cross correlation in this paper [5]. In this way, a scaling vector is defined as an arrow pointing from $\mathbf{u}_1$ in $S_1$ to $\mathbf{u}_2^{\ast}$ in $S_2$. To balance the computational complexity, we only utilize part of the discrete scaling vectors built by block matching, as shown in Fig. 2(c); the spacing between any two vectors, vertically or horizontally, is 20 pixels.
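A compact sketch of the block-matching step in Eq. (8) is shown below. The block half-size and search-window size are assumptions (the paper does not specify them here); only the 20 pixel grid spacing follows the text. For each block center on a coarse grid in S1, the best-correlated block in S2 is found by exhaustive search, and the displacement is stored as a scaling vector.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation between two equal-size blocks."""
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def scaling_vectors(S1, S2, half=32, step=20, search=10):
    """Estimate scaling vectors (Eq. (8)) by exhaustive block matching on a coarse grid."""
    H, W = S1.shape
    vectors = []                                     # entries: (y1, x1, dy, dx)
    for y in range(half + search, H - half - search, step):
        for x in range(half + search, W - half - search, step):
            ref = S1[y - half:y + half, x - half:x + half]
            best, best_d = -np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = S2[y + dy - half:y + dy + half, x + dx - half:x + dx + half]
                    c = ncc(ref, cand)
                    if c > best:
                        best, best_d = c, (dy, dx)
            vectors.append((y, x, best_d[0], best_d[1]))
    return np.array(vectors)
```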
Figure 2. Simulated experiments to analyze the relationship between two PSFs at different imaging distances. The point light source was set on the optical axis ($\mathbf{r}_1 = (0, 0)$), as the corresponding point where $O_1$ is located in the object plane. (a) Normalized $S_1$ at imaging Distance 1. (b) Normalized $S_2$ at imaging Distance 2. (c) The estimated low-density scaling vectors based on (a) and (b). The space between any two vectors vertically or horizontally is 20 pixels. The green rectangle in (b) is the matched block of the green rectangle in (a), and the enlarged arrow in (c) represents the estimated scaling vector corresponding to these two green rectangles. The blue point in (c) is the location of the light source. (d) The histogram distribution of $\alpha$ values extracted from all the scaling vectors in (c). Scale bar, 50 camera pixels.
Then we calculate the $\alpha$ value of each estimated scaling vector in Fig. 2(c), which equals the distance between $\mathbf{u}_2^{\ast}$ and the scaling center divided by the distance between $\mathbf{u}_1$ and the scaling center, as in Eq. (7). Theoretically, all the scaling vectors estimated by block matching with maximum cross correlation share the same $\alpha$ value, and the statistical analysis of Fig. 2(c) bears this out. The histogram of $\alpha$ values shown in Fig. 2(d) demonstrates that $\alpha$ stabilizes around the constant value 1.015, which meets the description of Eq. (7). Although the analytical solution of Eq. (7) is hard to derive, the scaling relationship between near-field PSFs does exist, as proved by the above statistical analysis. We tentatively conclude that the on-axis near-field scaling relationship between $S_1$ at the Distance 1 plane and $S_2$ at the Distance 2 plane can be expressed as

$$S_2(\alpha\,\mathbf{u}) \approx S_1(\mathbf{u}), \qquad (9)$$

which denotes that $S_2$ is most relevant to the coordinate-scaled $S_1$ with the constant scaling value $\alpha$. The on-axis scaling relationship in Eq. (9) can also be generalized to the off-axis case as

$$S_2\!\left(\mathbf{c}_i + \alpha(\mathbf{u} - \mathbf{c}_i)\right) \approx S_1(\mathbf{u}), \qquad (10)$$

where the corresponding point at which $O_i$ is located changes from the on-axis position $(0, 0)$ to an off-axis position, and $\mathbf{c}_i$ denotes the scaling center given by the location of $O_i$. As described in Eq. (10), in the off-axis case the scaling vector that starts at $\mathbf{u}_1$ in $S_1$ ends at $\mathbf{c}_i + \alpha(\mathbf{u}_1 - \mathbf{c}_i)$ in $S_2$. This reveals that the scaling vectors have two features representing the location of $O_i$, which is also regarded as the scaling center from Distance 1 to Distance 2. First, the length of each scaling vector is proportional to the distance between the scaling vector and the scaling center. Second, the scaling center falls on the line defined by each scaling vector, and each scaling vector points away from the scaling center (if $\alpha > 1$).
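For the simulation, where the scaling center is known, the per-vector $\alpha$ value can be computed as follows; this sketch consumes the output of the hypothetical `scaling_vectors` helper above and is an assumption-laden illustration, not the paper's code.

```python
import numpy as np

def alpha_values(vectors, center):
    """alpha per scaling vector: |u2 - c| / |u1 - c| (cf. Eq. (10)), for a known scaling center c."""
    u1 = vectors[:, :2].astype(float)                 # block centers in S1: (y, x)
    u2 = u1 + vectors[:, 2:].astype(float)            # matched centers in S2
    d1 = np.linalg.norm(u1 - center, axis=1)
    d2 = np.linalg.norm(u2 - center, axis=1)
    valid = d1 > 1e-6                                  # skip vectors sitting on the center itself
    return d2[valid] / d1[valid]

# e.g. alphas = alpha_values(scaling_vectors(S1, S2), center=np.array([256.0, 256.0]))
# print(np.median(alphas))   # in the paper's simulation this statistic settles near 1.015
```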
Under spatially incoherent illumination, the speckle generated by one imaging target (taken as $O_i$) is the linear superposition of many highly correlated PSFs. In fact, each PSF possesses a different scaling center corresponding to one point belonging to the imaging target. Considering the schematic of the proposed scattering imaging system in Fig. 1, the size of each imaging target falls within the ME range, but the spacing between any two targets exceeds it. All the PSFs that form a single-target speckle therefore possess approximately the same scaling center, located around the imaging target. Equation (11) expresses the scaling relationship between the single-target near-field speckles:

$$I_{2,i}\!\left(\mathbf{c}_i + \alpha\,\Delta\mathbf{u}\right) \approx I_{1,i}\!\left(\mathbf{c}_i + \Delta\mathbf{u}\right), \qquad (11)$$

where $\mathbf{c}_i$ represents the scaling center of these two speckles, which equals the location of $O_i$, and $\Delta\mathbf{u}$ denotes the 2D coordinate difference between the scaling vector and the scaling center. Theoretically, the aforementioned features relating scaling vectors and scaling centers for PSFs still apply to the scaling vectors estimated from the near-field speckles $I_{1,i}$ and $I_{2,i}$.
After that, the scaling-vector-based detection algorithm, as shown in Fig. 3, is proposed based on Eq. (11) for the multiple targets in Fig. 1 under a spatially incoherent light source. First, the scaling vectors are estimated by block matching, as in Eq. (8), from the two imaging-distance multi-target speckles $I_1$ and $I_2$: we traverse $I_2$ to search for the area most relevant to each selected block in $I_1$. The density of the estimated scaling vectors, vertically or horizontally, can be adjusted according to the speckle resolution. With multiple targets, multiple scaling centers exist among the estimated scaling vectors. Since the number of scaling centers is unknown, we use the length information of each estimated scaling vector to determine the regions where targets may be located: any position whose scaling vector length is below a certain threshold is listed as a possible region. Next, connected component analysis (8-connected) [35] is applied to cluster these regions, and the number of connected components equals the number of imaging targets. The algorithm then chooses the area with the minimum scaling vector length in each connected component as the rough location of each target. Finally, the direction information of the scaling vectors is used to refine the row and column of each rough location, because the line defined by each scaling vector belonging to one connected component theoretically passes through the target position in that component. The proposed scaling-vector-based detection algorithm, assisted by the block matching method and connected component analysis, thus achieves blind target position detection, exploiting only two multi-target near-field speckles captured at different imaging distances. The detected position information includes the number, $N$, and the locations of the imaging targets corresponding to the object plane, $\mathbf{r}_i$, shown as the blue points in Section 3.
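A minimal sketch of this detection step is given below. It thresholds scaling-vector lengths, clusters them with 8-connected components, and takes the minimum-length cell of each component as a rough target location; the threshold value is an assumption, the grid shape must match the block-matching grid used above, and the paper's direction-based refinement is omitted for brevity.

```python
import numpy as np
from scipy import ndimage

def detect_targets(vectors, grid_shape, step=20, thresh=2.0):
    """Blind target detection sketch: threshold scaling-vector lengths, cluster with
    8-connected components, and take the minimum-length cell of each component as a
    rough target location (the direction-based refinement of the paper is omitted)."""
    lengths = np.hypot(vectors[:, 2], vectors[:, 3]).reshape(grid_shape)
    mask = lengths < thresh                                   # candidate regions near scaling centers
    labels, n_targets = ndimage.label(mask, structure=np.ones((3, 3)))  # 8-connectivity
    centers = []
    for k in range(1, n_targets + 1):
        idx = np.argwhere(labels == k)
        best = idx[np.argmin(lengths[labels == k])]           # grid cell with the shortest vector
        centers.append(best * step)                           # back to approximate pixel coordinates
    return n_targets, np.array(centers)
```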
Figure 3. The block diagram of the scaling-vector-based detection algorithm.
After target position detection, the low-cross-talk region allocation strategy is proposed to extract the autocorrelation of each target, in order to simplify the infinite-solution problem in Eq. (3). We select a small square region of the captured speckle, $R_1(\mathbf{u})$, in $I_1$ for autocorrelation calculation, centered at one detected target location ($\mathbf{c}_1$ as an example) with side length $b$:

$$R_1(\mathbf{u}) = I_1(\mathbf{u}), \qquad \mathbf{u} \in \Omega_1 = \left\{\mathbf{u} : \left\|\mathbf{u} - \mathbf{c}_1\right\|_{\infty} \le b/2\right\}. \qquad (12)$$

Considering the envelope properties of the speckle, the selected region determines the weight of each imaging target in $R_1$:

$$R_1(\mathbf{u}) \approx \sum_{i=1}^{N} w_i \left[O_i * \tilde{S}_{1,i}\right](\mathbf{u}), \qquad \mathbf{u} \in \Omega_1, \qquad (13)$$

with $w_1 \gg w_{i \neq 1}$, since the envelope $W_{1,1}$ is large over the region centered at $\mathbf{c}_1$ while the envelopes of the other targets are small there.

This weight difference is doubled (the weights are squared) when $R_1$ is transferred into the autocorrelation domain [25]. Using the fact that the autocorrelation of each $\tilde{S}_{1,i}$ is a sharp peak function,

$$\left[R_1 \star R_1\right](\Delta\mathbf{u}) \approx \sum_{i=1}^{N} w_i^{2}\left[O_i \star O_i\right](\Delta\mathbf{u}), \qquad (14)$$

where $\star$ denotes the autocorrelation operation; $R_1$ denotes the selected square region in $I_1$ belonging to $O_1$; and $w_i$ is spatially constant when $b$ is much smaller than the whole speckle, describing the remaining weight for $O_i$ caused by the envelope. Via the doubled weight difference, the autocorrelation of $O_1$ can be extracted from the autocorrelation of the selected region in $I_1$ centered at the location of $O_1$. Repeating the above steps $N$ times, once for each detected target location, the autocorrelation of each imaging target is obtained with low cross talk from the other autocorrelations, as shown in Eq. (14).
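The region allocation and autocorrelation step of Eqs. (12)–(14) can be sketched as follows. The 400 pixel window matches the value reported in the discussion, while the Gaussian envelope-removal width is an assumption; this is an illustrative sketch, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def region_autocorr(I1, center, b=400, env_sigma=50):
    """Crop a b x b region of the speckle centered at one detected target location,
    remove the slowly varying envelope, and return its autocorrelation (cf. Eq. (14))."""
    cy, cx = int(center[0]), int(center[1])
    R = I1[cy - b // 2:cy + b // 2, cx - b // 2:cx + b // 2].astype(float)
    R = R / (gaussian_filter(R, env_sigma) + 1e-12)           # envelope-normalized region
    R = R - R.mean()
    F = np.fft.fft2(R)
    ac = np.fft.fftshift(np.fft.ifft2(np.abs(F) ** 2).real)   # Wiener-Khinchin autocorrelation
    return ac / ac.max()

# Usage: one autocorrelation per detected target, e.g.
# acs = [region_autocorr(I1, c) for c in centers]
```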
However, the extracted autocorrelation of each imaging target does, in theory, carry some interference from the other autocorrelations, which leads to an unstable output after the traditional phase retrieval algorithm with a random phase as the initial input. To improve the stability of reconstruction, the modified phase retrieval algorithm is applied in this paper, especially for autocorrelation signals with interference. First, we blindly reconstruct a number of object images by the "hybrid input-output" and "error-reduction" algorithms with different random initial phases [27]. Then, for each object image, the part whose intensity is less than 20% of the maximum intensity of that image is regarded as unstable noise and set to zero. Finally, the object image with the least change in the autocorrelation domain between the unprocessed and processed versions is taken as the final optimal output.
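The selection rule of this modified phase retrieval can be sketched as below. The candidate reconstructions are assumed to come from a separate HIO+ER routine (the placeholder `phase_retrieve` is hypothetical and not shown); only the 20% thresholding and the least-autocorrelation-change selection are implemented here.

```python
import numpy as np

def autocorr(img):
    """FFT-based autocorrelation used to compare candidates in the autocorrelation domain."""
    f = np.fft.fft2(img - img.mean())
    ac = np.fft.fftshift(np.fft.ifft2(np.abs(f) ** 2).real)
    return ac / (np.abs(ac).max() + 1e-12)

def select_reconstruction(candidates, noise_frac=0.20):
    """From several phase-retrieval outputs (e.g., HIO+ER runs with random initial phases),
    zero out pixels below noise_frac of each image's maximum, then keep the candidate whose
    autocorrelation changes least between the raw and thresholded versions."""
    best_img, best_change = None, np.inf
    for img in candidates:
        cleaned = np.where(img >= noise_frac * img.max(), img, 0.0)
        change = np.linalg.norm(autocorr(cleaned) - autocorr(img))
        if change < best_change:
            best_img, best_change = cleaned, change
    return best_img

# Usage (phase_retrieve is a hypothetical HIO+ER implementation, e.g. following Fienup [27]):
# candidates = [phase_retrieve(ac_target, seed=s) for s in range(20)]
# recon = select_reconstruction(candidates)
```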
Working with the modified phase retrieval algorithm, each target can be reconstructed with the help of the detected number and locations of the imaging targets. These reconstructed targets are then placed at their detected positions to form a complete scene of the multiple targets exceeding the ME range, without any prior information. Real experiments verify that the approximations made in this section are acceptable and that the extracted position information is sufficient for a visually distinguishable reconstruction. In addition, we discuss the limitations of the proposed algorithm in the next section.
3. EXPERIMENTAL DEMONSTRATION
In this section, we first describe the optical setup of the scattering imaging system, then detail the tests on a real scattering imaging system, and finally analyze the limitations of the proposed algorithm.
A. Experimental Setup
The multi-target large FOV scattering imaging system setup via blind target position detection is shown in Fig. 4, which is extended from the single-shot scattering system proposed by Katz et al. [5]. A narrow-bandwidth 532 nm single-frequency CW laser (Cobolt Samba 100) serves as the light source, whose coherence is attenuated by a rotating ground glass. A Thorlabs Optics 220-grit diffuser with an effective ME range of 16.6 mrad is placed between the multiple targets and the sensor plane [25]. We used a single CMOS camera (FLIR) on a 3D moving platform (DHC) to capture the large FOV speckles at different imaging distances.
Figure 4. Multi-target large FOV scattering imaging system setup via blind target position detection.
B. Tests on a Real Scattering Imaging System
First, various multiple targets were tested to demonstrate the effectiveness of the proposed algorithm. The left column of Fig. 5 shows the test on the mask "2FL" and the right column shows the test on a larger and more complex scene, "01234." Figures 5(a) and 5(f) describe the detailed parameters of the multiple targets. The minimum distance between any two targets is 3.5 mm, and each target size is within 0.5 mm. In this part, the object distance $u$ equals 120 mm. The total scene size is much larger than the ME range (about 2 mm in the object plane), but each target size is smaller than it. The real multi-target large FOV near-field speckles through the scattering medium at different imaging distances are captured via the 3D moving camera: Figs. 5(c) and 5(h) are captured at imaging Distance 1, and Figs. 5(d) and 5(i) at imaging Distance 2. Visually, no prior information (including the number and locations of the imaging targets) can be seen directly from these multi-target superimposed speckles. The red arrows in Fig. 5(e) show all the scaling vectors estimated by the block matching method from the near-field speckles for mask "2FL," which describe the scaling relationship between these two imaging-distance speckles. Obviously, three scaling centers exist among these scaling vectors. Meanwhile, Fig. 5(e) shows the extracted connected components for mask "2FL" in the bottom right, and the blue points denote the final detected multi-target locations. The number of extracted connected components matches the number of imaging targets, so this multi-target scattering imaging problem reduces to the process of reconstructing three objects. After that, the autocorrelation of each imaging target, as shown in Fig. 5(d), is reconstructed via the operations in Eq. (14) within the selected speckle region centered at the detected positions in Fig. 5(e). Working with the modified phase retrieval algorithm, each imaging target is reconstructed successfully and then placed at its location to form a complete scene, as shown in Fig. 5(b), which is visually close to the original one. The process of Figs. 5(h)–5(j) for target "01234" is the same as the process of Figs. 5(c)–5(e). The objective similarity evaluations between the reconstructions and the original targets by peak signal-to-noise ratio (PSNR) are shown in Table 1, and the detected target locations are shown in Figs. 5(b) and 5(g). Compared with the original scenes in Figs. 5(a) and 5(f), these reconstructed results demonstrate that the multiple targets can be accurately positioned by scaling-vector-based position detection and that the proposed method achieves multi-target large FOV blind reconstruction through scattering media in a real imaging system.
Figure 5. Tests on a real scattering imaging system. (a) The multi-target mask "2FL" with detailed parameters as the imaging targets. (b) The final large FOV reconstruction with the detected position information. (c) The captured near-field speckle at imaging Distance 1. (d) The captured near-field speckle at imaging Distance 2 and the extracted autocorrelation of each imaging target centered at the detected locations in (e). (e) The estimated scaling vectors (shown as red arrows) by block matching and the detected locations (shown as blue points). The connected component analysis result is shown in the bottom right at a smaller scale. (f)–(j) As in (a)–(e) for a larger and more complex scene, "01234." Scale bar, 50 camera pixels.
Scenes | Targets | PSNR (dB) | Averaged PSNR (dB)
2FL (3.5 mm) | 2 | 17.6459 | 18.4545
2FL (3.5 mm) | F | 18.5682 | 18.4545
2FL (3.5 mm) | L | 19.1494 | 18.4545
01234 (3.5 mm) | 0 | 17.9687 | 19.7870
01234 (3.5 mm) | 1 | 24.1154 | 19.7870
01234 (3.5 mm) | 2 | 18.4623 | 19.7870
01234 (3.5 mm) | 3 | 18.3745 | 19.7870
01234 (3.5 mm) | 4 | 20.0140 | 19.7870
Table 1. PSNRs Between Reconstructions and Targets
Second, to test the applicability of our proposed method for large FOV fluorescent biological observation through scattering media, neuron-shape scattering imaging experiments were conducted. The neuron-shape mask, scaled from real dendrites of hippocampal neurons [36], has complex target shapes and irregularly distributed target locations, which increases the difficulty of reconstruction but matches the requirements of practical scattering scenes. The mask was set as the imaging targets in the proposed scattering imaging system, with the other experimental conditions consistent with Fig. 5. Figure 6(a) shows the neuron-shape mask and its detailed parameters. The reconstructed whole scene via the proposed method is shown in Fig. 6(b), and the main features are faithfully recovered. As before, no prior information can be seen directly from the captured large FOV speckle shown in Fig. 6(c), which was recorded at the Distance 1 plane. In principle, the presented millimeter-scale experiments can be scaled to micrometer- or meter-scale scenarios for scattering imaging exceeding the ME range.
Figure 6. Real tests for biological scattering observation. (a) The neuron-shape mask with detailed parameters as the imaging targets. (b) The final reconstructed scene. (c) The captured near-field speckle at imaging Distance 1. Scale bar, 50 camera pixels.
C. Analyzing the Limitations
When the multiple targets are separated from each other as shown in Fig. 1, the proposed method blindly achieves multi-target localization and reconstruction exceeding the ME range. However, how should we define the effective spacing between any two targets beyond which the scaling-vector-based detection and scattering imaging algorithm works for multi-target speckles? To evaluate this limitation, we adjust the spacing between any two targets of mask "2FL" in Fig. 7(a) from 3.25 mm to 1.5 mm. Apart from the spacing, the other parameters remain the same as in Fig. 5(a), with the effective ME range remaining 2 mm in the object plane and the size of each imaging target within 0.5 mm. In this experimental environment, the reconstructed targets and locations are clearly visually distinguishable when the spacing changes from 3.25 mm to 2.25 mm, as shown in Fig. 7(b). The estimated scaling vectors and detected locations with a spacing of 2.75 mm are shown in Fig. 7(d) as an example of a good-quality reconstruction. As the spacing between any two targets drops below 2.25 mm, the reconstructions become visually distorted and the quality degrades. Meanwhile, the proposed blind position detection algorithm estimates the wrong number and wrong locations of the imaging targets from the near-field speckles, as shown in Fig. 7(e) for a spacing of 1.75 mm, which misleads the subsequent reconstruction process. Objectively, the three-target averaged PSNR curve between the reconstructions and the original targets with respect to the decreasing spacing is provided in Fig. 7(c).

Theoretically, the autocorrelation of one selected target, as in Eq. (14), is increasingly contaminated by the other target autocorrelations as the spacing decreases, which degrades the final reconstruction after phase retrieval. Meanwhile, according to Eq. (13), the captured multi-target speckle region centered at one detected target location ($\mathbf{c}_1$ as an example) is mainly composed of the speckle generated by $O_1$, while the speckles generated by the other imaging targets make only a small contribution to this region. Therefore, the scaling vectors in this region regard the location of $O_1$ as the scaling center, and multiple different scaling centers exist between the two imaging-distance speckles, corresponding to the locations of the different imaging targets. This is the reason why the proposed blind position detection algorithm can extract multi-target positions from two large FOV imaging-distance speckles, and Figs. 5(e) and 5(j) show the obvious boundaries of the speckle regions serving different imaging targets. However, with decreasing spacing, the weight differences in Eq. (13) are reduced, and the scaling vectors in the speckle region centered at the location of $O_1$ are increasingly perturbed by the other imaging targets. This results in low-accuracy target localization and even a wrong number of identified imaging targets.
Figure 7. Real reconstructions for mask "2FL" as the spacing decreases from 3.25 mm to 1.5 mm. (a) The original imaging targets with detailed distance parameters. (b) The final reconstructed large FOV scenes corresponding to (a). (c) The averaged PSNR curve between reconstructions and original targets with respect to the decreasing spacing. (d) The estimated scaling vectors and locations when the spacing equals 2.75 mm, as an example of a good-quality reconstruction. (e) The estimated scaling vectors and locations when the spacing equals 1.75 mm, as an example of a degraded reconstruction.
As for the degree to which the proposed algorithm can exceed the ME range, it is not restricted in theory, provided that the camera array can capture the full speckles as the number of imaging targets increases.
4. CONCLUSION AND DISCUSSION
To summarize, we developed a multi-target large FOV scattering imaging method based on blind target position detection. This technique exploits only two multi-target near-field speckles captured at different imaging distances, from which the target positions cannot be seen directly. A major advantage of our approach is that the target position information, including the number and the locations of the imaging targets, can be blindly recovered via the scaling-vector-based detection algorithm. After that, autocorrelations are calculated for speckle regions centered at the derived positions via the low-cross-talk region allocation strategy. Working with the modified phase retrieval algorithm, the whole scene of the multiple isolated targets exceeding the ME range can be successfully recovered. Unlike other methods that exceed the ME range, no prior information about the targets is required, which makes our technique more applicable for prior-information-free large FOV imaging; other scattering imaging techniques can also cooperate with our method to further improve performance [25,26]. The real scattering imaging experiments demonstrate the effectiveness of the proposed method.
The detected target position information does contain some errors compared with the initial multi-target positions. These errors come from three main sources: (1) discrete sampling during camera acquisition and image processing; (2) the approximations in the theoretical derivation from Eq. (9) to Eq. (11), made to simplify the scattering model; and (3) a certain divergence angle of the experimental illumination, which makes the target positions shift slightly from the object plane to the sensor plane. These errors did not affect the effectiveness of the proposed algorithm for target localization and reconstruction. On the other hand, the minimal effective spacing between any two targets for the proposed algorithm is limited not only by the target size and the properties of the scattering medium, but also by the distance between the scattering medium and the camera sensor, and even by the shapes of the imaging targets. Deeper research will focus on this problem in the future.
Additionally, the main time consumption of our proposed algorithm lies in estimating the scaling vectors from the two imaging-distance speckles, and it increases with the FOV of the captured speckles. At the resolution of the captured speckles in Figs. 5(c) and 5(d), the side length of the selected square region ($b$) is set to 400 pixels, and the total time consumption in MATLAB 2018b is about 6.1 h. In addition, in the image processing stage, the raw captured multi-target speckle with its slowly varying envelope is normalized by dividing the raw speckle by a low-pass-filtered version of itself before calculating autocorrelations [5]. In particular, in the process of estimating scaling vectors, the normalized speckle is further smoothed by a low-pass filter to remove camera noise, which is spatially invariant at different imaging distances, and to improve the localization accuracy.
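The normalization described here can be sketched as follows. The paper does not specify the filter type or widths, so the Gaussian filter and its sigmas are assumptions; this is an illustrative preprocessing sketch, not the authors' code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalize_speckle(raw, env_sigma=50, smooth_sigma=2):
    """Remove the slowly varying envelope by dividing the raw speckle by a low-pass-filtered
    copy of itself [5], then lightly smooth to suppress spatially invariant camera noise
    before estimating scaling vectors."""
    raw = raw.astype(float)
    envelope = gaussian_filter(raw, env_sigma)
    flat = raw / (envelope + 1e-12)
    return gaussian_filter(flat, smooth_sigma)
```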
Finally, the imaging distances $v_1$ and $v_2$ are not fixed in the proposed method, nor is the gap between the two near-field speckles, $\Delta v = v_2 - v_1$; moreover, these distance parameters were not used as known information for reconstruction in this paper. In practice, the imaging distance can be adjusted to satisfy the requirements of different scenes, as long as the near-field scaling relationship between the two imaging-distance speckles holds, and light field techniques [37] may be applied to the proposed system to speed up the acquisition process. Meanwhile, the relationship between the parameter $\alpha$ and multiple variables (including the wavelength, the imaging distances, and the TM) via statistical analysis will be part of our future research, aiming to extract more useful information from two near-field speckles at different imaging distances. Furthermore, the proposed method should, in theory, still work when the imaging targets are sandwiched between two scattering layers or imaged around corners [5], which will be tested experimentally in future work. The raw data used to generate the results presented in this manuscript are available at https://cloud.tsinghua.edu.cn/d/296c066dbcc243839f52/.
Acknowledgment
We would like to thank Xiangsheng Xie for helpful discussions.
References
[1] J. W. Goodman. Speckle Phenomena in Optics: Theory and Applications(2007).
[2] S. Feng, C. Kane, P. A. Lee, A. D. Stone. Correlations and fluctuations of coherent wave transmission through disordered media. Phys. Rev. Lett., 61, 834-837(1988).
[3] I. Freund, M. Rosenbluh, S. Feng. Memory effects in propagation of optical waves through disordered media. Phys. Rev. Lett., 61, 2328-2331(1988).
[4] I. Freund. Looking through walls and around corners. Phys. A, 168, 49-65(1990).
[5] O. Katz, P. Heidmann, M. Fink, S. Gigan. Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations. Nat. Photonics, 8, 784-790(2014).
[6] J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, A. P. Mosk. Non-invasive imaging through opaque scattering layers. Nature, 491, 232-234(2012).
[7] X. Yang, Y. Pu, D. Psaltis. Imaging blood cells through scattering biological tissue using speckle scanning microscopy. Opt. Express, 22, 3405-3413(2014).
[8] L. Wang, P. P. Ho, C. Liu, G. Zhang, R. R. Alfano. Ballistic 2-D imaging through scattering walls using an ultrafast optical Kerr gate. Science, 253, 769-771(1991).
[9] D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, J. G. Fujimoto. Optical coherence tomography. Science, 254, 1178-1181(1991).
[10] W. Denk, J. H. Strickler, W. W. Webb. Two-photon laser scanning fluorescence microscopy. Science, 248, 73-76(1990).
[11] G. Satat, M. Tancik, R. Raskar. Towards photography through realistic fog. IEEE International Conference on Computational Photography, 1-10(2018).
[12] S. Kang, S. Jeong, W. Choi, H. Ko, T. D. Yang, J. H. Joo, J. S. Lee, Y. S. Lim, Q. H. Park, W. Choi. Imaging deep within a scattering medium using collective accumulation of single-scattered waves. Nat. Photonics, 9, 253-258(2015).
[13] V. Ntziachristos. Going deeper than microscopy: the optical imaging frontier in biology. Nat. Methods, 7, 603-614(2010).
[14] I. M. Vellekoop, A. P. Mosk. Focusing coherent light through opaque strongly scattering media. Opt. Lett., 32, 2309-2311(2007).
[15] A. P. Mosk, A. Lagendijk, G. Lerosey, M. Fink. Controlling waves in space and time for imaging and focusing in complex media. Nat. Photonics, 6, 283-292(2012).
[16] E. Edrei, G. Scarcelli. Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media. Sci. Rep., 6, 33558(2016).
[17] H. Zhuang, H. He, X. Xie, J. Zhou. High speed color imaging through scattering media with a large field of view. Sci. Rep., 6, 32696(2016).
[18] X. Xie, H. Zhuang, H. He, X. Xu, H. Liang, Y. Liu, J. Zhou. Extended depth-resolved imaging through a thin scattering medium with PSF manipulation. Sci. Rep., 8, 4585(2018).
[19] S. Popoff, G. Lerosey, M. Fink, A. C. Boccara, S. Gigan. Image transmission through an opaque material. Nat. Commun., 1, 81(2010).
[20] M. Mounaix, H. B. Aguiar, S. Gigan. Temporal recompression through a scattering medium via a broadband transmission matrix. Optica, 4, 1289-1292(2017).
[21] G. Kim, R. Menon. Computational imaging enables a see-through lens-less camera. Opt. Express, 26, 22826-22836(2018).
[22] L. Li, Q. Li, S. Sun, H. Z. Lin, W. T. Liu, P. X. Chen. Imaging through scattering layers exceeding memory effect range with spatial-correlation-achieved point-spread-function. Opt. Lett., 43, 1670-1673(2018).
[23] S. K. Sahoo, D. Tang, C. Dang. Single-shot multispectral imaging with a monochromatic camera. Optica, 4, 1209-1213(2017).
[24] C. Guo, J. Liu, W. Li, T. Wu, L. Zhu, J. Wang, G. Wang, X. Shao. Imaging through scattering layers exceeding memory effect range by exploiting prior information. Opt. Commun., 434, 203-208(2019).
[25] X. Wang, X. Jin, J. Li, X. Lian, X. Ji, Q. Dai. Prior-information-free single-shot scattering imaging beyond the memory effect. Opt. Lett., 44, 1423-1426(2019).
[26] A. Boniface, B. Blochet, J. Dong, S. Gigan. Noninvasive light focusing in scattering media using speckle variance optimization. Optica, 6, 1381-1385(2019).
[27] J. R. Fienup. Phase retrieval algorithms: a comparison. Appl. Opt., 21, 2758-2769(1982).
[28] X. Jin, Z. Wang, X. Wang, Q. Dai. Depth of field extended scattering imaging by light field estimation. Opt. Lett., 43, 4871-4874(2018).
[29] P. Jain, S. E. Sarma. Measuring light transport using speckle patterns as structured illumination. Sci. Rep., 9, 11157(2019).
[30] Y. Choi, P. Hosseini, W. Choi, R. R. Dasari, P. T. C. So, Z. Yaqoob. Dynamic speckle illumination wide-field reflection phase microscopy. Opt. Lett., 39, 6062-6065(2014).
[31] S. Schott, J. Bertolotti, J. F. Leger, L. Bourdieu, S. Gigan. Characterization of the angular memory effect of scattered light in biological tissues. Opt. Express, 23, 13505-13516(2015).
[32] X. Jin, D. M. S. Wei, Q. Dai. Point spread function for diffuser cameras based on wave propagation and projection model. Opt. Express, 27, 12748-12761(2019).
[33] M. A. Van Dam, R. G. Lane. Wave-front slope estimation. J. Opt. Soc. Am., 17, 1319-1324(2000).
[34] J. Ko, C. C. Davis. Comparison of the plenoptic sensor and the Shack-Hartmann sensor. Appl. Opt., 56, 3689-3698(2017).
[35] R. M. Haralick, L. G. Shapiro. Computer and Robot Vision(1992).
[36] D. Brandner, G. Withers. Multipolar neuron, Rattus from CIL:2907(2010).
[37] G. Wu, B. Masia, A. Jarabo, Y. Zhang, L. Wang, Q. Dai, T. Chai, Y. Liu. Light field image processing: an overview. IEEE J. Sel. Top. Signal Process., 11, 926-954(2017).