• Advanced Photonics Nexus
  • Vol. 3, Issue 2, 026010 (2024)
Su Wu1,†, Chan Huang2, Jing Lin3, Tao Wang1,4, Shanshan Zheng1,4, Haisheng Feng1,4, and Lei Yu1,*
Author Affiliations
  • 1Chinese Academy of Sciences, Anhui Institute of Optics and Fine Mechanics, Hefei, China
  • 2Hefei University of Technology, School of Physics, Department of Optical Engineering, Hefei, China
  • 3Hefei Normal University, Department of Chemical and Chemical Engineering, Hefei, China
  • 4University of Science and Technology of China, Science Island Branch of Graduate School, Hefei, China
    DOI: 10.1117/1.APN.3.2.026010
    Su Wu, Chan Huang, Jing Lin, Tao Wang, Shanshan Zheng, Haisheng Feng, Lei Yu. Physics-constrained deep-inverse point spread function model: toward non-line-of-sight imaging reconstruction[J]. Advanced Photonics Nexus, 2024, 3(2): 026010

    Abstract

    Non-line-of-sight (NLOS) imaging has emerged as a prominent technique for reconstructing obscured objects from images that undergo multiple diffuse reflections. This imaging method has garnered significant attention in diverse domains, including remote sensing, rescue operations, and intelligent driving, due to its wide-ranging potential applications. Nevertheless, accurately modeling the incident light direction, which carries energy and is captured by the detector amidst random diffuse reflection directions, poses a considerable challenge. This challenge hinders the acquisition of precise forward and inverse physical models for NLOS imaging, which are crucial for achieving high-quality reconstructions. In this study, we propose a point spread function (PSF) model for the NLOS imaging system utilizing ray tracing with random angles. Furthermore, we introduce a reconstruction method, termed the physics-constrained inverse network (PCIN), which establishes an accurate PSF model and inverse physical model by leveraging the interplay between PSF constraints and the optimization of a convolutional neural network. The PCIN approach initializes the parameters randomly, guided by the constraints of the forward PSF model, thereby obviating the need for extensive training data sets, as required by traditional deep-learning methods. Through alternating iteration and gradient descent algorithms, we iteratively optimize the diffuse reflection angles in the PSF model and the neural network parameters. The results demonstrate that PCIN achieves efficient data utilization by not requiring large amounts of ground-truth data. Moreover, the experimental findings confirm that the proposed method effectively restores the hidden object features with high accuracy.

    1 Introduction

    The non-line-of-sight (NLOS) imaging technique can recover information about a hidden object from light scattered by surrounding scenes.1–5 The light in the NLOS system originates from a pulsed laser or other sources to illuminate the diffusely reflective relay surface (rough wall, rock face, etc.); the diffused light is then incident on an object out of the line of sight and is scattered back to the relay surface. Detectors, such as a single-photon avalanche diode (SPAD) or a conventional camera, receive speckle images from multiple diffuse reflections, from which the shape, location, and albedo of the hidden object can be recovered.6–8 NLOS imaging technology can utilize arbitrary walls as mirrors and holds the potential to revolutionize many critical applications, such as medical imaging, autonomous driving, remote sensing, and search and rescue operations.9–14

    The NLOS reconstruction problem is an inverse mathematical problem, aimed at recovering the hidden scene from the detected signal. Several challenges exist in NLOS imaging reconstruction. First, NLOS is an ill-posed problem characterized by a very low signal-to-noise ratio (SNR), resulting from environmental noise and high light loss along the scattered propagation path, rendering high-quality reconstruction challenging. Second, although the forward physical process is clearly understood, the physical model lacks clarity in handling multiple diffuse reflections in the NLOS system, making it difficult to obtain accurate values such as the direction of diffused light and energy attenuation. Furthermore, the inverse process is extremely complex. As a result, it is challenging to derive simple mathematical expressions directly, making high-quality image recovery for the NLOS system through a physical model difficult.

    The research on NLOS imaging dates back to 2009 when Kirmani et al.15 proposed a framework utilizing time-of-flight camera imagery and transient reasoning to reveal scene properties inaccessible to traditional computer vision. Building on Kirmani’s work, Velten et al.16 successfully recovered the three-dimensional shape of objects hidden around corners, combining time-of-flight techniques with computational reconstruction algorithms. Subsequently, O’Toole et al.17 introduced a confocal NLOS imaging system. The confocal system, in contrast to traditional non-confocal NLOS systems, facilitates finding a closed solution to the NLOS problem and yields higher-quality image reconstructions. Expanding on the confocal system, researchers have developed methods like light-cone transformation,18 directional light-cone transformation,19 and virtual wavefronts20 for NLOS image restoration. However, these confocal methods employ the time-of-flight approach with time-resolving detectors (such as SPAD). As a result, the system requires data capture via scanning. To ensure clarity in NLOS image reconstruction, this approach necessitates scanning numerous points, often exceeding a measurement time of 10 min. Consequently, the prolonged data acquisition required renders these methods unsuitable for real-time NLOS imaging applications.

    With the development of machine learning and neural networks, researchers have proposed data-driven algorithms for NLOS image reconstruction. Chen et al.21 introduced a trainable architecture that maps diffuse indirect reflections to scene reflectance, relying solely on synthetic training data. To overcome the long scan time associated with traditional systems, Metzler et al.22 employed a planar-array complementary metal-oxide-semiconductor (CMOS) detector to capture speckle images within a second. However, data acquisition in NLOS imaging remains cumbersome, and real large-scale data sets are currently scarce. Synthetic training images are generated under the assumption that the relay wall is a standard Lambertian surface; in reality, however, the wall often deviates from a standard Lambertian surface and does not conform to isotropic theory.

    The point spread function (PSF) is a core concept in image reconstruction. It describes the spot formed by a point light source after traversing the optical system, serving as a crucial link in the object-image conversion process of the optical system.23–30 By understanding both the PSF and the image produced by the optical system, information about the object’s surface can be retrieved through deconvolution. This technique has widespread applications in various fields, including astronomy,31,32 microscopy,24,33 and medical imaging.34,35 The object is recovered as

    o = F⁻¹(I/Φ),

    where F⁻¹ represents the inverse Fourier transformation, o is the object information detected by the camera, and I and Φ are the Fourier transformations of the image and PSF matrix, respectively.
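    As a sketch of this frequency-domain inversion, the snippet below recovers an object from an image and a known PSF. The small constant `eps` is an assumption added here (a Tikhonov-style regularizer, not part of the expression above) to keep the division stable where Φ is near zero:

    ```python
    import numpy as np

    def deconvolve(image, psf, eps=1e-3):
        """Recover o = F^-1(I / Phi) by frequency-domain deconvolution.

        eps is a small Tikhonov constant (an assumption added for this
        sketch) that stabilizes the division where |Phi| is near zero."""
        I = np.fft.fft2(image)
        Phi = np.fft.fft2(psf, s=image.shape)
        # Regularized inverse filter: I * conj(Phi) / (|Phi|^2 + eps).
        O = I * np.conj(Phi) / (np.abs(Phi) ** 2 + eps)
        return np.real(np.fft.ifft2(O))
    ```

    With eps → 0 and a noise-free image, this reduces exactly to the inverse-filter expression above.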

    Establishing a PSF model is crucial for image reconstruction in scattering and diffuse reflection systems. Faber35 developed a PSF model for weakly scattering media within an optical coherence tomography system, enabling the quantitative measurement of attenuation coefficients. By manipulating a specific single PSF, Xie et al.36 achieved depth-resolved imaging of thin scattering media, extending beyond the original depth of field. In the context of NLOS imaging systems, Pei et al.37 calculated the PSF employing a Gaussian-shaped laser pulse and the Poisson noise of a time-resolved camera. However, this model had limitations in accurately reflecting the NLOS scattered propagation process.

    This work introduces a novel NLOS imaging recovery model that addresses these limitations, incorporating advancements in both the physical model and the computational reconstruction algorithm. We developed an accurate forward PSF model using ray tracing for the NLOS system, offering a physical constraint for an untrained neural network. Contrary to previous methods that assumed perfect isotropic reflectance, our proposed method takes into account the randomness of actual reflection angles on the relay wall. Furthermore, our method does not necessitate training data sets to ascertain neural network parameters, setting it apart from conventional deep-learning-based approaches. Instead, it starts from the random initialization parameters of the neural network, constrained by the forward physical model and speckle image, and iteratively employs the gradient descent algorithm to estimate parameters and establish the mapping relationship. Experimental data were employed to validate the proposed NLOS image reconstruction algorithm. Specifically, we make the following contributions:

    2 Materials and Methods

    2.1 PSF Model for NLOS Imaging

    The experimental setup for the NLOS imaging system is presented in Fig. 1(a). A laser, emitting at a wavelength of 632.8 nm, is expanded and collimated by a lens group before illuminating the wall. The light scatters toward the hidden object, reflects back to the relay surface, and is captured as a speckle image by a CMOS camera. In the speckle image, each pixel corresponds to a point on the original object via the PSF matrix, depicted in Fig. 1(b), where o represents the hidden object, φ is the PSF matrix of the NLOS imaging system, and i represents the captured speckle image. To elucidate this relationship further, Fig. 1(c) presents a simple example of a hidden object and its corresponding speckle image as captured by the camera. This experimental arrangement and the associated data lay the groundwork for assessing the efficacy of the proposed NLOS image reconstruction method.


    Figure 1. The NLOS system and reconstruction principle. (a) A confocal NLOS imaging system with a CMOS camera to capture the image. (b) The imaging equation in an optical system with PSF and (c) propagation process from object to image in the NLOS system.

    In this context, the relay wall of the NLOS imaging system can be conceptualized as a mirror exhibiting aberrations. This conceptualization allows for the determination of the NLOS imaging system’s PSF through wavefront aberration analysis. Specifically, the PSF of a coherent optical system is expressed as

    PSF = ∬ P(u,v) exp[2πi(ux + vy)] du dv,

    where P(u,v) is the pupil function of the system and u, v are the spatial frequencies, which can be represented as u = x/(λdi), v = y/(λdi) by the pupil rectangular coordinates x, y, where di is the distance from the image plane to the exit pupil. The pupil function is given by

    P(x,y) = A(x,y) exp[ikW(x,y)], with A(x,y) = 1 for r ≤ r0 and A(x,y) = 0 for r > r0.

    The amplitude component of the pupil function, denoted as A(x,y), is a function of the pupil shape. r0 is the radius of the exit pupil. The parameter k is equal to 2π/λ, where λ represents the wavelength of the light source. W(x,y) represents the wave aberration of the optical system.
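    As an illustration, the PSF can be computed numerically as the Fourier transform of this pupil function. The grid size, aperture radius, and aberration-free default below are assumptions made for the sketch; the intensity PSF is taken as the squared modulus of the coherent amplitude:

    ```python
    import numpy as np

    def pupil_psf(n=256, r0=0.5, wavelength=632.8e-9, W=None):
        """PSF from the pupil function P(x, y) = A(x, y) exp[i k W(x, y)].

        A(x, y) = 1 inside the exit-pupil radius r0 and 0 outside; W is the
        wave aberration (None = aberration-free). Grid size and r0 are
        illustrative values, not the paper's parameters."""
        x = np.linspace(-1.0, 1.0, n)
        X, Y = np.meshgrid(x, x)
        r = np.hypot(X, Y)
        A = (r <= r0).astype(float)                    # amplitude: pupil shape
        k = 2.0 * np.pi / wavelength
        Wxy = np.zeros_like(X) if W is None else W(X, Y)
        P = A * np.exp(1j * k * Wxy)                   # pupil with aberration
        # The coherent amplitude is the Fourier transform of the pupil;
        # the intensity PSF is its squared modulus.
        field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(P)))
        psf = np.abs(field) ** 2
        return psf / psf.sum()
    ```

    With W = None this yields an Airy-like diffraction pattern; passing a nonzero W(x, y) models the wall-induced wave aberration described above.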

    The scattered light-propagation diagram is shown in Fig. 2. The diffuse reflection wall is considered as the combination of both specular reflection and diffuse reflection. Under the ideal imaging condition, the diffuse reflector is akin to a fully specular reflector. Consequently, the imaging lens group and the diffuse reflection wall form an optical system in which reverse tracing is performed from the detector to obtain the position and radius of the exit pupil of the NLOS optical system. The relay wall is then regarded as an optical element with aberrations, and the optical source in this system can be considered as the laser diffusely reflected by the relay surface. Here, we employ an improved diffuse reflection model proposed by Wolff et al.,38 in which the diffusing surface is represented by microfacets arranged in V-grooves, distributed over various orientations. The diffusely reflected radiance is formulated as a combination of the reflection radiance from microfacets, which accounts for masking and shadowing, and the reflection radiance due to interreflections,

    Lr(θr, θi, ϕr − ϕi; σ) = Lr1(θr, θi, ϕr − ϕi; σ) + Lr2(θr, θi, ϕr − ϕi; σ)
    = (ρ/π) Li cos θi {A + B Max[0, cos(ϕr − ϕi)] sin α tan β} + (0.17ρ²/π) Li cos θi [σ²/(σ² + 0.13)] [1 − cos(ϕr − ϕi)(2β/π)²],

    where A = 1 − 0.5σ²/(σ² + 0.33), B = 0.45σ²/(σ² + 0.09), σ is the standard deviation of the Gaussian distribution as a measure of surface roughness, α = Max[θr, θi], β = Min[θr, θi], and ρ is the diffuse albedo as defined by Lambert’s law. Thus, the radiance from the relay wall to the hidden object after the first diffuse reflection is derived as

    L1 = (ρ/π) L0 cos θi1 {A1 + B1 Max[0, cos(ϕr1 − ϕi1)] sin α1 tan β1} + (0.17ρ²/π) L0 cos θi1 [σ²/(σ² + 0.13)] [1 − cos(ϕr1 − ϕi1)(2β1/π)²],

    where L0 is the radiance of the laser after beam expansion and ϕi1 and θi1 are the azimuth and incidence angles of the incident light, respectively. For the NLOS system in this study, these three parameters are constant.
The radiance from the measured object is given as

    LO = (ρO/π) L1 cos θiO {AO + BO Max[0, cos(ϕrO − ϕiO)] sin αO tan βO} + (0.17ρO²/π) L1 cos θiO [σO²/(σO² + 0.13)] [1 − cos(ϕrO − ϕiO)(2βO/π)²],

    where O denotes the hidden object, and the speckle image captured by the CMOS detector is

    I = (ρ/π) L2 cos θi2 {A2 + B2 Max[0, cos(ϕr2 − ϕi2)] sin α2 tan β2} + (0.17ρ²/π) L2 cos θi2 [σ²/(σ² + 0.13)] [1 − cos(ϕr2 − ϕi2)(2β2/π)²].
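    A direct transcription of this radiance model can be written as a small function; the default albedo and incident radiance below are illustrative values, and angles are scalars in radians:

    ```python
    import numpy as np

    def diffuse_radiance(theta_r, theta_i, dphi, sigma, rho=0.5, L_i=1.0):
        """Radiance L_r = L_r1 + L_r2 of the improved diffuse-reflection model.

        theta_r/theta_i: reflection/incidence angles (radians); dphi: azimuth
        difference; sigma: surface roughness; rho: diffuse albedo. The default
        rho and L_i are illustrative assumptions."""
        A = 1.0 - 0.5 * sigma**2 / (sigma**2 + 0.33)
        B = 0.45 * sigma**2 / (sigma**2 + 0.09)
        alpha = max(theta_r, theta_i)
        beta = min(theta_r, theta_i)
        # Masking/shadowing term from the V-groove microfacets.
        Lr1 = (rho / np.pi) * L_i * np.cos(theta_i) * (
            A + B * max(0.0, np.cos(dphi)) * np.sin(alpha) * np.tan(beta))
        # Interreflection correction term.
        Lr2 = (0.17 * rho**2 / np.pi) * L_i * np.cos(theta_i) \
            * sigma**2 / (sigma**2 + 0.13) \
            * (1.0 - np.cos(dphi) * (2.0 * beta / np.pi) ** 2)
        return Lr1 + Lr2
    ```

    For σ = 0 this collapses to the Lambertian radiance (ρ/π) Li cos θi, matching the Lambertian limit of the model above.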


    Figure 2. Light path in the NLOS system. (a) Wavefront propagation process of diffuse reflection and (b) definition of diffuse reflection parameters.

    For an optical system, the image captured by the detector can also be expressed by I=OΦ+N.

    Equation (8) is the frequency-domain expression, with noise, of the relation depicted in Fig. 1(b). Here, Φ signifies the PSF matrix of the NLOS system, and N symbolizes the system’s noise. Within the NLOS system, the predominant noises include photon noise, represented by Gaussian noise, and background noise, which appears as a peak and a uniform offset in the speckle image.22 To mitigate the impact of these noises on image reconstruction, regularization techniques in deep learning and data-preprocessing strategies are implemented.
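    A minimal forward simulation of Eq. (8) under these noise assumptions (Gaussian photon noise plus a uniform background offset) might look like the following; the noise levels are illustrative, not measured values:

    ```python
    import numpy as np

    def forward_speckle(obj, psf, sigma_n=0.01, offset=0.05, rng=None):
        """Simulate Eq. (8): I = O*Phi + N, i.e. the frequency-domain product
        of object and PSF spectra (a convolution) plus noise. The noise is
        Gaussian photon noise (std sigma_n) plus a uniform background offset;
        both default values are illustrative assumptions."""
        rng = np.random.default_rng() if rng is None else rng
        clean = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf, s=obj.shape)))
        noise = rng.normal(0.0, sigma_n, obj.shape) + offset
        return clean + noise
    ```

    Setting sigma_n and offset to zero recovers the noise-free imaging equation, which is useful for sanity-checking a reconstruction pipeline.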

    Retrieval of the hidden object image in the NLOS system relies on solving Eqs. (3), (7), and (8) simultaneously. The precision of the PSF matrix, especially the exact exit angle of diffuse reflection in the exit pupil, is critical for the success of this retrieval process. If the exit angle is ascertainable, the system’s wave aberration can be deduced through reverse tracking, facilitating the derivation of the NLOS system’s PSF. However, modeling the relay surface accurately becomes challenging with multiple diffuse reflections, thereby complicating the attainment of the precise angle θi2 of diffused light.

    2.2 Physics-Constrained Inverse Network

    A critical challenge in NLOS imaging is that specifics such as the size and location of the hidden object are unknown, and the PSF of the optical system varies with position and field of view. Consequently, accurately modeling the PSF of the NLOS system solely from physical theory is not feasible. Deep-learning methods have been explored for computational imaging, in which object reconstruction is achieved by solving an optimization problem. The convolutional neural network (CNN), one such deep-learning method, has been widely used in superresolution imaging,39–41 lensless imaging,42,43 imaging estimation through scattering media,44,45 etc. The CNN is deemed effective in modeling the PSF matrix for NLOS imaging, as highlighted in the preceding study.37 However, being a primarily data-driven technique, a CNN relies heavily on the volume of measured data, and its accuracy in modeling the PSF matrix depends on the amount of available ground-truth data. Additionally, this approach lacks constraints from physical models, leading to the neglect of a priori information. Conversely, while the forward physical process of the PSF in NLOS is clear, directly obtaining the angle parameters in the model is challenging due to multiple diffuse reflections. In this paper, we propose a method, termed the physics-constrained inverse network (PCIN), which combines the advantages of a neural network and a physical model: it integrates the PSF model into a traditional CNN architecture and enables NLOS reconstruction without the need for data training. The workflow of PCIN is illustrated in Fig. 3. To initiate NLOS image recovery with the PCIN algorithm, the speckle image is input into the CNN with random initial weights, together with the angles in the forward model. The procedure for NLOS image recovery utilizing the PCIN algorithm unfolds as follows:


    Figure 3. Flowchart of the PCIN algorithm for NLOS imaging reconstruction. The speckle image captured by the camera is fed into the CNN, and PCIN iteratively updates the CNN parameters using a loss function constructed from the speckle image and the forward physical model. The optimized parameters are utilized to obtain a high-quality reconstructed image.

    The proposed method leverages the CNN’s robust modeling capabilities to construct an inverse physical model neural network, representing the inverse physical processes of NLOS. Contrary to traditional deep-learning-based approaches, this method does not necessitate extensive training data sets to establish the parameters of this neural network. Instead, it employs a gradient descent algorithm with alternating iterations to optimize both the neural network parameters and the unknown parameters in the forward model. This optimization is constrained by the forward physical model and the measured speckle image, enabling the estimation of parameters in both the neural network and the forward model, and ultimately deriving the mapping relations. Therefore, the reconstruction of the NLOS system can be retrieved by solving the optimization problem

    R = argmin_{P,θ,ϕ} ‖Î − I‖₂² + TV(Ô),

    where TV stands for total-variation regularization, Ô is the reconstructed image obtained by the CNN, and Î is the speckle image calculated from Ô through the forward PSF physical model. Upon obtaining the optimized weights P and diffused angle θ, the NLOS recovered image is estimated and output as the final layer of the PCIN. Considering both resolution and calculation speed, the size of the measured image is set to 512 pixels × 512 pixels. The network is implemented in PyTorch. The renowned U-Net architecture is employed for our CNN, utilizing the ADAM optimizer with a learning rate of 0.01. All computations were executed on an Nvidia RTX 3090 GPU to guarantee computational efficiency and accuracy.
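    The alternating optimization described above can be sketched as follows. Here `net`, `forward_model`, and both optimizers are placeholders standing in for the paper's U-Net and forward PSF model, and the TV weight is an assumed value, so this is an illustrative skeleton rather than the authors' implementation:

    ```python
    import torch
    import torch.nn.functional as F

    def tv_loss(x):
        """Anisotropic total-variation regularizer on the last two dimensions."""
        return (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
               (x[..., :, 1:] - x[..., :, :-1]).abs().mean()

    def pcin_step(net, angles, speckle, forward_model, opt_net, opt_ang, tv_weight=1e-2):
        """One alternating iteration of the PCIN optimization.

        net, forward_model, and the optimizers are placeholders for the
        paper's U-Net and forward PSF model; tv_weight is an assumed value."""
        # (1) Update the network weights P with the diffuse angles fixed.
        opt_net.zero_grad()
        recon = net(speckle)                         # O-hat, reconstructed image
        est = forward_model(recon, angles)           # I-hat, re-rendered speckle
        loss = F.mse_loss(est, speckle) + tv_weight * tv_loss(recon)
        loss.backward()
        opt_net.step()
        # (2) Update the forward-model angles with the network fixed.
        opt_ang.zero_grad()
        recon = net(speckle).detach()
        loss_ang = F.mse_loss(forward_model(recon, angles), speckle)
        loss_ang.backward()
        opt_ang.step()
        return loss.item()
    ```

    Calling `pcin_step` in a loop reproduces the alternating gradient-descent schedule: each iteration first fits the CNN to the measurement under the current forward model, then refines the diffuse-reflection angles under the current reconstruction.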

    3 Results

    In this section, we present the experimental validation of the proposed method. The experimental setup, shown in Fig. 4, employs a laser with a wavelength of 632.8 nm and an optical power of 5 mW as the light source. A lens group expands the beam, increasing the collimated beam diameter to 3 mm, thereby illuminating the hidden object. The Dyhana 4040 CMOS camera, with a sensitive area of 36.9 mm × 36.9 mm and a field of view of 40 deg, is chosen as the detection device, which is capable of capturing information from the NLOS system after three diffuse reflections. To adequately capture the hidden object’s information without overexposure, the camera is set to capture images with a 40 ms exposure time.


    Figure 4. Back and front of the experimental scene. Light passes from the laser to the collimator, to the wall, to the hidden object, and finally to the camera.

    To evaluate the effectiveness of our proposed methodology, four letters were chosen for imaging experiments. Specifically, the light source was positioned 1.2 m from the relay wall with an incident angle of 15 deg, and the hidden object was placed with a 1-m separation from both the camera and the relay wall.

    The reconstruction results for different exposure times and different postures of the selected hidden objects are presented in Figs. 5 and 6, illustrating the capability of the proposed PCIN method to reconstruct the shape of hidden objects from diffused images. It is noteworthy that with camera exposure times shorter than 20 ms, the algorithm is generally unable to complete the reconstruction due to insufficient information capture within such a brief period. As the exposure time increases, there is a corresponding enhancement in the accuracy and detail of the reconstructed image. At an exposure time of 40 ms, the detailed features of the object are essentially reconstructed. However, increased exposure time inevitably introduces more noise into the system, manifesting as poorer reconstruction quality at the image edges.


    Figure 5. Comparison of the reconstructed images at various exposure times from the proposed PCIN method. (a) Speckle images captured by the camera at different exposure times. (b) Ground truth. (c) Reconstructed images at different exposure times.


    Figure 6. Comparison of the reconstructed images at various exposure times from the proposed PCIN method. (a) Speckle images captured by the camera at different exposure times. (b) Ground truth. (c) Reconstructed images at different exposure times.

    In the initial run, the network required more than 4000 iterations and several minutes to achieve satisfactory results. Encouragingly, for subsequent runs involving the same object type, the optimization was reduced to approximately 1800 iterations by leveraging the previous run’s optimization results as input. Figure 6 displays the NLOS imaging reconstruction results following posture changes. The results indicate that the proposed method can precisely recover fine features and accurately determine the position and posture of hidden objects.

    For further validation of the algorithm’s performance, we chose more complex subjects, including cartoon images and Chinese characters, as hidden objects. Similarly, the detector’s exposure time was varied from 10 to 40 ms. The inversion results of the algorithm are shown in Fig. 7. With the increased complexity of the object, shorter exposure times (10 to 20 ms) prove inadequate for reconstructing the hidden object. This suggests that with complex hidden objects, shorter exposure times fail to capture sufficient effective information. When the exposure time is increased to 30 ms, the algorithm can essentially reconstruct the approximate shape of the object under examination. At an exposure time of 40 ms, the algorithm fully reconstructs the shape of hidden objects, capturing relatively fine features as well, demonstrating its efficacy in reconstructing complex objects.


    Figure 7. Comparison of the reconstructed cartoon images and Chinese characters at various exposure times from the proposed PCIN method. (a) Speckle images captured by the camera at different exposure times. (b) Ground truth. (c) Reconstructed images at different exposure times.

    To evaluate the algorithm’s adaptability to diffusely reflecting walls of various shapes, we fabricated concave, convex, and wavy diffusely reflecting walls using highly flexible white foam. The camera exposure time was set to 40 ms to capture more information about the hidden objects. The reconstructed images, as illustrated in Fig. 8, reveal that surface variations of the diffusely reflecting walls lead to differences in the reconstruction of the same object. Nevertheless, the reconstruction was generally successful. The wavy surface resulted in the poorest reconstruction outcome. This is attributed to the creases of the wavy surface acting as light traps, causing the light to undergo multiple diffuse reflections. Consequently, the quantity of light carrying information about the target object that enters the detector is diminished, reducing the precision and detail of the reconstructed images.


    Figure 8. Comparison of the reconstructed images of convex, concave, and wavy walls.

    The NLOS reconstruction can be seen as a phase-retrieval (PR) problem. We compared the proposed PCIN method with the alternating minimization PR algorithm (Alt-Min) from Ref. 46 and the traditional CNN method from Ref. 22. The training data set for CNN is synthesized by a physical model. For better reconstruction results, the exposure time is chosen as 40 ms. The comparative results in Fig. 9 demonstrate that both the PCIN- and CNN-based methods outperform traditional PR methods in terms of reconstruction quality within the same exposure time. Subsequently, the incidence angle was adjusted to 10 deg. Notably, the CNN network model utilized parameters trained in the previous step, rather than undergoing retraining. The reconstructed images in Fig. 10 illustrate the limited universality of the traditional CNN network, highlighting its inapplicability in changing external environments. This implies that both the PR algorithm and our proposed PCIN algorithm excel in reconstructing NLOS images amid external environmental changes, whereas the deep-learning approach necessitates generating new training data and retraining for each new scene.


    Figure 9. Comparison of the reconstructed images of the PR, CNN, and PCIN methods at 40 ms exposure time.


    Figure 10. Comparison of the reconstructed images of the PR, CNN, and PCIN methods after a 10 deg change in image plane inclination.

    The reconstruction results under a 20 ms exposure time are shown in Fig. 11. Under conditions of poor SNR, the PR algorithm is ineffective in reconstructing NLOS imaging results. Conversely, both CNN and the proposed PCIN method demonstrate strong noise robustness.


    Figure 11. Comparison of the reconstructed images of the PR, CNN, and PCIN methods at 20 ms exposure time.

    The PR algorithm was initialized with spectral initializers and used default parameters.47 To obtain the best reconstructions, the PR algorithm required several minutes, whereas training the CNN’s parameters took 38 h. Under low exposure conditions, the number of optimization iterations for each algorithm increases, yet the overall time remains relatively consistent. From the above analysis, it is evident that both the PR and PCIN methods do not require extensive data and exhibit superior adaptability to various scenes compared with the CNN method. Regarding runtime, PR and the proposed model operate on a similar scale, both taking approximately several minutes. While the results vary with different objects, these variations are not markedly significant. However, at low SNRs, PR fails to reconstruct hidden objects, while both PCIN and CNN exhibit robust noise resistance.

    4 Discussion and Conclusion

    In this study, we present a novel theoretical framework for NLOS imaging based on a PSF physical model. The proposed approach incorporates wave aberration theory and reverse tracking to determine the pupil and obtain the PSF model of the NLOS system. Additionally, we introduce an innovative inverse network framework, embedding a physics-constrained neural network, to optimize unknown parameters in the physical model via neural network iteration. This method achieves precise reconstruction outcomes through mutual feedback between the neural network and the physical model. Although the iterative process can make individual reconstructions time-consuming, this method eliminates the need for paired data sets during training, resulting in substantial time savings in data preparation.

    Experimental validation on NLOS imaging data confirms the method’s success in reconstructing hidden objects from a single measured speckle image with a 40 ms exposure time using a traditional CMOS detector. The combination of the PSF model and deep learning demonstrates potential for NLOS imaging in complex environments, such as rescue operations and field exploration, and represents a significant advancement towards high-resolution NLOS imaging.

    In summary, this method fundamentally optimizes the traditional physical model using deep-learning techniques. In the NLOS system, due to the random nature of diffuse reflections, neither the forward nor the reverse model can be obtained precisely. Specifically, the emergence angle at the optical pupil in the forward tracing cannot be determined, which prevents the direct establishment of the PSF matrix in the reverse model. However, by integrating deep learning with the physical model, the emergence angle in the physical model can be optimized, enabling the reconstruction of an image from a speckle image without training data or ground truth. Our proposed algorithm achieves performance that neither the physical model nor deep learning alone can achieve. This makes it ideal for scenarios like hostage rescue or intelligent driving in complex environments, where extensive real measurement data for training is not available. Unlike deep-learning methods that rely on specific scenes, our algorithm can be applied to a wide range of scenes and scenarios. Moreover, our model more accurately mirrors the real NLOS propagation process compared to the traditional, simplified NLOS physical model. Nonetheless, the model still has some limitations, such as deviations in reconstruction accuracy, particularly at the edges, and long running times. Our future work will focus on enhancing the algorithm’s performance, accuracy, and running speed to enable real-time, rapid NLOS reconstruction.

