Advanced Imaging, Vol. 1, Issue 1, 011003 (2024)
Peng-Yu Jiang1,2,†, Zheng-Ping Li1,2,3, Wen-Long Ye1,2, Ziheng Qiu1,2, Da-Jian Cui3,4, and Feihu Xu1,2,3,*
Author Affiliations
  • 1Hefei National Research Center for Physical Sciences at the Microscale and School of Physical Sciences, University of Science and Technology of China, Hefei, China
  • 2Shanghai Research Center for Quantum Science and CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Shanghai, China
  • 3Hefei National Laboratory, University of Science and Technology of China, Hefei, China
  • 4Quantum Information Chip & Device Chongqing Key Laboratory, Chongqing, China
DOI: 10.3788/AI.2024.10001
Peng-Yu Jiang, Zheng-Ping Li, Wen-Long Ye, Ziheng Qiu, Da-Jian Cui, Feihu Xu. High-resolution 3D imaging through dense camouflage nets using single-photon LiDAR[J]. Advanced Imaging, 2024, 1(1): 011003
Fig. 1. Schematic of the 3D sub-voxel scanning method. (a) An illustration of sub-pixel scanning using a 2 pixel × 2 pixel SPAD array with an inter-pixel spacing of 1/2 FoV. (b) A scheme of sub-bin scanning in the time domain with three steps. (c) One original voxel in the measurement matrix is expanded to 5×5×10 sub-voxels after fine scanning.
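To make the expansion in Fig. 1(c) concrete, the following NumPy sketch interleaves coarse acquisitions taken at shifted sub-pixel and sub-bin positions into one fine grid, so that each original voxel becomes 5×5×10 sub-voxels. Array sizes, the Poisson stand-in data, and the interleaving order are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative sizes (only the 5x5x10 expansion matches Fig. 1):
# a 2x2 SPAD array, 5x5 sub-pixel offsets, 10 sub-bin trigger delays.
P, S, T = 2, 5, 10        # pixels per side, sub-pixel steps, sub-bin steps
BINS = 100                # time bins per coarse histogram

# scans[i, j, t] is the histogram cube measured with the field of view
# shifted by (i/S, j/S) of a pixel and the trigger delayed by t/T of a bin.
rng = np.random.default_rng(0)
scans = rng.poisson(1.0, size=(S, S, T, P, P, BINS))

# Interleave the coarse acquisitions so each original voxel expands
# into S x S x T = 5 x 5 x 10 sub-voxels, as in Fig. 1(c).
fine = np.zeros((P * S, P * S, BINS * T))
for i in range(S):
    for j in range(S):
        for t in range(T):
            fine[i::S, j::S, t::T] = scans[i, j, t]

print(fine.shape)  # (10, 10, 1000): a 5x5x10-fold denser voxel grid
```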
Fig. 2. Numerical simulation of the proposed algorithm. The ground truth is a typical scene from the Middlebury dataset. In the simulation, the SBR is set to 0.2 and the average number of detected signal photons is set to 10, 5, and 1 PPP, respectively. The first column shows the results without fine scanning; the second, with only 2D sub-pixel scanning and conventional pixel-wise ML processing; the third, with 3D sub-voxel scanning and pixel-wise ML processing; and the last, with 3D sub-voxel scanning and our photon-efficient 3D deconvolutional algorithm. Quantitative RMSE values are given at the bottom of each panel. The 3D sub-voxel method combined with the proposed algorithm achieves the lowest RMSE and best preserves image detail.
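The RMSE values quoted in Fig. 2 compare each reconstructed depth map against the Middlebury ground truth. For reference, a depth-map RMSE in the usual sense (illustrative NumPy; any masking or normalization in the authors' evaluation protocol is not reproduced here):

```python
import numpy as np

def depth_rmse(depth_est: np.ndarray, depth_gt: np.ndarray) -> float:
    """Root-mean-square error between estimated and ground-truth depth maps."""
    return float(np.sqrt(np.mean((depth_est - depth_gt) ** 2)))
```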
Fig. 3. Schematic diagram of the experimental setup. (a) Schematic of the experimental setup: the complex scenario is hidden behind a double-layer camouflage net and is imaged by a visible-light camera, an MWIR camera, and our single-photon LiDAR system. (b) Photograph of the hidden scene. (c) Photograph of the experimental setup. (d) Photograph taken during the experiment with the glass door closed as an obstruction in the imaging path.
Fig. 4. Calibration results of the imaging system. (a) Calibration result of the 2D point spread function, PSF2D, over 5 pixel × 5 pixel. (b) Calibration result of Box(t)*IRF(t); the red line is a Gaussian fit of the raw data. (c) 3D model of the resolution chart. (d) Results captured by the single-photon LiDAR without fine scanning, with 2D sub-pixel scanning, and with 3D sub-voxel scanning, respectively.
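The red line in Fig. 4(b) is a Gaussian fit to the measured temporal response Box(t)*IRF(t). A minimal sketch of such a fit on synthetic data follows; the histogram values, initial guesses, and bin units are illustrative assumptions, not the paper's calibration data.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, a, t0, sigma, b):
    # Gaussian model: amplitude a, center t0, width sigma, background b
    return a * np.exp(-((t - t0) ** 2) / (2 * sigma**2)) + b

# Synthetic timing histogram standing in for the measured Box(t)*IRF(t)
t = np.arange(200.0)                                   # time-bin index
rng = np.random.default_rng(0)
counts = gaussian(t, 800.0, 100.0, 6.0, 20.0) + rng.poisson(5.0, t.size)

p0 = (counts.max(), float(t[np.argmax(counts)]), 5.0, 0.0)  # initial guess
popt, _ = curve_fit(gaussian, t, counts, p0=p0)

fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * popt[2]      # FWHM of the fit
print(f"fitted temporal FWHM: {fwhm:.2f} bins")
```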
Fig. 5. Experimental results of the static scenario behind the camouflage nets in daylight. Results of the (a) visible-light camera, (b) MWIR camera, (c) single-photon LiDAR without fine scanning, and (d) single-photon LiDAR with 3D sub-voxel scanning; (e) timing histogram of the data in (d).
Fig. 6. Experimental results of the static scenario behind the camouflage nets in daylight with a glass door. Results of the (a) visible-light camera, (b) MWIR camera, (c) single-photon LiDAR without fine scanning, and (d) single-photon LiDAR with 3D sub-voxel scanning. Contrast enhancement is applied to (a) and (b) for better visibility.
Fig. 7. Experimental results of the static scenario behind the camouflage nets at night. Results of the (a) visible-light camera, (b) MWIR camera, (c) single-photon LiDAR without fine scanning, and (d) single-photon LiDAR with 3D sub-voxel scanning. Contrast enhancement is applied to (a) and (b) for better visibility.
Fig. 8. Experimental results of the moving scenario behind the camouflage nets. (a) Photographs of the scene behind the camouflage nets, taken from the back. (b) Photographs captured by an MWIR camera. (c) Photographs captured by a visible-light camera. (d) Reconstructed 3D profile of the multi-layer scenario. The movement of the mannequin and the basketball can be seen in the image sequences; boxes of different colors indicate the segmented objects in our experiment.
Fig. 9. Comparison of 3D reconstruction methods for moving targets behind the camouflage nets. The data are the same as the first frame displayed in Fig. 8. Reconstruction results of (a) cross-correlation, (b) the real-time plug-and-play denoiser[15], and (c) the proposed method.
1: Input:
2: Histogram of the current frame $H_k$; number of frames for compensation $n$
3: Pre-processing step:
4: Denoise with the threshold $h_k$ defined in Eq. (5)
5: Estimate $d_k$, $r_k$ from $H_k$ using Eqs. (6) and (7)
6: Object segmentation:
7: Segment each object in the depth and reflectivity maps, then project to the histogram
8: Motion estimation:
9: Conduct cross-correlation between $H_k$ and $H_k^n$ (the histogram of frames $k-n$ to $k-1$) using Eq. (8) and obtain the motion matrix $T_k$ for all segmented objects
10: Image reconstruction:
11: Obtain the motion-compensated histogram $\tilde{H}_k = H_k + T_k H_k$
12: Compute $\tilde{d}_k$ and $\tilde{r}_k$ using Eqs. (6) and (7)
13: Super-resolution step:
14: Compute $\tilde{d}_k^{\mathrm{HR}}$ and $\tilde{r}_k^{\mathrm{HR}}$ with interpolation, self-weighted median, and smoothing operations
15: Output:
16: $\tilde{d}_k^{\mathrm{HR}}$, $\tilde{r}_k^{\mathrm{HR}}$
Table 1. Motion compensation algorithm
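As a reading aid for steps 9-12 of Table 1, here is a minimal NumPy/SciPy sketch of per-object motion estimation and compensation. The (x, y, time-bin) array layout, the boolean segmentation masks, and the use of an FFT cross-correlation peak as the motion estimate are illustrative assumptions; the paper's Eqs. (5)-(8) define the actual operators.

```python
import numpy as np
from scipy.signal import correlate

def estimate_motion(cur, ref):
    """Integer 3D shift that best aligns `ref` to `cur`, taken as the
    peak of their cross-correlation (a stand-in for Eq. (8))."""
    corr = correlate(cur, ref, mode="same", method="fft")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return tuple(int(p - s // 2) for p, s in zip(peak, corr.shape))

def compensate(H_k, H_ref, masks):
    """Per-object motion compensation: shift each segmented object's
    reference histogram by its estimated motion and accumulate it onto
    the current frame, in the spirit of H_tilde_k = H_k + T_k H_k."""
    H_tilde = H_k.astype(float).copy()
    for m in masks:                       # m: boolean mask over (x, y, t)
        obj_ref = H_ref * m               # the object in the reference frames
        shift = estimate_motion(H_k * m, obj_ref)
        H_tilde += np.roll(obj_ref, shift, axis=(0, 1, 2))
    return H_tilde
```

The compensated histogram $\tilde{H}_k$ would then feed the depth and reflectivity estimators of Eqs. (6) and (7) and the super-resolution step.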