• Advanced Photonics
  • Vol. 1, Issue 3, 036002 (2019)
Meng Lyu1,2, Hao Wang1,2, Guowei Li1,2, Shanshan Zheng1,2, and Guohai Situ1,2,*
Author Affiliations
  • 1Chinese Academy of Sciences, Shanghai Institute of Optics and Fine Mechanics, Shanghai, China
  • 2University of Chinese Academy of Sciences, Center for Materials Science and Optoelectronics Engineering, Beijing, China
    DOI: 10.1117/1.AP.1.3.036002
    Meng Lyu, Hao Wang, Guowei Li, Shanshan Zheng, Guohai Situ. Learning-based lensless imaging through optically thick scattering media[J]. Advanced Photonics, 2019, 1(3): 036002
    Fig. 1. (a) Experimental setup for imaging through scattering media. SLM denotes an amplitude-only spatial light modulator, P1 and P2 are linear polarizers, and the slab is a 3-mm-thick white polystyrene plate. Images captured at the (b) front and (c) back surfaces of the scattering medium. (d) Side view and (e) top view of the polystyrene slab.
    Fig. 2. Diagram of the proposed hybrid neural network (HNN) model.
    Fig. 3. Reconstruction results. (a) Speckle patterns (64×64 pixels) cropped from the raw acquired scattered patterns (512×512 pixels), (b) images reconstructed by the proposed HNN, (c) ground-truth images, and (d) images reconstructed using the memory effect.
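The cropping described in the caption above, taking a 64×64-pixel sub-speckle block out of a 512×512-pixel raw pattern, amounts to a simple array slice. The sketch below is illustrative only: the array contents are random stand-ins and the crop offsets are arbitrary, not the positions used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a raw acquired scattered pattern (512x512 pixels).
speckle = rng.random((512, 512))

def crop_block(pattern, top, left, size=64):
    """Crop a size x size sub-speckle block whose upper-left corner is (top, left)."""
    return pattern[top:top + size, left:left + size]

sub = crop_block(speckle, 100, 200)
print(sub.shape)  # (64, 64)
```

Each such block, rather than the full pattern, is what the network takes as input.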
    Fig. 4. Comparison of reconstruction performance. (a) Speckle patterns cropped from the raw scattered patterns. Images reconstructed by (b) the HNN, (c) a DNN, and (d) a CNN. (e) Ground-truth images.
    Fig. 5. Positions of the three 64×64-pixel blocks randomly selected from the acquired 512×512-pixel scattered patterns.
    Fig. 6. Images reconstructed using the subspeckle patterns located at A1, A2, and A3.
    Fig. 7. Images of handwritten digits and English letters reconstructed from randomly selected pixels. (a) Subspeckle patterns formed by 3096 pixels randomly selected from the 512×512 raw scattered patterns, (b) reconstructed images, and (c) ground-truth images.
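Sampling 3096 pixels at random from a 512×512 pattern, as in the caption above, can be sketched as follows. This is a minimal illustration, not the authors' code: the random seed and the choice of sampling without replacement are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
speckle = rng.random((512, 512))  # stand-in raw scattered pattern

# Randomly pick 3096 of the 512*512 pixel positions without replacement.
n_pixels = 3096
flat_idx = rng.choice(speckle.size, size=n_pixels, replace=False)
rows, cols = np.unravel_index(flat_idx, speckle.shape)

# 1-D vector of the sampled intensities, used in place of a contiguous block.
subpattern = speckle[rows, cols]
print(subpattern.shape)  # (3096,)
```

Because speckle grains carry information about the whole object, such a scattered subset can still serve as network input.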
    Fig. 8. Images of handwritten digits and English letters reconstructed from subspeckle patterns of six different sizes.
    Fig. 9. Results for digits and English letters with different gray-value intervals. The first row shows the speckle images for each gray-level interval, the second row shows the objects predicted by the HNN model, and the third row shows the ground-truth images. (G1) Images with gray values in the interval 25,000 to 30,000, (G2) images with gray values in the interval 35,000 to 40,000, (G3) images with gray values in the interval 40,000 to 45,000, and (G4) images with a gray-value threshold of 35,000.
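Grouping images by gray-value interval, as in the caption above, can be sketched as a mask over per-image totals. This is a hedged illustration under an assumption: I take the "gray value" of an image to be its summed pixel intensity; the image stack, sizes, and densities below are made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stack of ten 28x28 binary images with increasing pixel density,
# so their summed gray values span a broad range.
densities = np.linspace(0.05, 0.5, 10)
images = np.stack([(rng.random((28, 28)) < d) * 255.0 for d in densities])

totals = images.sum(axis=(1, 2))  # one summed gray value per image

# Keep only images whose summed gray value falls in a given interval,
# analogous to group G1 (25,000 to 30,000) in the figure.
lo, hi = 25_000, 30_000
mask = (totals >= lo) & (totals < hi)
selected = images[mask]
```

Group G4 would use a single threshold (`totals >= 35_000`) instead of a two-sided interval.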
    Fig. 10. Setup of the system used to measure the optical depth.