• Photonics Research
  • Vol. 10, Issue 12, 2760 (2022)
He Zhang1,2, Santosh Kumar1,2, Yong Meng Sua1,2, Shenyu Zhu1,2, and Yu-Ping Huang1,2,*
Author Affiliations
  • 1Department of Physics, Stevens Institute of Technology, Hoboken, New Jersey 07030, USA
  • 2Center for Quantum Science and Engineering, Stevens Institute of Technology, Hoboken, New Jersey 07030, USA
    DOI: 10.1364/PRJ.458813
    He Zhang, Santosh Kumar, Yong Meng Sua, Shenyu Zhu, Yu-Ping Huang. Near-infrared 3D imaging with upconversion detection[J]. Photonics Research, 2022, 10(12): 2760

    Abstract

    We demonstrate a photon-sensitive, three-dimensional (3D) camera based on active near-infrared illumination and fast time-of-flight gating. It uses picosecond pump pulses to selectively upconvert the backscattered photons according to their spatiotemporal modes via sum-frequency generation in a χ(2) nonlinear crystal; the upconverted photons are then detected by an electron-multiplying CCD with photon-level sensitivity. As such, the camera achieves sub-millimeter depth resolution, exceptional noise suppression, and high detection sensitivity. Our results show that it can accurately reconstruct the surface profiles of occluded targets placed behind highly scattering and lossy obscurants of 14 optical depths (round trip), using only milliwatt illumination power. This technique may find applications in biomedical imaging, environmental monitoring, and wide-field light detection and ranging.

    1. INTRODUCTION

    Three-dimensional (3D) imaging has long been actively pursued owing to its important applications in medical diagnosis [1-3], remote sensing [4,5], facial recognition [6-8], environmental monitoring [9,10], and so on. A handful of techniques and realizations have thus far been demonstrated, including structured-light imaging [8,11], light detection and ranging (LiDAR) with raster scanning [6,7,12,13], and stereophotogrammetry [14]. Recently, time-correlated single-photon counting has been deployed to boost the detection sensitivity [15-17]. In general, such active-illumination systems can generate 3D profiles of a target object with higher accuracy than those based on passive sensing.

    Meanwhile, infrared (IR) imaging and detection have been studied extensively in the past decade, as they allow sensitive detection of many biomolecular and chemical signals [18,19] beyond what is possible with visible light [20,21]. However, existing IR detection techniques are mostly based on thermal sensors, which suffer from low sensitivity and high noise even with cryogenic cooling [22]. Visible detectors, on the other hand, offer much lower noise and higher sensitivity without the need for cryogenic cooling. Instead of direct IR imaging, parametric frequency upconversion imaging [23-25] plays a critical role in hyperspectral IR imaging, where the IR signal is frequency upconverted into the visible [20,26-29] and detected by a silicon-based detector or camera with high sensitivity and low noise. Many unique nonlinear optical systems have also been developed for two-dimensional (2D) imaging, such as noiseless optical parametric amplification imaging [30,31], non-degenerate two-photon absorption [32-34], and spontaneous parametric downconversion imaging [35,36], some of which have been deployed in the near-IR (NIR) or mid-IR (MIR) regimes. With parametric frequency upconversion, it has been shown that near-unity conversion efficiency can be achieved while preserving the quantum characteristics of the IR photons [37]. This facilitates NIR or MIR imaging at the few-photon level with low dark noise [20], for example, by using a sensitive detector such as a silicon electron-multiplying charge-coupled device (EMCCD) to directly record the upconverted photons in the visible or NIR region [24,25].

    A natural extension of existing 2D parametric upconversion imaging to 3D imaging has great potential and may enable applications that require multidimensional IR imaging. One attempt at 3D IR imaging with a parametric upconversion process is reported in Ref. [38], where a chirped ultrashort pulse was used as the laser source and ultrafast mapping between space, time, and frequency was exploited to obtain a single-shot classical 3D image. However, that approach suffers from a trade-off between measurement range and depth resolution, as well as low spatial resolution caused by tight focusing at the crystal plane, which makes it difficult to extract critical features of the objects.

    Common challenges in most backreflection imaging systems are strong background noise and multiple scattering [33,39-41]. To strip the background noise from the contaminated signal, traditional methods such as time-frequency filters are commonly used [42,43], but they are inherently limited by the trade-off between signal detection efficiency and noise rejection. Realizing this, quantum parametric mode sorting has been proposed and demonstrated [43-45]. Other optical techniques for imaging through multiply scattering media have also been reported, most of which rely on ballistic photons [39,46], diffuse optical tomography [47,48], etc. Those techniques offer potential applications in medical imaging, communications, and security [2,49]. However, most studies on overcoming strong background noise and multiple scattering have focused on 2D imaging. Previously, quantum parametric mode sorting has been applied to single-pixel 3D single-photon imaging in photon-starved, noise-polluted environments, as well as to imaging through strongly scattering media [7,50]. Using raster scanning, it demonstrated imaging of a 3D object through a highly reflective obscuring scene with a 36 dB advantage in noise rejection. However, the requirement of raster scanning severely impedes the image acquisition speed, while the transverse spatial resolution of the 3D image is limited by the field of view of the single-pixel detector [51].

    In this paper, we extend these promising studies and explore an active NIR 3D imaging system using an EMCCD with upconversion detection through a highly scattering medium amid strong spatiotemporal background noise. Our system is based on nonlinear frequency upconversion via sum-frequency (SF) generation of time-correlated pump and signal optical pulses. Combined with the single-photon-sensitive EMCCD, we capture spatial and temporal information of the scene of interest, from which the 3D surface profile of the target object can be reconstructed. To image through a strongly scattering medium, we use time-resolved photon detection, recording only the frequency-upconverted image confined temporally within the window of the pump pulse, which provides excellent noise rejection. Compared with raster scanning methods [7,12,13], our approach improves the spatial resolution to 48 μm and effectively reduces the 3D image acquisition time. In addition, we can reconstruct the 3D profile of the target object through strong spatiotemporal background noise (SNR about −20 dB). This system can be deployed in applications that require ultra-sensitive imaging, such as medical diagnostics and quantum optics at the single- or few-photon level [52,53], and it can find value in biomedical imaging, non-destructive label-free diagnosis, and quantum communications. In the future, selective 3D edge enhancement could be implemented by imprinting spatial phase patterns on the pump beam using a spatial light modulator (SLM) [54,55].

    2. EXPERIMENT SETUP

    The nonlinear optical setup for the 3D image reconstructor is shown in Fig. 1(a). The signal and pump pulses are derived from a 50 MHz femtosecond mode-locked laser (MLL) using two inline narrowband wavelength division multiplexers (WDMs) of 0.8 nm linewidth to pick two separate wavelengths, one at 1545 nm as the signal and the other at 1558 nm as the pump. The pump pulses are sent to a programmable optical delay line (ODL) and then guided into free space with a collimated beam size of 0.65 mm FWHM. The collimated signal beam is magnified by a beam expander to 10.8 mm FWHM. The intensity of the horizontally polarized signal beam is tuned by the combination of half- and quarter-wave plates (HWPs and QWPs) with a polarizing beam splitter (PBS). After the PBS, the signal light passes through another QWP and the scattering media before being incident on the object. The backreflected or scattered light from the object, with its polarization changed, passes through a telescope with lenses FL1 (focal length 300 mm) and FL2 (focal length 25 mm), which reduces the beam size to 0.53 mm FWHM. A beam splitter (BS) then combines the collimated signal and pump beams, which are incident on a temperature-stabilized PPLN crystal with a poling period of 19.36 μm [5% (mole fraction) MgO-doped PPLN, 2 mm length, 3 mm width, and 1 mm height, from HC Photonics] for the frequency conversion process. The normalized conversion efficiency in our case is 9×10⁻⁴ %/W, which is restricted by three factors: (a) low pump power, (b) short crystal length (2 mm), and (c) non-optimal focusing of the signal and pump (both collimated) inside the crystal. The 4f system after the crystal consists of two Fourier lenses with focal lengths of 25 mm (FL3) and 100 mm (FL4), imaging the SF output onto an EMCCD (iXon Ultra 897, Andor) with 512×512 pixels and 16 μm pixel size. The quantum efficiency of this EMCCD is measured to be 7.5% at the upconversion wavelength, calibrated against a silicon avalanche photodiode at a mean photon number of 0.01 per pulse. On the other output of the BS, an IR camera (FIND-R-SCOPE Model No. 85700) captures the image via direct signal detection.
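
    As a quick sanity check of the setup parameters above, the sketch below (written by us in Python; the function and variable names are illustrative, not from the original work) evaluates the sum-frequency wavelength expected from energy conservation and the round-trip time-of-flight-to-depth mapping z = c×t/2 used later for ranging.

```python
# Minimal sanity-check sketch (not part of the published setup code).
# It evaluates (i) the sum-frequency wavelength expected from energy
# conservation, 1/lambda_SF = 1/lambda_s + 1/lambda_p, and (ii) the
# round-trip time-of-flight to depth conversion z = c * t / 2.

C = 299_792_458.0  # speed of light in vacuum, m/s

def sf_wavelength_nm(lambda_s_nm: float, lambda_p_nm: float) -> float:
    """Sum-frequency wavelength from energy conservation."""
    return 1.0 / (1.0 / lambda_s_nm + 1.0 / lambda_p_nm)

def delay_to_depth_mm(delay_ps: float) -> float:
    """Convert a round-trip optical delay (ps) into target depth (mm)."""
    return C * delay_ps * 1e-12 / 2.0 * 1e3

if __name__ == "__main__":
    # 1545 nm signal + 1558 nm pump -> SF output near 775 nm,
    # consistent with the ~775.5 nm quoted in the text.
    print(f"SF wavelength ~ {sf_wavelength_nm(1545.0, 1558.0):.1f} nm")
    # A few-picosecond pump gate corresponds to sub-millimeter depth slices.
    for t in (1.0, 5.0, 10.0):
        print(f"{t:>5.1f} ps delay -> {delay_to_depth_mm(t):.3f} mm depth")
```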

    Figure 1.(a) Experiment setup. Mode-locked laser pulses are separated into two arms by using WDM filters, with signal and pump wavelengths at 1545 and 1558 nm, respectively. The signal beam is incident on an obscured object. The backscattered signal photons are combined with the pump and are then upconverted in a nonlinear crystal to generate SF output centered at wavelength 775.5 nm. The time-resolved measurements can faithfully reconstruct the 3D object image captured by the EMCCD camera. (b) Picture of the object (a washer) attached on an aluminum block. (c) Picture of the obscured object, i.e., the washer obscured by the scattering media (SM). WDM, wavelength division multiplexer; EDFA, erbium-doped fiber amplifier; QWP, quarter-wave plate; HWP, half-wave plate; BS, beam splitter; FL, Fourier lens; PPLN crystal, magnesium-doped periodically poled lithium niobate crystal; EMCCD, electron multiplying silicon charge coupled device; ASE, amplified spontaneous emission.

    3. RESULTS

    First, we evaluate the transverse spatial resolution of the 3D imager with two different experimental settings, as shown in Fig. 2. A 1951 USAF resolution test chart (USAF-RTC) is placed at the object position to test the spatial resolution of the system. The USAF-RTC provides 54 target elements, and each element consists of three bars separated by the bar width. We define the feature size as the width of a bar in the USAF-RTC, which is half the distance between the centers of two adjacent bars. The spatial resolution result shown in Fig. 2(a) is the upconversion image of group 1 of the USAF-RTC, obtained with the experimental setup depicted in Fig. 1(a). The line plot in Fig. 2(b) shows the intensity along the blue dashed line in Fig. 2(a). It shows that our setup easily resolves features of size 144 μm, consistent with the actual value of 140.31 μm (group 1, element 6). The decrease in intensity at the edges is due to the Gaussian intensity distributions of the probing signal beam and the pump beam. We verified that the size of the spot on the object and the subsequent imaging optics affect the achievable resolution of our 3D imager, since they determine the point spread function of the backreflected signal from the object. We therefore constructed another version of the setup; its spatial resolution result, the upconversion image of group 3 of the USAF-RTC, is shown in Fig. 2(c). In this case, the signal beam size is reduced to 3.6 mm FWHM, and the Fourier lenses of the first 4f system are replaced with FL1 (focal length 100 mm) and FL2 (focal length 25 mm). The intensity profile along the red dashed line in Fig. 2(c) is shown in Fig. 2(d), where we easily resolve features with a spatial resolution of 48 μm. This feature size matches the actual value of 49.37 μm (group 3, element 5). In this case, the resolvable feature size is improved, but the field of view of the observed object is reduced. To image larger objects, the remaining results in this work are obtained with the setup shown in Fig. 1(a), with a spatial resolution of 144 μm.
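
    For reference, the bar width of a USAF-1951 element follows the standard relation resolution = 2^(group + (element − 1)/6) line pairs per mm; the short sketch below (ours, added for convenience, not from the paper) evaluates it for group 1, element 6, reproducing the ~140 μm feature size quoted above.

```python
# Bar width of a USAF-1951 resolution target element (standard relation,
# included here only as a convenience for the reader).

def usaf_bar_width_um(group: int, element: int) -> float:
    """Width of a single bar (= spacing between bars) in micrometers."""
    lp_per_mm = 2.0 ** (group + (element - 1) / 6.0)  # line pairs per mm
    line_pair_um = 1000.0 / lp_per_mm                 # one line pair in um
    return line_pair_um / 2.0                         # bar width = half pair

if __name__ == "__main__":
    # Group 1, element 6 -> ~140.3 um, matching the resolved feature size
    # reported for the setup of Fig. 1(a).
    print(f"{usaf_bar_width_um(1, 6):.1f} um")
```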

    Figure 2.Field of view and spatial resolution of upconverted images using two different experimental situations. (a) Upconversion image of group 1 in USAF resolution test chart from the experiment setup depicted in Fig. 1(a). (b) Intensity profile along the blue dashed line in (a). (c) Upconversion image of group 3 in USAF resolution test chart with another experimental setting. In this case, the signal beam size is reduced to 3.6 mm FWHM and the first Fourier lens (FL) of the object 4f imaging system is changed to focal length of 100 mm. (d) Intensity profile along the red dashed line in (c).

    Our 3D object image reconstruction technique utilizes the spatial and temporal photon information enabled by the nonlinear frequency conversion process. Figure 3 shows an example of the 3D reconstruction process. In Fig. 3(a1), the blue curve indicates the normalized photon intensity as a function of the arrival time of the reflected/backscattered signal photons at the crystal, and the red solid curve represents the pump pulse arrival time. By sweeping the ODL, the pump pulses are scanned in the temporal domain so that they overlap with the returning signal pulses at a certain arrival time. The two overlapped pulses interact inside the PPLN crystal, and SF light is generated as shown in Fig. 3(a2). The signal can be upconverted efficiently only if it spatially and temporally overlaps with the pump. At each delay step, the SF image is captured to reconstruct the depth z = c×t/2 of the 3D object; the optical delay time thus indicates the arrival time of the backscattered signal photons. In Figs. 3(a3)–3(a5), three SF images are collected at optical delay times t = 75 ps, t = 105 ps, and t = 140 ps, respectively. The collected data form a 3D data set, shown in Fig. 3(a6), in which each pixel carries a histogram of photon counts versus arrival time. After converting the arrival time into distance, we show two such histograms for two different pixels in Fig. 3(a6); the peak position (z1 and z2) of each curve gives the relative depth of that pixel on the object. The reconstructed image with depth information, obtained by post-processing the data and applying a 4×4 pixel median filter to smooth it, is shown in Fig. 3(a7). In the experiment, a washer and a bolt are used as the target objects. Their dimensional measurements are shown in Figs. 3(b1) and 3(c1), respectively, and the reconstructed 3D images are shown in Figs. 3(b2) and 3(c2). The X and Y axes give the cross section of the target object in millimeters, and the Z axis shows its depth in millimeters. In Fig. 3(b2), the measured outer diameter of the reconstructed washer is 7.92 mm, the inner diameter is 3.65 mm, and the depth is 1.6 mm. In Fig. 3(c2), the measured stub height and diameter of the reconstructed bolt are 5.9 and 2.3 mm, and the height and diameter of its bottom part are 3.2 and 6.45 mm, respectively. We set the EMCCD exposure time of each image to 1 s, and the total acquisition time for a full 3D image reconstruction is about 30 s. The reconstructed 3D images in Figs. 3(b2) and 3(c2) agree well with the ground truth shown in Figs. 3(b1) and 3(c1).
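
    A minimal sketch of this per-pixel reconstruction, written by us in Python with NumPy/SciPy (the array names and filter size are illustrative, not the authors' processing code), is given below: for each pixel, the pump delay giving the peak SF counts is converted to depth via z = c×t/2, and a median filter smooths the resulting depth map.

```python
# Illustrative sketch of the per-pixel depth reconstruction described above
# (our own minimal implementation, not the authors' processing code).
import numpy as np
from scipy.ndimage import median_filter

C = 299_792_458.0  # speed of light in vacuum, m/s

def reconstruct_depth(frames: np.ndarray, delays_ps: np.ndarray,
                      filter_size: int = 4) -> np.ndarray:
    """
    frames:    stack of SF images, shape (n_delays, H, W), one EMCCD frame
               per pump delay.
    delays_ps: pump delays in picoseconds, shape (n_delays,).
    Returns a smoothed depth map in millimeters, shape (H, W).
    """
    # For every pixel, find the delay at which the upconverted counts peak.
    peak_idx = np.argmax(frames, axis=0)              # (H, W)
    peak_delay_ps = delays_ps[peak_idx]               # (H, W)

    # Round-trip time of flight -> depth: z = c * t / 2 (converted to mm).
    depth_mm = C * peak_delay_ps * 1e-12 / 2.0 * 1e3

    # Median filter (4x4 by default) to suppress speckle/outlier pixels.
    return median_filter(depth_mm, size=filter_size)

if __name__ == "__main__":
    # Toy example: 30 delay steps, 64x64 pixels of synthetic data.
    rng = np.random.default_rng(0)
    delays = np.linspace(60.0, 150.0, 30)             # ps
    frames = rng.poisson(1.0, size=(30, 64, 64)).astype(float)
    frames[18] += 20.0                                 # fake object return
    z = reconstruct_depth(frames, delays)
    print(z.shape, float(z.mean()))
```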

    Figure 3.(a1)–(a7) illustrate the present 3D imaging method. (a1) presents the intensity measurement for the input signal and pump at different arrival times, with the corresponding SF intensity shown in (a2). (a3)–(a5) show the spatial information at different arrival time I, II, and III, respectively. (a6) shows the reconstruction procedure of the 3D data set collection. At each pump delay, 2D image data are acquired by the camera in each frame, and the z dimension represents the photon’s flight distance. (a7) shows post-processing data of (a6), in which the z axis gives the depth information of the target object. The object photos with profile data for the washer and bolt are shown in (b1) and (c1), respectively. After performing 3D imaging measurement, the results for washer and bolt are shown in (b2) and (c2), respectively.

    Next, we test the performance of our technique through scattering media. The scattering media are made from epoxy resin and titanium oxide (TiO2) pigment (220 nm particle size). We examine two different pieces of scattering media (SM1 and SM2) in our setup. The thickness, mean free path (ls), and optical depth of the scattering media are listed in Table 1. SM1, with thickness 4.3 mm and optical depth 7.29ls (14.58ls round trip), scatters more strongly than SM2, with thickness 3.2 mm and optical depth 4.04ls (8.08ls round trip). In our case, the scattering media are placed in front of the object, so the signal propagates twice through the scattering media before upconversion detection. The backreflected photons coming from the different surfaces of the target object are upconverted in different time intervals by the pump pulse time gating, which allows us to reject backscattered noise photons arriving in other time intervals. Figure 4 shows the 3D image reconstruction of the target objects, bolt [Figs. 4(a) and 4(c)] and washer [Figs. 4(b) and 4(d)], through the scattering media (SM1 and SM2), respectively. To effectively reconstruct the 3D image, we use a time-windowing procedure to post-select several consecutive EMCCD images captured at different temporal delays of the pump pulse. The time window discards most of the background noise from scattering and retains only the photons within the window defined by the pump pulses, following the procedure shown in Figs. 3(a6)–3(a7). This procedure is programmed in MATLAB to improve the SNR of the reconstructed 3D image; it reduces the speckle noise induced by the scattering medium and, thus, better resolves the shape of the target object. The reconstructed images of the bolt through SM1 without and with time windowing are shown in Figs. 4(a1) and 4(a2), respectively; the edge and depth variation of the cap bolt can be distinguished clearly. The washer images without and with temporal windowing through SM1 are shown in Figs. 4(b1) and 4(b2), respectively, and the washer shape is likewise recovered. Similar results are shown in the third row of Fig. 4 for the relatively weaker scattering medium (SM2). In both cases, we can effectively reconstruct the 3D image of the target object by carving the temporal window.
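
    The time-windowing step can be summarized with the short sketch below (ours, in Python; the window bounds and array names are illustrative), which post-selects only the EMCCD frames whose pump delay falls inside the chosen gate before running the depth reconstruction.

```python
# Illustrative time-windowing post-selection (our sketch, not the authors'
# MATLAB routine): keep only frames whose pump delay lies inside the gate.
import numpy as np

def time_window(frames: np.ndarray, delays_ps: np.ndarray,
                t_min_ps: float, t_max_ps: float):
    """Return the subset of frames (and delays) inside [t_min, t_max]."""
    keep = (delays_ps >= t_min_ps) & (delays_ps <= t_max_ps)
    return frames[keep], delays_ps[keep]

# Usage: gate around the expected object return, then reconstruct as before,
# e.g. with the reconstruct_depth() sketch shown earlier:
# frames_w, delays_w = time_window(frames, delays, 90.0, 120.0)
# depth = reconstruct_depth(frames_w, delays_w)
```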

    Table 1. Parameters of Scattering Media

    Sample    Thickness    Mean Free Path (ls)    Optical Depth (Round Trip)
    SM1       4.3 mm       0.58 mm                14.58 ls
    SM2       3.2 mm       0.66 mm                8.08 ls
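
    For clarity, the optical depths quoted in Table 1 follow the usual convention for a scattering slab; the relation below is our restatement of that definition (the tabulated round-trip values themselves are the measured quantities), with the ballistic transmission given for orientation.

```latex
% Standard definition of optical depth for a slab of thickness L and
% scattering mean free path \ell_s (our restatement, not from the paper):
% single-pass and round-trip values, and the associated ballistic transmission.
\mathrm{OD} = \frac{L}{\ell_s}, \qquad
\mathrm{OD}_{\mathrm{round\ trip}} = \frac{2L}{\ell_s}, \qquad
T_{\mathrm{ballistic}} = e^{-\mathrm{OD}}
```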

    Figure 4.3D reconstructed image through scattering media. Two SM samples are used (Table 1); the results of SM1 with optical depth 14.58ls for double passes are shown in the second row, and the results of SM2 with optical depth 8.08ls are shown in the third row. The time window can partially discard the redundant noise in the temporal scans and improve the reconstructed image contrast.

    After that, we inject amplified spontaneous emission (ASE) noise that temporally and spectrally overlaps with the signal, as shown in Fig. 1(a). This noise is generated by an erbium-doped fiber amplifier (EDFA). Both the signal and the ASE noise pass twice through SM2 before the upconversion detection. To ensure that the ASE noise has the same time-frequency and spatial profile as the signal, we use the same WDM filter bandwidth and spatial beam size. In this measurement, we use the washer in Fig. 3(b1) as the target object. Figure 5(a) shows the image of the signal mixed with the temporal noise before upconversion, taken by the IR camera. At such a low SNR (about −20 dB), it is impossible to reconstruct the image of the target object with direct detection. Yet, our system captures an image with up to 8×10⁷ converted photons per second using the EMCCD, as shown in Fig. 5(b). With time windowing, the noise is effectively suppressed, and the reconstructed washer image shows a 16 dB improvement in SNR. Figure 5(c) shows the reconstructed 3D image of the target object obtained by temporally scanning the pump pulses. The exposure time of the EMCCD is the same as in the previous measurements (1 s). This result clearly demonstrates the noise-rejection advantage of our system.
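
    The SNR figures above can be estimated in the usual way from mean signal and background counts in the recorded frames; the snippet below (our illustration, with hypothetical region-of-interest masks not taken from the published analysis) shows one such estimate in dB and how a windowing improvement would be quantified.

```python
# Illustrative SNR estimate in dB (our sketch; the ROI masks are hypothetical
# and not part of the published analysis).
import numpy as np

def snr_db(image: np.ndarray, signal_mask: np.ndarray,
           background_mask: np.ndarray) -> float:
    """10*log10 of mean counts in the signal ROI over mean background counts."""
    s = image[signal_mask].mean()
    b = image[background_mask].mean()
    return 10.0 * np.log10(s / b)

# Usage (pseudodata): compare a raw frame with a time-windowed reconstruction.
# improvement_db = snr_db(windowed, roi, bg) - snr_db(raw, roi, bg)
```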

    Figure 5. 3D image reconstruction through added noise in the time-frequency and spatial domains. (a) Signal image before upconversion captured using the IR camera, which is reflected from the target object and mixed with time-frequency and spatial noise. (b) Upconverted SF image at a certain arrival time, captured using the EMCCD. (c) Reconstructed 3D image.

    We now briefly discuss the effect of the EMCCD exposure time on the reconstructed 3D images. Figure 6 shows the results for three different exposure times: 0.1 s, 1 s, and 2 s. As the exposure time increases, the 3D image of the washer becomes more easily recognizable, with better contrast. For an exposure time of 2 s on the EMCCD, the total acquisition time for reconstructing a 3D image is about 60 s, which is considerably long. Yet, since this improvement comes from the increased number of photons collected by the EMCCD, one can instead increase the nonlinear conversion efficiency or use detector arrays with higher quantum efficiency to enhance the image contrast while reducing the acquisition time.

    Figure 6.Reconstructed 3D image of washer for a series of exposure times: (a) 0.1 s, (b) 1 s, and (c) 2 s.

    Thus far, we have used highly reflective metal objects as our testing targets. To further assess the capability of our photon-sensitive 3D imager for general objects, we now switch to a target with a diffusive surface, shown in Fig. 7(a). This diffusive target, the digit "3," was made in-house from PLA plastic filament using a 3D printer. For the 3D reconstruction of this object, we set the EMCCD exposure time to 2 s. Figure 7(b) shows the reconstructed 3D image without any scattering media or ASE noise; the 3D image is clearly recovered. When we add the same scattering media and ASE noise as used in Fig. 5, the performance of our 3D imager degrades. Nonetheless, we are still able to recover the object by properly choosing the temporal window. Figure 7(c) presents the result with the full time scan, while Fig. 7(d) gives the data after carving out the temporal window. In these cases, we need to increase the EMCCD exposure time to 10 s to collect more photons. As seen, the unrecognizable 3D image in Fig. 7(c) is well recovered in Fig. 7(d) by properly tuning the temporal window.

    Figure 7.3D imaging measurement for the target object with diffusive surface. (a) Photo for target object. (b) Reconstructed image without scattering media. (c) Reconstructed image of the object with scattering media placed in front of it. (d) Post-processing image of (c).

    4. CONCLUSION

    We have experimentally demonstrated a high-performance 3D imager with photon-sensitive detection using optical frequency upconversion pumped by picosecond pulses. It achieves millimeter depth resolution and 140 μm spatial resolution, while effectively rejecting background noise from the ambient environment and obscurants. As such, the present technique could find applications in biomedical imaging and remote sensing under low-visibility conditions. On the other hand, the acquisition time of the current photon-sensitive imaging method is longer than what is needed for typical real-time target identification. This shortcoming is mainly due to the low conversion efficiency of the current nonlinear process, which can be increased by using a higher-power laser or a longer nonlinear crystal. Also, to improve the imaging sensitivity through different kinds of scattering materials, one could use spatially modulated pump beams to further improve the 3D image contrast, similar to what has been demonstrated in 2D imaging [54,55].

    References

    [1] K. H. Kim, J. A. Burns, J. J. Bernstein, G. N. Maguluri, B. H. Park, J. F. de Boer. In vivo 3D human vocal fold imaging with polarization sensitive optical coherence tomography and a MEMS scanning catheter. Opt. Express, 18, 14644-14653(2010).

    [2] G. Satat, B. Heshmat, D. Raviv, R. Raskar. All photons imaging through volumetric scattering. Sci. Rep., 6, 33946(2016).

    [3] J. Shi, T. T. W. Wong, Y. He, L. Li, R. Zhang, C. S. Yung, J. Hwang, K. Maslov, L. V. Wang. High-resolution, high-contrast mid-infrared imaging of fresh biological samples with ultraviolet-localized photoacoustic microscopy. Nat. Photonics, 13, 609-615(2019).

    [4] K. N. Liou, S.-C. Ou, Y. Takano, J. Cetola. Remote sensing of three-dimensional cirrus clouds from satellites: application to continuous-wave laser atmospheric transmission and backscattering. Appl. Opt., 45, 6849-6859(2006).

    [5] A. McCarthy, N. J. Krichel, N. R. Gemmell, X. Ren, M. G. Tanner, S. N. Dorenbos, V. Zwiller, R. H. Hadfield, G. S. Buller. Kilometer-range, high resolution depth imaging via 1560 nm wavelength single-photon detection. Opt. Express, 21, 8904-8915(2013).

    [6] A. Kirmani, D. Venkatraman, D. Shin, A. Colaço, F. N. C. Wong, J. H. Shapiro, V. K. Goyal. First-photon imaging. Science, 343, 58-61(2014).

    [7] P. Rehain, Y. M. Sua, S. Zhu, I. Dickson, B. Muthuswamy, J. Ramanathan, A. Shahverdi, Y.-P. Huang. Noise-tolerant single photon sensitive three-dimensional imager. Nat. Commun., 11, 921(2020).

    [8] J. Xue, Q. Zhang, C. Li, W. Lang, M. Wang, Y. Hu. 3D face profilometry based on galvanometer scanner with infrared fringe projection in high speed. Appl. Sci., 9, 1458(2019).

    [9] P. Mariani, I. Quincoces, K. H. Haugholt, Y. Chardard, A. W. Visser, C. Yates, G. Piccinno, G. Reali, P. Risholm, J. T. Thielemann. Range-gated imaging system for underwater monitoring in ocean environment. Sustainability, 11, 162(2019).

    [10] C. Ren, G. Cong, L. Lei, H. Jianwen, D. Lei. A 3D space-ground atmospheric observation system. Optical Sensors and Sensing Congress (ES, FTS, HISE, Sensors), FTu2B.3(2019).

    [11] Y. Ko, S. Yi. Development of color 3D scanner using laser structured-light imaging method. Curr. Opt. Photon., 2, 554-562(2018).

    [12] M. Imaki, T. Kobayashi. Infrared frequency upconverter for high-sensitivity imaging of gas plumes. Opt. Lett., 32, 1923-1925(2007).

    [13] Y.-P. Tseng, P. Bouzy, C. Pedersen, N. Stone, P. Tidemand-Lichtenberg. Upconversion raster scanning microscope for long-wavelength infrared imaging of breast cancer microcalcifications. Biomed. Opt. Express, 9, 4979-4987(2018).

    [14] C. L. Heike, K. Upson, E. Stuhaug, S. M. Weinberg. 3D digital stereophotogrammetry: a practical guide to facial image acquisition. Head Face Med., 6, 18(2010).

    [15] J. S. Massa, G. S. Buller, A. C. Walker, S. Cova, M. Umasuthan, A. M. Wallace. Time-of-flight optical ranging system based on time-correlated single-photon counting. Appl. Opt., 37, 7298-7304(1998).

    [16] R. Tobin, A. Halimi, A. McCarthy, P. J. Soan, G. S. Buller. Robust real-time 3D imaging of moving scenes through atmospheric obscurant using single-photon LiDAR. Sci. Rep., 11, 11236(2021).

    [17] J. Tachella, Y. Altmann, N. Mellado, A. McCarthy, R. Tobin, G. S. Buller, J.-Y. Tourneret, S. McLaughlin. Real-time 3D reconstruction from single-photon lidar data using plug-and-play point cloud denoisers. Nat. Commun., 10, 4984(2019).

    [18] H. Amrania, G. Antonacci, C.-H. Chan, L. Drummond, W. R. Otto, N. A. Wright, C. Phillips. Digistain: a digital staining instrument for histopathology. Opt. Express, 20, 7290-7299(2012).

    [19] J. Nallala, O. Piot, M.-D. Diebold, C. Gobinet, O. Bouché, M. Manfait, G. D. Sockalingum. Infrared and Raman imaging for characterizing complex biological materials: a comparative morpho-spectroscopic study of colon tissue. Appl. Spectrosc., 68, 57-68(2014).

    [20] J. S. Dam, P. Tidemand-Lichtenberg, C. Pedersen. Room-temperature mid-infrared single-photon spectral imaging. Nat. Photonics, 6, 788-793(2012).

    [21] S.-K. Liu, C. Yang, S.-L. Liu, Z.-Y. Zhou, Y. Li, Y.-H. Li, Z.-H. Xu, G.-C. Guo, B.-S. Shi. Up-conversion imaging processing with field-of-view and edge enhancement. Phys. Rev. Appl., 11, 044013(2019).

    [22] A. Rogalski. Infrared detectors: an overview. Infrared Phys. Technol., 43, 187-210(2002).

    [23] G. W. Faris, M. Banks. Upconverting time gate for imaging through highly scattering media. Opt. Lett., 19, 1813-1815(1994).

    [24] A. Barh, P. J. Rodrigo, L. Meng, C. Pedersen, P. Tidemand-Lichtenberg. Parametric upconversion imaging and its applications. Adv. Opt. Photon., 11, 952-1019(2019).

    [25] Q. Zhou, K. Huang, H. Pan, E. Wu, H. Zeng. Ultrasensitive mid-infrared up-conversion imaging at few-photon level. Appl. Phys. Lett., 102, 241110(2013).

    [26] R. Demur, R. Garioud, A. Grisard, E. Lallier, L. Leviandier, L. Morvan, N. Treps, C. Fabre. Near-infrared to visible upconversion imaging using a broadband pump laser. Opt. Express, 26, 13252-13263(2018).

    [27] S. Junaid, S. C. Kumar, M. Mathez, M. Hermes, N. Stone, N. Shepherd, M. Ebrahim-Zadeh, P. Tidemand-Lichtenberg, C. Pedersen. Video-rate, mid-infrared hyperspectral upconversion imaging. Optica, 6, 702-708(2019).

    [28] Y. Wang, J. Fang, T. Zheng, Y. Liang, Q. Hao, E. Wu, M. Yan, K. Huang, H. Zeng. Mid-infrared single-photon edge enhanced imaging based on nonlinear vortex filtering. Laser Photon. Rev., 15, 2100189(2021).

    [29] K. Huang, Y. Wang, J. Fang, W. Kang, Y. Sun, Y. Liang, Q. Hao, M. Yan, H. Zeng. Mid-infrared photon counting and resolving via efficient frequency upconversion. Photon. Res., 9, 259-265(2021).

    [30] S.-K. Choi, M. Vasilyev, P. Kumar. Noiseless optical amplification of images. Phys. Rev. Lett., 83, 1938-1941(1999).

    [31] G. Frascella, R. V. Zakharov, O. V. Tikhonova, M. V. Chekhova. Multimode optical parametric amplification in the phase-sensitive regime. Opt. Lett., 46, 2364-2367(2021).

    [32] D. Knez, A. M. Hanninen, R. C. Prince, E. O. Potma, D. A. Fishman. Infrared chemical imaging through non-degenerate two-photon absorption in silicon-based cameras. Light Sci. Appl., 9, 125(2020).

    [33] W. Liu, Z. Zhou, L. Chen, X. Luo, Y. Liu, X. Chen, W. Wan. Imaging through dynamical scattering media by two-photon absorption detectors. Opt. Express, 29, 29972-29981(2021).

    [34] E. O. Potma, D. Knez, Y. Chen, Y. Davydova, A. Durkin, A. Fast, M. Balu, B. Norton-Baker, R. W. Martin, T. Baldacchini, D. A. Fishman. Rapid chemically selective 3D imaging in the mid-infrared. Optica, 8, 995-1002(2021).

    [35] A. V. Paterova, S. M. Maniam, H. Yang, G. Grenci, L. A. Krivitsky. Hyperspectral infrared microscopy with visible light. Sci. Adv., 6, eabd0460(2020).

    [36] M. Gilaberte Basset, F. Setzpfandt, F. Steinlechner, E. Beckert, T. Pertsch, M. Gräfe. Perspectives for applications of quantum imaging. Laser Photon. Rev., 13, 1900097(2019).

    [37] J. Huang, P. Kumar. Observation of quantum frequency conversion. Phys. Rev. Lett., 68, 2153-2156(1992).

    [38] Y. Tanaka, T. Kato, M. Uchida, A. Asahara, K. Minoshima. High-resolution no-scanning 3D image detection using sum-frequency generation of chirped optical frequency combs. CLEO Pacific Rim Conference, Th1C.5(2018).

    [39] C. Dunsby, P. M. W. French. Techniques for depth-resolved imaging through turbid media including coherence-gated imaging. J. Phys. D, 36, R207-R227(2003).

    [40] T. Wu, J. Dong, X. Shao, S. Gigan. Imaging through a thin scattering layer and jointly retrieving the point-spread-function using phase-diversity. Opt. Express, 25, 27182-27194(2017).

    [41] Y. Shi, Y. Liu, W. Sheng, J. Wang, T. Wu. Speckle rotation decorrelation based single-shot video through scattering media. Opt. Express, 27, 14567-14576(2019).

    [42] Y. Shin, S.-W. Nam, C.-K. An, E. Powers. Design of a time-frequency domain matched filter for detection of non-stationary signals. IEEE International Conference on Acoustics, Speech, and Signal Processing, 6, 3585-3588(2001).

    [43] A. Shahverdi, Y. M. Sua, L. Tumeh, Y.-P. Huang. Quantum parametric mode sorting: beating the time-frequency filtering. Sci. Rep., 7, 6495(2017).

    [44] A. Shahverdi, Y. M. Sua, I. Dickson, M. Garikapati, Y.-P. Huang. Mode selective up-conversion detection for lidar applications. Opt. Express, 26, 15914-15923(2018).

    [45] H. Zhang, S. Kumar, Y.-P. Huang. Mode selective up-conversion detection with turbulence. Sci. Rep., 9, 17481(2019).

    [46] J. Cho, S. Kang, B. Lee, J. Moon, Y.-S. Lim, M. Jang, W. Choi. Time-resolved detection of early-arriving ballistic waves in a quasi-diffusive regime. Opt. Express, 29, 35640-35650(2021).

    [47] H. Yılmaz, M. Kühmayer, C. W. Hsu, S. Rotter, H. Cao. Customizing the angular memory effect for scattering media. Phys. Rev. X, 11, 031010(2021).

    [48] J. Radford, A. Lyons, F. Tonolini, D. Faccio. Role of late photons in diffuse optical imaging. Opt. Express, 28, 29486-29495(2020).

    [49] O. Katz, P. Heidmann, M. Fink, S. Gigan. Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations. Nat. Photonics, 8, 784-790(2014).

    [50] S. Maruca, P. Rehain, Y. M. Sua, S. Zhu, Y. Huang. Non-invasive single photon imaging through strongly scattering media. Opt. Express, 29, 9981-9990(2021).

    [51] Z.-P. Li, X. Huang, P.-Y. Jiang, Y. Hong, C. Yu, Y. Cao, J. Zhang, F. Xu, J.-W. Pan. Super-resolution single-photon imaging at 8.2 kilometers. Opt. Express, 28, 4076-4087(2020).

    [52] G. Brida, M. Genovese, I. Ruo Berchera. Experimental realization of sub-shot-noise quantum imaging. Nat. Photonics, 4, 227-230(2010).

    [53] E. D. Lopaeva, I. Ruo Berchera, I. P. Degiovanni, S. Olivares, G. Brida, M. Genovese. Experimental realization of quantum illumination. Phys. Rev. Lett., 110, 153603(2013).

    [54] X. Qiu, F. Li, W. Zhang, Z. Zhu, L. Chen. Spiral phase contrast imaging in nonlinear optics: seeing phase objects using invisible illumination. Optica, 5, 208-212(2018).

    [55] Z. Li, Z. Yu, H. Hui, H. Li, T. Zhong, H. Liu, P. Lai. Edge enhancement through scattering media enabled by optical wavefront shaping. Photon. Res., 8, 954-962(2020).
