Image Processing and Image Analysis | 16 Article(s)
Positive influence of the scattering medium on reflective ghost imaging
Qin Fu, Yanfeng Bai, Xianwei Huang, Suqin Nan, Peiyi Xie, and Xiquan Fu
The scattering medium is usually thought to have a negative effect on the imaging process. In this paper, it is shown that the imaging quality of reflective ghost imaging (GI) in a scattering medium can be improved effectively when the binary method is used. Experimental and numerical results demonstrate that the scattering medium itself is the cause of this phenomenon, i.e., the scattering medium has a positive effect on the imaging quality of reflective GI. In this process, the scattering medium acts as random noise, which yields an obvious improvement in the imaging quality of binary ghost imaging.
Photonics Research
Publication Date: Nov. 27, 2019
Vol. 7, Issue 12, 12001468 (2019)
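The reconstruction described above is a second-order intensity correlation between the bucket (single-pixel) signal and the reference speckle patterns; the binary method thresholds these quantities before correlating. A minimal NumPy sketch of this idea is given below, using a simple thresholding-at-the-mean scheme as an illustration rather than the authors' exact binarization:

```python
import numpy as np

def ghost_image(ref_patterns, bucket, binarize=False):
    """Correlation-based ghost-imaging reconstruction.
    ref_patterns: (M, H, W) reference speckle intensities;
    bucket: (M,) single-pixel (bucket) signals from the object arm.
    With binarize=True, both quantities are thresholded at their means
    (an illustrative form of the binary method, not the paper's exact scheme)."""
    I = ref_patterns.astype(float)
    B = bucket.astype(float)
    if binarize:
        I = np.where(I >= I.mean(axis=(1, 2), keepdims=True), 1.0, -1.0)
        B = np.where(B >= B.mean(), 1.0, -1.0)
    # second-order correlation: <B * I> - <B><I>
    return np.tensordot(B - B.mean(), I - I.mean(axis=0), axes=1) / len(B)
```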
Learning-based phase imaging using a low-bit-depth pattern
Zhenyu Zhou, Jun Xia, Jun Wu, Chenliang Chang, Xi Ye, Shuguang Li, Bintao Du, Hao Zhang, and Guodong Tong
Phase imaging always contends with the problem of phase invisibility when capturing objects with existing light sensors. Most conventional approaches, however, require multiplane full-intensity measurements, an iterative propagation process, or a reference beam. In this paper, we present an end-to-end compressible phase imaging method based on deep neural networks, which can perform phase estimation using only binary measurements. A thin diffuser placed in front of the image sensor acts as a preprocessor that implicitly encodes the incoming wavefront information into the distortion and local variation of the generated speckles. Through the trained network, the phase profile of the object can be extracted from the discrete grains distributed in the low-bit-depth pattern. Our experiments demonstrate faithful reconstruction of reasonable quality from a single binary pattern and verify the high redundancy of the information in the intensity measurement for phase recovery. In addition to its efficiency and simplicity compared with currently available imaging methods, our model provides significant compressibility of the imaging data and can therefore facilitate low-cost detection and efficient data transmission.
Photonics Research
Publication Date: Sep. 28, 2020
Vol. 8, Issue 10, 10001624 (2020)
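As a rough illustration of the learning-based mapping described above, the sketch below defines a small convolutional network (PyTorch) that regresses a phase map from a single-channel binary speckle measurement. The architecture, image sizes, and training data here are placeholders, not the authors' network or dataset:

```python
import math
import torch
import torch.nn as nn

class SpeckleToPhase(nn.Module):
    """Tiny CNN mapping a binary speckle pattern to a phase map (illustrative)."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x):          # x: (B, 1, H, W) binary measurements
        return self.net(x)         # predicted phase, (B, 1, H, W)

model = SpeckleToPhase()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Hypothetical training step on paired (binary speckle, ground-truth phase) data.
binary_speckle = torch.randint(0, 2, (4, 1, 128, 128)).float()
true_phase = torch.rand(4, 1, 128, 128) * 2 * math.pi
opt.zero_grad()
loss = loss_fn(model(binary_speckle), true_phase)
loss.backward()
opt.step()
```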
Wide-field ophthalmic space-division multiplexing optical coherence tomography
Jason Jerwick, Yongyang Huang, Zhao Dong, Adrienne Slaudades, Alexander J. Brucker, and Chao Zhou
High-speed ophthalmic optical coherence tomography (OCT) systems are of interest because they allow rapid, motion-free, and wide-field retinal imaging. Space-division multiplexing optical coherence tomography (SDM-OCT) is a high-speed imaging technology that takes advantage of the long coherence length of microelectromechanical vertical-cavity surface-emitting laser sources to multiplex multiple images along a single imaging depth. We demonstrate wide-field retinal OCT imaging acquired at an effective rate of 800,000 A-scans/s, with volumetric images covering up to 12.5 mm × 7.4 mm on the retina and captured in less than 1 s. A clinical feasibility study was conducted to compare the ophthalmic SDM-OCT with commercial OCT systems, illustrating the high-speed capability of SDM-OCT in a clinical setting.
Photonics Research
Publication Date: Mar. 31, 2020
Vol. 8, Issue 4, 04000539 (2020)
3D Hessian deconvolution of thick light-sheet z-stacks for high-contrast and high-SNR volumetric imaging
Zhe Zhang, Dongzhou Gou, Fan Feng, Ruyi Zheng, Ke Du, Hongrun Yang, Guangyi Zhang, Huitao Zhang, Louis Tao, Liangyi Chen, and Heng Mao
Owing to its optical sectioning capability and low phototoxicity, z-stacking light-sheet microscopy has been the tool of choice for in vivo imaging of the zebrafish brain. To image the zebrafish brain with a large field of view, the thickness of the Gaussian beam inevitably becomes several times greater than the system depth of field (DOF), so fluorescence from outside the DOF is also collected, blurring the image. In this paper, we propose a 3D deblurring method that redistributes the measured intensity of each pixel in a light-sheet image to in situ voxels by 3D deconvolution. By introducing a Hessian regularization term to maintain the continuity of the neuron distribution and using a modified stripe-removal algorithm, the reconstructed z-stack images exhibit high contrast and a high signal-to-noise ratio. These characteristics facilitate subsequent processing such as 3D neuron registration, segmentation, and recognition.
Photonics Research
Publication Date: Jun. 01, 2020
Vol. 8, Issue 6, 06001011 (2020)
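The core idea above, redistributing out-of-focus intensity by deconvolution while a Hessian penalty keeps neuronal structures continuous, can be sketched as a regularized least-squares problem. The NumPy fragment below minimizes a data-fidelity term plus a simple squared second-difference (Hessian-like) penalty by gradient descent; it is an illustrative baseline under assumed periodic boundaries, not the paper's full 3D light-sheet model or its stripe-removal step:

```python
import numpy as np

def hessian_penalty_grad(x):
    """Gradient of a squared second-difference penalty, axis by axis,
    with periodic boundaries (the second-difference operator is self-adjoint)."""
    g = np.zeros_like(x)
    for ax in range(x.ndim):
        d2 = np.roll(x, -1, ax) - 2 * x + np.roll(x, 1, ax)
        g += np.roll(d2, -1, ax) - 2 * d2 + np.roll(d2, 1, ax)
    return g

def deconvolve_hessian(y, psf, lam=0.01, n_iter=200, step=0.5):
    """Gradient-descent sketch: minimize ||psf * x - y||^2 + lam * ||D2 x||^2."""
    otf = np.fft.fftn(np.fft.ifftshift(psf), s=y.shape)
    x = y.copy()
    for _ in range(n_iter):
        residual = np.real(np.fft.ifftn(otf * np.fft.fftn(x))) - y
        grad = np.real(np.fft.ifftn(np.conj(otf) * np.fft.fftn(residual)))
        x -= step * (grad + lam * hessian_penalty_grad(x))
        x = np.maximum(x, 0)   # fluorescence is non-negative
    return x
```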
Blind position detection for large field-of-view scattering imaging
Xiaoyu Wang, Xin Jin, and Junqi Li
Prior-free imaging beyond the memory effect (ME) is critical to seeing through scattering media. However, methods proposed to exceed the ME range have relied on prior information about the imaging targets. Here, we propose blind target-position detection for large field-of-view scattering imaging. Exploiting only two multi-target near-field speckle patterns captured at different imaging distances, the unknown number and locations of the isolated imaging targets are blindly reconstructed via the proposed scaling-vector-based detection. Autocorrelations can then be calculated for the speckle regions centered on the derived positions via a low-cross-talk region allocation strategy. Combined with a modified phase retrieval algorithm, the complete scene of multiple targets exceeding the ME range can be reconstructed without any prior information. The effectiveness of the proposed algorithm is verified on a real scattering imaging system.
Photonics Research
Publication Date: May. 26, 2020
Vol. 8, Issue 6, 06000920 (2020)
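Within the memory-effect range, the autocorrelation of a near-field speckle region approximates the autocorrelation of the hidden target, from which the target can be recovered by phase retrieval. The sketch below shows that generic pipeline, a Wiener–Khinchin autocorrelation followed by a Fienup-style hybrid input–output loop with a non-negativity constraint; the paper's blind position detection and region allocation are not reproduced here:

```python
import numpy as np

def autocorrelation(img):
    """Autocorrelation via the Wiener-Khinchin theorem (mean removed)."""
    f = np.fft.fft2(img - img.mean())
    return np.fft.fftshift(np.real(np.fft.ifft2(np.abs(f) ** 2)))

def retrieve_from_autocorrelation(ac, n_iter=500, beta=0.9, rng=None):
    """Hybrid input-output sketch: recover a non-negative object whose
    Fourier magnitude is the square root of the autocorrelation's spectrum."""
    rng = np.random.default_rng(rng)
    mag = np.sqrt(np.maximum(np.real(np.fft.fft2(np.fft.ifftshift(ac))), 0))
    x = rng.random(ac.shape)
    for _ in range(n_iter):
        F = np.fft.fft2(x)
        x_new = np.real(np.fft.ifft2(mag * np.exp(1j * np.angle(F))))
        bad = x_new < 0                      # pixels violating non-negativity
        x = np.where(bad, x - beta * x_new, x_new)
    return x
```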
Edge enhancement through scattering media enabled by optical wavefront shaping
Zihao Li, Zhipeng Yu, Hui Hui, Huanhao Li, Tianting Zhong, Honglin Liu, and Puxiang Lai
Edge enhancement is a fundamental and important topic in imaging and image processing, as the perception of edges is key to identifying and comprehending the contents of an image. Edge enhancement can be performed in many ways, through hardware or computation. Existing optical methods, however, have been limited to free space or clear media; in scattering media such as biological tissue, light is multiply scattered, and information is scrambled into seemingly random speckles. Although desirable, edge enhancement is challenging in the presence of multiple scattering. In this work, we introduce an implementation of optical wavefront shaping that achieves efficient edge enhancement through scattering media in a two-step operation. The first step is to acquire a hologram after the scattering medium, in which information from the edge region is encoded accurately while that from the non-edge region is intentionally encoded with inadequate accuracy. The second step is to decode the edge information by time-reversing the scattered light. The capability is demonstrated experimentally, and the performance, as measured by the edge enhancement index (EI) and enhancement-to-noise ratio (ENR), can be controlled easily by tuning the beam ratio. EI and ENR can be enhanced by factors of ~8.5 and ~263, respectively. To the best of our knowledge, this is the first demonstration that edge information of a spatial pattern can be extracted through strong turbidity, which can enrich the comprehension of images obtained in complex environments.
Photonics Research
Publication Date: May. 28, 2020
Vol. 8, Issue 6, 06000954 (2020)
Speckle spatial correlations aiding optical transmission matrix retrieval: the smoothed Gerchberg–Saxton single-iteration algorithm
Daniele Ancora, Lorenzo Dominici, Antonio Gianfrate, Paolo Cazzato, Milena De Giorgi, Dario Ballarini, Daniele Sanvitto, and Luca Leuzzi
The estimation of the transmission matrix of a disordered medium is a challenging problem in disordered photonics. Usually, its reconstruction relies on a complex inversion that aims to connect a fully controlled input to the deterministic interference of the light field scrambled by the device. At present, iterative phase retrieval protocols provide the fastest reconstruction frameworks, converging in a few tens of iterations. Exploiting the knowledge of speckle correlations, we construct a new phase retrieval algorithm that reduces the computational cost to a single iteration. Besides being faster, our method is practical in that it requires fewer measurements than state-of-the-art protocols. By reducing computation time by an order of magnitude, our result is a step toward real-time optical imaging that exploits disordered devices.
Photonics Research
Publication Date: Sep. 27, 2022
Vol. 10, Issue 10, 2349 (2022)
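For context, a standard Gerchberg–Saxton-type transmission-matrix retrieval alternates between a least-squares field estimate and re-imposing the measured output amplitudes; the paper's contribution is a smoothed, correlation-aware variant that converges in a single iteration. The sketch below implements only the conventional iterative baseline for one camera pixel (one TM row), with assumed known complex input patterns X and measured intensities y:

```python
import numpy as np

def gs_tm_row(X, y, n_iter=50, rng=None):
    """Estimate one transmission-matrix row t from known inputs X (M x N)
    and measured intensities y (M,) at a single output pixel, using a
    Gerchberg-Saxton-style alternating projection (conventional baseline)."""
    rng = np.random.default_rng(rng)
    amp = np.sqrt(y)
    # random initial output phases attached to the measured amplitudes
    a = amp * np.exp(1j * rng.uniform(0, 2 * np.pi, len(y)))
    X_pinv = np.linalg.pinv(X)                   # least-squares inverse, precomputed
    for _ in range(n_iter):
        t = X_pinv @ a                           # field-domain (least-squares) update
        a = amp * np.exp(1j * np.angle(X @ t))   # re-impose measured amplitudes
    return t
```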
Antibunching and superbunching photon correlations in pseudo-natural light
Zhiyuan Ye, Hai-Bo Wang, Jun Xiong, and Kaige Wang
Since Hanbury Brown and Twiss revealed the photon bunching effect of a thermal light source in 1956, almost all studies in correlation optics have been based on light’s intensity fluctuation, even though polarization fluctuation is a basic attribute of natural light. In this work, we unveil the polarization fluctuation and the corresponding photon correlations by proposing a new light source model, termed pseudo-natural light, that embodies both intensity and polarization fluctuations. Unexpectedly, strong antibunching and superbunching effects can be realized simultaneously in such a source, whose second-order correlation coefficient g(2) can be continuously modulated across 1. In particular, for a symmetric Bernoulli distribution of the polarization fluctuation, g(2) can in principle range from 0 to arbitrarily large values. In pseudo-natural light, while the bunching effects of both intensity and polarization fluctuations enhance the bunching to superbunching photon correlation, the antibunching correlation of the polarization fluctuation can also be extracted through a division operation in the experiment. The antibunching effect, and its combination with the bunching effect, will enable new applications in quantum imaging. As heuristic examples, we carry out high-quality positive and negative ghost imaging and devise high-efficiency polarization-sensitive and edge-enhanced imaging. This work therefore sheds light on the development of multiple and broad correlation functions for natural light.
Photonics Research
Publication Date: Feb. 22, 2022
Vol. 10, Issue 3, 03000668 (2022)
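The second-order correlation coefficient quoted above is g(2) = <I1 I2> / (<I1><I2>). The toy sketch below computes it for an assumed model source: thermal-like (exponential) intensity fluctuations, which alone give g(2) ≈ 2, combined with a random Bernoulli polarization split that sends each realization to only one of two detectors, driving the cross-correlation toward 0. It is meant only to illustrate how polarization fluctuation can produce an antibunching-like correlation, not to reproduce the paper's source:

```python
import numpy as np

def g2(i1, i2):
    """Normalized second-order correlation g(2) = <I1 I2> / (<I1><I2>)."""
    i1, i2 = np.asarray(i1, float), np.asarray(i2, float)
    return np.mean(i1 * i2) / (np.mean(i1) * np.mean(i2))

rng = np.random.default_rng(0)
intensity = rng.exponential(1.0, 100_000)             # thermal-like: g2 ~ 2 alone
split = rng.integers(0, 2, 100_000)                   # Bernoulli polarization flips
i1, i2 = intensity * split, intensity * (1 - split)   # anticorrelated detector split
print(g2(intensity, intensity), g2(i1, i2))           # ~ 2.0 and ~ 0.0
```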
Generalized framework for non-sinusoidal fringe analysis using deep learning
Shijie Feng, Chao Zuo, Liang Zhang, Wei Yin, and Qian Chen
Phase retrieval from fringe images is essential to many optical metrology applications. In the field of fringe projection profilometry, the phase is often obtained with systematic errors if the fringe pattern is not a perfect sinusoid. Several factors can account for non-sinusoidal fringe patterns, such as the non-linear input–output response (e.g., the gamma effect) of digital projectors, the residual harmonics in binary defocusing projection, and image saturation due to intense reflection. Traditionally, these problems are handled separately with different well-designed methods, which can be seen as “one-to-one” strategies. Inspired by recent successful artificial intelligence-based optical imaging applications, we propose a “one-to-many” deep learning technique that can analyze non-sinusoidal fringe images resulting from different non-sinusoidal factors, and even the coupling of these factors. We show for the first time, to the best of our knowledge, that a trained deep neural network can effectively suppress the phase errors caused by various kinds of non-sinusoidal patterns. Our work paves the way for robust and powerful learning-based fringe analysis approaches.
Photonics Research
Publication Date: May. 27, 2021
Vol. 9, Issue 6, 06001084 (2021)
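The systematic errors discussed above arise because the classical phase-shifting estimator assumes purely sinusoidal fringes; it is this estimator that the learning-based method is trained to outperform on distorted fringes. A minimal sketch of the classical N-step formula, assuming fringes I_n = A + B cos(phi + 2*pi*n/N), is:

```python
import numpy as np

def phase_from_fringes(frames):
    """Wrapped phase from N equally phase-shifted fringe images (N x H x W),
    assuming ideal sinusoidal fringes I_n = A + B*cos(phi + 2*pi*n/N)."""
    N = frames.shape[0]
    deltas = 2 * np.pi * np.arange(N) / N
    num = -np.tensordot(np.sin(deltas), frames, axes=1)  # -sum_n I_n sin(delta_n)
    den = np.tensordot(np.cos(deltas), frames, axes=1)   #  sum_n I_n cos(delta_n)
    return np.arctan2(num, den)                          # wrapped phase in (-pi, pi]
```

Harmonics introduced by gamma distortion, binary defocusing, or saturation violate the sinusoidal assumption and bias this arctangent, which is the error source the network learns to suppress.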
Fast and robust phase retrieval for masked coherent diffractive imaging
Li Song, and Edmund Y. Lam
Conventional phase retrieval algorithms for coherent diffractive imaging (CDI) require many iterations to deliver reasonable results, even when a known mask is used as a strong constraint in the imaging setup, an approach known as masked CDI. This paper proposes a fast and robust phase retrieval method for masked CDI based on the alternating direction method of multipliers (ADMM). We propose a plug-and-play ADMM to incorporate the prior knowledge of the mask, but note that commonly used denoisers are not directly suitable as regularizers for complex-valued latent images. Therefore, we develop a regularizer based on the structure tensor and the Harris corner detector. Compared with conventional phase retrieval methods, our technique achieves comparable reconstruction results in less time for masked CDI. Moreover, validation experiments on real in situ CDI data for both intensity and phase objects show that our approach is more than 100 times faster than the baseline method at reconstructing one complex-valued image, making it usable in challenging situations such as imaging dynamic objects. Furthermore, phase retrieval results for single diffraction patterns show the robustness of the proposed ADMM.
Photonics Research
Publication Date: Mar. 01, 2022
Vol. 10, Issue 3, 03000758 (2022)
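A generic plug-and-play ADMM loop for masked phase retrieval alternates a data-fidelity step that enforces the measured Fourier amplitude with a regularization step supplied by a denoiser, plus a dual update. The sketch below shows only that loop shape with a placeholder denoiser applied to the real and imaginary parts separately; the paper specifically notes that such off-the-shelf denoisers are unsuitable for complex-valued images and instead builds a structure-tensor and Harris-corner-based regularizer, which is not reproduced here:

```python
import numpy as np

def pnp_admm_masked_pr(y, mask, denoise, n_iter=100):
    """Plug-and-play ADMM sketch for masked phase retrieval:
    find a complex field x with |FFT(mask * x)|^2 ~ y, regularized in the
    image domain by the user-supplied `denoise` function (placeholder)."""
    amp = np.sqrt(np.maximum(y, 0))
    x = np.ones(y.shape, dtype=complex)
    z = x.copy()
    u = np.zeros_like(x)
    safe_mask = np.where(np.abs(mask) > 1e-8, mask, 1.0)  # avoid divide-by-zero
    for _ in range(n_iter):
        # x-update: project (z - u) onto the measured Fourier amplitude
        F = np.fft.fft2(mask * (z - u))
        x = np.fft.ifft2(amp * np.exp(1j * np.angle(F))) / safe_mask
        # z-update: plug-and-play regularization on real and imaginary parts
        w = x + u
        z = denoise(w.real) + 1j * denoise(w.imag)
        # dual-variable update
        u = u + x - z
    return z

# Example call with a trivial identity "denoiser":
# x_hat = pnp_admm_masked_pr(y, mask, denoise=lambda v: v)
```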