Imaging Systems, Microscopy, and Displays | 65 Article(s)
Frequency-multiplexing photon-counting multi-beam LiDAR
Tianxiang Zheng, Guangyue Shen, Zhaohui Li, Lei Yang, Haiyan Zhang, E Wu, and Guang Wu
We report a frequency-multiplexing method for multi-beam photon-counting light detection and ranging (LiDAR), where only one single-pixel single-photon detector is employed to simultaneously detect the multi-beam echoes. In this frequency-multiplexing multi-beam LiDAR, each beam comes from an independent laser source with a different repetition rate and an independent phase. As a result, the photon counts from different beams can be discriminated from one another owing to the strong correlation between the laser pulses and their respective echo photons. A 16-beam LiDAR system was demonstrated in three-dimensional laser imaging with 16 pulsed laser diodes at 850 nm and one single-photon detector based on a Si avalanche photodiode. This frequency-multiplexing method can greatly reduce the number of single-photon detectors in multi-beam LiDAR systems, which may be useful for low-cost and eye-safe LiDAR applications.
Photonics Research
Publication Date: Nov. 11, 2019
Vol. 7, Issue 12, 12001381 (2019)
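The demultiplexing idea in the abstract above, that echoes are phase-locked only to their own laser's pulse train, can be sketched by folding one detector's photon timestamps modulo each laser's repetition period: the matching beam's echoes pile up in a single bin while the other beams spread into a flat background. All parameters below (periods, delays, photon counts) are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: two pulsed lasers with incommensurate
# repetition periods, all echoes recorded by ONE single-photon detector.
periods = [1000.0, 1103.0]   # repetition periods, ns
delays = [137.0, 442.0]      # true round-trip times, ns (shorter than each period)
n_echoes = 2000

# Interleaved echo-photon timestamps from both beams
timestamps = np.concatenate([
    rng.integers(0, 5000, n_echoes) * periods[k] + delays[k]
    for k in range(2)
])

def demultiplex(timestamps, period, bin_ns=2.0):
    """Fold all timestamps modulo one laser's period: that laser's echoes
    pile up in a single bin, other beams spread into a flat background."""
    phases = np.mod(timestamps, period)
    hist, edges = np.histogram(phases, bins=int(period / bin_ns), range=(0.0, period))
    return edges[np.argmax(hist)]   # left edge of the peak bin

for k in range(2):
    est = demultiplex(timestamps, periods[k])
    print(f"beam {k}: true delay {delays[k]:.0f} ns, recovered ~{est:.0f} ns")
```

Folding works because only the matching beam is correlated with that period; every other beam behaves like uncorrelated background, which is exactly the correlation argument the abstract makes.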
Optofluidics in bio-imaging applications
Sihui Chen, Rui Hao, Yi Zhang, and Hui Yang
Bio-imaging generally refers to imaging techniques that acquire biological information from living organisms. The ability to detect, diagnose, and monitor pathological, physiological, and molecular dynamics is in great demand, while scaling down the observation angle, achieving precise alignment, fast actuation, and platform miniaturization have become key elements in next-generation optical imaging systems. Optofluidics, which merges optical and microfluidic technologies, is a relatively young research field that has drawn great attention over the last decade. Given its ability to manipulate both optical and fluidic functions/elements in the micro-/nanometer regime, optofluidics shows great potential in bio-imaging to deepen our understanding at the subcellular and/or molecular level. In this paper, we review the development of optofluidics in bio-imaging, from individual components to representative applications, in a modularized, systematic fashion. Further, we outline our expectations for the near future of the optofluidic imaging discipline.
Photonics Research
Publication Date: Apr. 17, 2019
Vol. 7, Issue 5, 05000532 (2019)
Optimal illumination scheme for isotropic quantitative differential phase contrast microscopy
Yao Fan, Jiasong Sun, Qian Chen, Xiangpeng Pan, Lei Tian, and Chao Zuo
Differential phase contrast (DPC) microscopy provides high-resolution quantitative phase distributions of thin transparent samples under multi-axis asymmetric illumination. Typically, illumination in DPC microscopic systems is designed with two-axis half-circle amplitude patterns, which, however, result in a non-isotropic phase contrast transfer function (PTF). Efforts have been made to achieve isotropic DPC by replacing the conventional half-circle illumination aperture with radially asymmetric patterns under three-axis illumination or gradient amplitude patterns under two-axis illumination. Nevertheless, the underlying theoretical mechanism of the isotropic PTF has not been explored, and thus the optimal illumination scheme cannot be determined. Furthermore, the frequency responses of the PTFs under these engineered illuminations have not been fully optimized, leading to suboptimal phase contrast and signal-to-noise ratio for phase reconstruction. In this paper, we provide a rigorous theoretical analysis of the necessary and sufficient conditions for DPC to achieve an isotropic PTF. In addition, we derive the optimal illumination scheme that maximizes the frequency response for both low and high frequencies (from 0 to 2NA_obj) while achieving a perfectly isotropic PTF with only two-axis intensity measurements. We present the derivation, implementation, simulation, and experimental results demonstrating the superiority of our method over existing illumination schemes in both phase reconstruction accuracy and noise robustness.
Photonics Research
Publication Date: Jul. 26, 2019
Vol. 7, Issue 8, 08000890 (2019)
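As background for the PTF optimization discussed above, the standard DPC reconstruction step can be sketched as a Tikhonov-regularized deconvolution: each per-axis DPC measurement is (in the linearized model) the phase filtered by that axis's PTF, and the phase is recovered by combining all axes in the Fourier domain. The antisymmetric toy PTFs below are assumptions standing in for the real pupil-derived transfer functions.

```python
import numpy as np

n = 64

# Toy antisymmetric PTFs along x and y (assumptions, not the paper's
# optimized transfer functions; real PTFs come from the source/pupil geometry)
f = np.fft.fftfreq(n)
fx, fy = np.meshgrid(f, f, indexing="ij")
H = [1j * np.sin(2 * np.pi * fx), 1j * np.sin(2 * np.pi * fy)]

# Ground-truth phase: a smooth Gaussian blob
yy, xx = np.mgrid[:n, :n]
phi = np.exp(-((xx - n / 2) ** 2 + (yy - n / 2) ** 2) / 50.0)

# Linearized forward model: each DPC image is the phase filtered by its PTF
meas = [np.fft.ifft2(Hj * np.fft.fft2(phi)).real for Hj in H]

# Tikhonov-regularized least-squares inversion combining both axes
reg = 1e-5
num = sum(np.conj(Hj) * np.fft.fft2(mj) for Hj, mj in zip(H, meas))
den = sum(np.abs(Hj) ** 2 for Hj in H) + reg
phi_rec = np.fft.ifft2(num / den).real  # recovers phi up to its lost mean (DC)
```

The denominator is where isotropy matters: if the summed |PTF|² is direction-dependent, some orientations are reconstructed with worse noise amplification than others, which is the imbalance the paper's optimal illumination scheme removes.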
High-speed dual-view band-limited illumination profilometry using temporally interlaced acquisition
Cheng Jiang, Patrick Kilcullen, Yingming Lai, Tsuneyuki Ozaki, and Jinyang Liang
We report dual-view band-limited illumination profilometry (BLIP) with temporally interlaced acquisition (TIA) for high-speed, three-dimensional (3D) imaging. Band-limited illumination based on a digital micromirror device enables sinusoidal fringe projection at up to 4.8 kHz. The fringe patterns are captured alternately by two high-speed cameras. A new algorithm, which robustly matches pixels in the acquired images, recovers the object's 3D shape. The resultant TIA–BLIP system enables 3D imaging at over 1000 frames per second over a field of view (FOV) of up to 180 mm × 130 mm (corresponding to 1180 × 860 pixels in the captured images). We demonstrated TIA–BLIP's performance by imaging various static and fast-moving 3D objects. TIA–BLIP was applied to imaging glass vibration induced by sound and glass breakage by a hammer. Compared to existing methods in multiview phase-shifting fringe projection profilometry, TIA–BLIP eliminates information redundancy in data acquisition, which improves the 3D imaging speed and the FOV. We envision TIA–BLIP being broadly implemented in diverse scientific studies and industrial applications.
Photonics Research
Publication Date: Oct. 30, 2020
Vol. 8, Issue 11, 11001808 (2020)
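TIA–BLIP builds on phase-shifting fringe projection. A minimal generic building block of such systems is four-step phase retrieval from sinusoidal fringes, sketched below; the paper's dual-camera pixel matching and depth calibration are not shown, and all numbers (fringe count, modulation, phase amplitude) are illustrative.

```python
import numpy as np

n = 128
x = np.arange(n)
phi_true = 0.5 * np.sin(2 * np.pi * x / n)   # toy phase encoding surface shape
carrier = 2 * np.pi * 8 * x / n              # 8 projected fringes across the field

# Four camera frames with pi/2 phase shifts, as in standard
# four-step phase-shifting profilometry
frames = [1.0 + 0.6 * np.cos(carrier + phi_true + k * np.pi / 2) for k in range(4)]

# Wrapped phase from the four-step formula, then carrier removal;
# np.angle(np.exp(1j * ...)) re-wraps the result into (-pi, pi]
wrapped = np.arctan2(frames[3] - frames[1], frames[0] - frames[2])
phi_rec = np.angle(np.exp(1j * (wrapped - carrier)))   # equals phi_true here
```

The four-step formula cancels both the background intensity (1.0) and the fringe modulation (0.6), which is why it is robust to illumination nonuniformity.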
Design and analysis of extended depth of focus metalenses for achromatic computational imaging
Luocheng Huang, James Whitehead, Shane Colburn, and Arka Majumdar
Metasurface optics have demonstrated vast potential for implementing traditional optical components in an ultracompact and lightweight form factor. Metasurfaces, however, suffer from severe chromatic aberrations, posing serious limitations on their practical use. Existing dispersion-engineering approaches for circumventing this are limited to small apertures and often entail multiple scatterers per unit cell with small feature sizes. Here, we present an alternative technique to mitigate chromatic aberration and demonstrate high-quality, full-color imaging using extended depth of focus (EDOF) metalenses and computational reconstruction. Previous EDOF metalenses have relied on cubic phase masks, whose image quality suffers from asymmetric artefacts. Here we demonstrate the use of rotationally symmetric masks, including logarithmic-aspherical and shifted-axicon masks, to mitigate this problem. Our work will inspire further development of achromatic metalenses beyond dispersion engineering and of hybrid optical–digital metasurface systems.
Photonics Research
Publication Date: Sep. 25, 2020
Vol. 8, Issue 10, 10001613 (2020)
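The distinction the abstract draws, that cubic masks are asymmetric while log-aspherical and shifted-axicon masks are rotationally symmetric, can be illustrated with toy phase profiles. The functional forms are the standard textbook ones and the coefficients are arbitrary assumptions, not the paper's designed values.

```python
import numpy as np

# Toy pupil grid (odd size so the center pixel sits at the origin)
n = 201
c = np.linspace(-1, 1, n)
X, Y = np.meshgrid(c, c)
R = np.hypot(X, Y)

# Three EDOF phase-mask profiles (illustrative coefficients)
cubic = 20.0 * (X**3 + Y**3)                  # cubic mask: not rotationally symmetric
axicon = 40.0 * np.abs(R - 0.5)               # shifted axicon: depends only on R
log_asphere = 30.0 * R**2 * np.log(R + 1e-6)  # log-aspherical: depends only on R

def is_rot_symmetric(mask):
    # Invariance under a 90-degree rotation: a necessary (not sufficient)
    # check for rotational symmetry on a square sampled grid
    return np.allclose(mask, np.rot90(mask), atol=1e-9)
```

Because the cubic mask's point spread function is asymmetric, its deconvolved images show directional artefacts; the two R-only masks avoid that by construction, which is the motivation stated in the abstract.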
Simultaneous dual-contrast three-dimensional imaging in live cells via optical diffraction tomography and fluorescence
Chen Liu, Michael Malek, Ivan Poon, Lanzhou Jiang, Arif M. Siddiquee, Colin J. R. Sheppard, Ann Roberts, Harry Quiney, Douguo Zhang, Xiaocong Yuan, Jiao Lin, Christian Depeursinge, Pierre Marquet, and Shan Shan Kou
We report a dual-contrast method of simultaneously measuring and visualizing the volumetric structural information in live biological samples in three-dimensional (3D) space. By introducing a direct way of deriving the 3D scattering potential of the object from the synthesized angular spectra, we obtain the quantitative subcellular morphology in refractive indices (RIs) side-by-side with its fluorescence signals. The additional contrast in RI complements the fluorescent signal, providing additional information of the targeted zones. The simultaneous dual-contrast 3D mechanism unveiled interesting information inaccessible with previous methods, as we demonstrated in the human immune cell (T cell) experiment. Further validation has been demonstrated using a Monte Carlo model.
Photonics Research
Publication Date: Aug. 14, 2019
Vol. 7, Issue 9, 09001042 (2019)
Cross-cumulant enhanced radiality nanoscopy for multicolor superresolution subcellular imaging
Zhiping Zeng, Jing Ma, and Canhua Xu
Fluorescence fluctuation-based superresolution techniques can achieve fast superresolution imaging on a cost-effective wide-field platform at a low light level with reduced phototoxicity. However, the current methods exhibit certain imaging deficiencies that misinterpret nanoscale features reconstructed from fluctuating image sequences, thus degrading the superresolution imaging quality and performance. Here we propose cross-cumulant enhanced radiality nanoscopy (CERN), which employs cross-cumulant analysis in tandem with radiality processing. We demonstrated that CERN can significantly improve the spatial resolution at a low light level while eliminating the misinterpretations of nanoscale features of the existing fluctuation-based superresolution methods. In the experiment, we further verified the superior performance of CERN over the current methods through performing multicolor superresolution imaging of subcellular microtubule networks and clathrin-coated pits as well as the high-precision reconstruction of densely packed RNA transcripts.
Photonics Research
Publication Date: May. 26, 2020
Vol. 8, Issue 6, 06000893 (2020)
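A minimal sketch of the cross-cumulant idea underlying fluctuation-based methods such as CERN: at second order, the cross-cumulant of two pixel traces reduces to their covariance, which retains correlated emitter blinking seen by adjacent pixels and rejects uncorrelated readout noise. The radiality step is not shown, and all traces below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

T = 20000
blink = rng.random(T) < 0.3                          # one emitter's on/off trace
pix_a = 1.0 * blink + 0.2 * rng.standard_normal(T)   # same emitter seen by...
pix_b = 0.8 * blink + 0.2 * rng.standard_normal(T)   # ...two adjacent pixels
pix_c = 0.2 * rng.standard_normal(T)                 # noise-only pixel

def cross_cumulant2(a, b):
    """Second-order cross-cumulant of two intensity traces = cross-covariance."""
    return np.mean((a - a.mean()) * (b - b.mean()))

signal = cross_cumulant2(pix_a, pix_b)  # large: correlated blinking survives
noise = cross_cumulant2(pix_a, pix_c)   # near zero: uncorrelated noise cancels
```

Using *cross*-cumulants between different pixels, rather than the auto-cumulant of one pixel, is what removes the noise's own variance from the superresolution signal, since independent noise has zero cross-covariance.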
Super-resolution compressive spectral imaging via two-tone adaptive coding
Chang Xu, Tingfa Xu, Ge Yan, Xu Ma, Yuhan Zhang, Xi Wang, Feng Zhao, and Gonzalo R. Arce
Coded apertures with random patterns are extensively used in compressive spectral imagers to sample the incident scene in the image plane. Random samplings, however, are inadequate to capture the structural characteristics of the underlying signal, owing to the sparse and structured nature of the sensing matrices in spectral imagers. This paper proposes a new approach to super-resolution compressive spectral imaging via adaptive coding. In this method, coded apertures are optimally designed based on a two-tone adaptive compressive sensing (CS) framework to improve the reconstruction resolution and accuracy of the hyperspectral imager. A liquid crystal tunable filter (LCTF) scans the incident scene in the spectral domain to successively select different spectral channels. The output of the LCTF is modulated by the adaptive coded aperture patterns and then projected onto a low-resolution detector array. The coded aperture patterns are implemented by a digital micromirror device (DMD) with higher resolution than that of the detector. Owing to the strong correlation across the spectra, the recovered images from previous spectral channels can be used as a priori information to design the adaptive coded apertures for sensing subsequent spectral channels. In particular, the coded apertures are constructed from the a priori spectral images via a two-tone hard thresholding operation that extracts the structural characteristics of the bright and dark regions, respectively, in the underlying scenes. Super-resolution images within a spectral channel can be recovered from a few snapshots of low-resolution measurements. Since no additional side information about the spectral scene is needed, the proposed method does not increase the system complexity. Based on the mutual-coherence criterion, the proposed adaptive CS framework is proven theoretically to improve the sensing efficiency for the spectral images. Simulations and experiments are provided to demonstrate and assess the proposed adaptive coding method. Finally, the underlying concepts are extended to a multi-channel method that compresses the hyperspectral data cube in the spatial and spectral domains simultaneously.
Photonics Research
Publication Date: Feb. 28, 2020
Vol. 8, Issue 3, 03000395 (2020)
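The two-tone hard thresholding step can be sketched as follows: a previously recovered spectral image is thresholded from both ends so that the structure of the bright regions and of the dark regions is kept, and their union defines the open mirrors of the next coded aperture. The quantile thresholds and the prior image are illustrative assumptions, not the paper's calibrated design.

```python
import numpy as np

def two_tone_aperture(prior_img, lo_q=0.2, hi_q=0.8):
    """Build the next coded aperture from an a priori spectral image by
    two-tone hard thresholding: keep bright-region and dark-region structure."""
    lo, hi = np.quantile(prior_img, [lo_q, hi_q])
    bright = prior_img >= hi           # structure of bright regions
    dark = prior_img <= lo             # structure of dark regions
    return (bright | dark).astype(np.uint8)   # 1 = open DMD mirror, 0 = closed

rng = np.random.default_rng(3)
prior = rng.random((8, 8))             # stand-in for a recovered spectral channel
aperture = two_tone_aperture(prior)
```

Because the aperture is derived from the previously recovered channel, no extra side information has to be measured, which matches the abstract's claim that system complexity does not increase.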
Subwavelength imaging and detection using adjustable and movable droplet microlenses
Xixi Chen, Tianli Wu, Zhiyong Gong, Yuchao Li, Yao Zhang, and Baojun Li
We developed adjustable and movable droplet microlenses consisting of a liquid with a high refractive index. The microlenses were prepared via ultrasonic shaking in deionized water, and the diameter of the microlenses ranged from 1 to 50 μm. By stretching the microlenses, the focal length can be adjusted from 13 to 25 μm. With the assistance of an optical tweezer, controllable assembly and movement of microlens arrays were also realized. The results showed that an imaging system combined with droplet microlenses could image 80 nm beads under white light illumination. Using the droplet microlenses, fluorescence emission at 550 nm from CdSe@ZnS quantum dots was efficiently excited and collected. Moreover, Raman scattering signals from a silicon wafer were enhanced by ~19 times. The presented droplet microlenses may offer new opportunities for flexible liquid devices in subwavelength imaging and detection.
Photonics Research
Publication Date: Feb. 05, 2020
Vol. 8, Issue 3, 03000225 (2020)
Computational 4D imaging of light-in-flight with relativistic effects
Yue Zheng, Ming-Jie Sun, Zhi-Guang Wang, and Daniele Faccio
Light-in-flight imaging enables the visualization and characterization of light propagation, which provides essential information for the study of the fundamental phenomena of light. A camera images an object by sensing the light emitted or reflected from it; when a light pulse itself is to be imaged, however, relativistic effects must be accounted for to acquire accurate space–time information about the pulse, because the distance the pulse travels between consecutive frames is on the same scale as the distance the scattered photons travel from the pulse to the camera. Here, we propose a computational light-in-flight imaging scheme that records the projection of light-in-flight onto a transverse x–y plane using a single-photon avalanche diode camera, calculates the z and t information of light-in-flight via an optical model, and thereby reconstructs accurate (x, y, z, t) four-dimensional information. The proposed scheme compensates for the temporal distortion in the recorded arrival times to retrieve the accurate time of a light pulse with respect to its corresponding spatial location, without performing any extra measurements. Experimental light-in-flight imaging in a three-dimensional space of 375 mm × 75 mm × 50 mm is performed, showing a position error of 1.75 mm and a time error of 3.84 ps despite the camera's 55 ps time resolution, demonstrating the feasibility of the proposed scheme. This work provides a method to expand the recording and measurement of repeatable transient events with extremely weak scattering to four dimensions and can be applied to the observation of optical phenomena with ps temporal resolution.
Photonics Research
Publication Date: Jun. 03, 2020
Vol. 8, Issue 7, 07001072 (2020)
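The temporal-distortion correction at the core of such a scheme can be written directly: a camera pixel timestamps a scattering event only after the scattered photon has travelled from the event to the camera, so that extra flight time must be subtracted from the recorded arrival time. The geometry and timestamps below are illustrative, not the paper's calibration.

```python
import math

C_MM_PER_PS = 0.299792458  # speed of light, mm per ps

def true_event_time(t_arrival_ps, event_xyz_mm, camera_xyz_mm):
    """Correct a recorded arrival time by the event-to-camera flight time."""
    d = math.dist(event_xyz_mm, camera_xyz_mm)   # extra path length, mm
    return t_arrival_ps - d / C_MM_PER_PS

# A pulse scattering 300 mm from the camera is timestamped ~1000.7 ps late:
camera = (0.0, 0.0, 0.0)
event = (300.0, 0.0, 0.0)
t_rec = 1500.0                                   # ps, as recorded by the SPAD pixel
t_true = true_event_time(t_rec, event, camera)   # ~499.3 ps
```

The correction varies across the scene with the event position, which is why an uncorrected light-in-flight movie shows the apparent (distorted) pulse timing rather than the true one.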
Topics
Adaptive Optics
Array Waveguide Devices
Atmospheric and Oceanic Optics
Category Pending
Coherence and Statistical Optics
Comments
Correction
Diffraction and Gratings
Dispersion
Editorial
Fiber Devices
Fiber Optic Sensors
Fiber Optics
Fiber Optics and Optical Communications
Group IV Photonics
Holography
Holography, Gratings, and Diffraction
Image Processing
Image Processing and Image Analysis
Imaging
Imaging Systems
Imaging Systems, Microscopy, and Displays
Instrumentation and Measurements
Integrated Optics
Integrated Optics Devices
Integrated Photonics
INTEGRATED PHOTONICS: CHALLENGES AND PERSPECTIVES
Interferometry
Interview
Introduction
Laser Materials
Laser Materials Processing
Lasers and Laser Optics
Light-emitting Diodes
Liquid-Crystal Devices
Materials
Medical Optics and Biotechnology
Metamaterials
Microlasers
Microscopy
Microwave Photonics
Mode-locked Lasers
Nanomaterials
Nanophotonics
Nanophotonics and Photonic Crystals
Nanostructures
Nonlinear Optic
Nonlinear Optics
Optical and Photonic Materials
Optical Communications
Optical Communications and Interconnects
Optical Devices
Optical Manipulation
Optical Materials
OPTICAL MICROCAVITIES
Optical Resonators
Optical Trapping and Manipulation
Optical Vortices
Optics at Surfaces
Optoelectronics
Photodetectors
Photon Statistics
Photonic Crystals
Photonic Crystals and Devices
Photonic Manipulation
Physical Optics
Plasmonics
Plasmonics and Metamaterials
Polarization
Polarization and Ellipsometry
Polarization Rotators
Pulse Propagation and Temporal Solitons
Quantum Electrodynamics
Quantum Optics
QUANTUM PHOTONICS
Quantum Well Devices
Regular Papers
Remote Sensing and Sensors
Research Articles
Resonators
Scattering
Semiconductor UV Photonics
Sensors
Silicon Photonics
Spectroscopy
Surface Optics and Plasmonics
Surface Plasmons
Surface Waves
Terahertz Photonics: Applications and Techniques
Thin Film Devices
Thin Films
Ultrafast Optics