Opto-Electronic Engineering
Contents
2021
Volume: 48 Issue 12
6 Article(s)
Point cloud-image data fusion for road segmentation
Zhang Ying, Huang Yingping, Guo Zhiyang, and Zhang Chong
Road detection is a prerequisite for autonomous driving. In recent years, multi-modal data fusion based on deep learning has become a research hotspot in autonomous driving. In this paper, a convolutional neural network is used to fuse LiDAR point cloud and image data to realize road segmentation in traffic scenes. A variety of fusion schemes at the pixel, feature, and decision levels are proposed; in particular, four cross-fusion schemes are designed for feature-level fusion. The schemes are compared and the best fusion scheme is identified. In the network architecture, a semantic segmentation convolutional neural network with an encoder-decoder structure is used as the base network to cross-fuse point cloud normal features and RGB image features at different levels. The fused data are restored by the decoder, and the detection results are finally obtained using the activation function. Substantial experiments have been conducted on the public KITTI dataset to evaluate the performance of the various fusion schemes. The results show that fusion scheme E proposed in this paper has the best segmentation performance. Compared with other road-detection methods, our method gives better overall performance.
Publication Date: Dec. 01, 2021
Vol. 48 Issue 12 210340 (2021)
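As a rough illustration of the feature-level cross-fusion idea described in the abstract above (not the paper's actual network), the sketch below adds LiDAR-normal features into the RGB branch at each encoder stage. The per-pixel linear "convolutions", channel widths, and addition-based fusion are assumptions made for the sketch.

```python
import numpy as np

def conv_block(x, out_ch, rng):
    """Toy 1x1 'convolution' (per-pixel linear map) standing in for a conv layer."""
    in_ch = x.shape[-1]
    w = rng.standard_normal((in_ch, out_ch)) * 0.1
    return np.maximum(x @ w, 0.0)  # ReLU

def cross_fuse_encode(rgb, normals, channels=(8, 16), seed=0):
    """Feature-level cross fusion: at each encoder stage, the LiDAR-normal
    branch's features are added into the RGB branch before the next stage."""
    rng = np.random.default_rng(seed)
    f_rgb, f_pc = rgb, normals
    for ch in channels:
        f_rgb = conv_block(f_rgb, ch, rng)
        f_pc = conv_block(f_pc, ch, rng)
        f_rgb = f_rgb + f_pc  # cross-fusion: inject point-cloud features
    return f_rgb

rgb = np.random.default_rng(1).random((4, 6, 3))      # H x W x 3 image patch
normals = np.random.default_rng(2).random((4, 6, 3))  # surface normals from LiDAR
fused = cross_fuse_encode(rgb, normals)
print(fused.shape)  # (4, 6, 16)
```

A real implementation would use spatial convolutions and a decoder; the point here is only the shape of the cross-fusion: two parallel encoders whose features are merged at every level rather than once at the input or output.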
Multi-task learning for thermal pedestrian detection
Gou Yutao, Ma Liang, Song Yixuan, Jin Lei, and Lei Tao
Compared with high-quality RGB images, thermal images tend to have a higher false alarm rate in pedestrian detection tasks. The main reason is that thermal images are limited by imaging resolution and spectral characteristics and lack clear texture features, while some samples have poor feature quality, which interferes with network training. We propose a thermal pedestrian detection algorithm based on a multi-task learning framework, which makes the following improvements to the multiscale detection framework. First, a saliency detection task is introduced as an auxiliary branch alongside the target detection network to form a multi-task learning framework, which steers the detector's attention toward salient regions and their edge information in a co-learning manner. Second, the learning weight of noisy samples is suppressed by introducing the saliency strength into the classification loss function. The detection results on the publicly available KAIST dataset confirm that our learning method reduces the log-average miss rate by 4.43% compared to the baseline, RetinaNet.
Publication Date: Dec. 01, 2021
Vol. 48 Issue 12 210358 (2021)
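The second improvement above, suppressing noisy samples via saliency strength in the classification loss, can be sketched as follows. The binary cross-entropy base loss and the `saliency ** alpha` weighting are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def saliency_weighted_bce(pred, target, saliency, alpha=1.0):
    """Binary cross-entropy where each sample's contribution is scaled by its
    saliency strength in [0, 1], down-weighting low-saliency (noisy) samples.
    The weighting scheme is an assumption made for this sketch."""
    pred = np.clip(pred, 1e-7, 1 - 1e-7)
    bce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    weight = saliency ** alpha
    return float(np.mean(weight * bce))

pred = np.array([0.9, 0.2, 0.6])
target = np.array([1.0, 0.0, 1.0])
# Same predictions, but the third sample is flagged as low-saliency (noisy):
strong = saliency_weighted_bce(pred, target, np.array([1.0, 1.0, 1.0]))
weak = saliency_weighted_bce(pred, target, np.array([1.0, 1.0, 0.1]))
assert weak < strong  # the noisy sample contributes less to the loss
```

The effect is that poorly imaged pedestrians do not dominate the gradient during training, which is the mechanism the abstract credits for the reduced miss rate.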
Proposal application, peer review and funding of optics and optoelectronics in 2021: an overview
Sun Ling, Feng Shuai, and Zhu Guangyu
Classified by general program, young scientist fund, fund for less developed regions, key program, excellent young scientist fund, and national science fund for distinguished young scholars, the programs under grant application code F05 are thoroughly introduced through an overview of the proposal applications, peer review processes, and funding of optics and optoelectronics in 2021. To provide insights and perspectives for researchers, the corresponding data were analyzed from different aspects and the major reform measures of this year were introduced. In the end, some development trends in the field of optics and optoelectronics are forecast.
Publication Date: Dec. 01, 2021
Vol. 48 Issue 12 210380 (2021)
Recent research progress in optical super-resolution planar meta-lenses
Zhou Yi, Liang Gaofeng, Wen Zhongquan, Zhang Zhihai, Shang Zhengguo, and Chen Gang
Breaking through the theoretical resolution limit by optical means and realizing a super-resolution optical point-spread function is important for achieving super-resolution focusing and imaging, which have great potential applications in laser processing, super-resolution microscopy, and telescope systems. In recent years, the development of optical metasurfaces has made it possible to independently control the amplitude, phase, and polarization of optical fields on the sub-wavelength scale, which in turn provides a more flexible means for developing a new type of super-resolution planar meta-lens. This article reviews recent research progress on super-resolution planar meta-lenses based on optical metasurfaces and the related testing techniques, and discusses the problems faced in this field as well as future research priorities and directions.
Publication Date: Dec. 01, 2021
Vol. 48 Issue 12 210399 (2021)
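For context, the "theoretical resolution limit" referred to in the abstract above is the diffraction limit; for a conventional lens the Rayleigh criterion bounds the resolvable feature size in terms of the wavelength $\lambda$ and the numerical aperture $\mathrm{NA}$:

```latex
\Delta x \approx \frac{0.61\,\lambda}{\mathrm{NA}}
```

Super-resolution planar meta-lenses aim to produce a focal spot (point-spread function) narrower than this bound, typically by tailoring the sub-wavelength phase and amplitude profile of the metasurface.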
Light field depth estimation using weighted side window angular coherence
Ma Shuai, Wang Ning, Zhu Licheng, Wang Shuai, Yang Ping, and Xu Bing
Light field imaging records not only the intensity of rays but also their direction, and thus has the ability to estimate scene depth. However, the accuracy of depth estimation is easily affected by occlusion in the light field. This paper proposes a weighted side window angular coherence method to deal with different types of occlusion. First, the angular patch is divided into four side window subsets, and the coherence of the pixels in these subsets is measured to construct four cost volumes that address different types of occlusion. Second, a weighted fusion strategy is proposed to fuse the four cost volumes, further enhancing the robustness of the algorithm while retaining its anti-occlusion capability. Finally, the fused cost volume is optimized by a guided filter to further improve the accuracy of depth estimation. Experimental results show that the proposed method is superior to existing methods on quantitative indices and achieves high-precision measurement in absolute depth experiments.
Publication Date: Dec. 01, 2021
Vol. 48 Issue 12 210405 (2021)
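A minimal sketch of the side-window idea from the abstract above, with an assumed variance-based coherence measure and inverse-variance fusion weights (the paper's actual cost construction may differ): when an occluder corrupts one side of the angular patch, at least one side window stays coherent, and the fusion lets that window dominate.

```python
import numpy as np

def side_window_costs(angular_patch):
    """Split a (2k+1)x(2k+1) angular patch into four side-window subsets
    (left, right, top, bottom halves including the centre row/column) and
    measure per-subset coherence as the variance of pixel intensities;
    low variance means the depth hypothesis is consistent in that window."""
    n = angular_patch.shape[0]
    c = n // 2
    subsets = {
        "left":   angular_patch[:, :c + 1],
        "right":  angular_patch[:, c:],
        "top":    angular_patch[:c + 1, :],
        "bottom": angular_patch[c:, :],
    }
    return {k: float(np.var(v)) for k, v in subsets.items()}

def fused_cost(costs):
    """Weighted fusion (assumed scheme): weight each side-window cost by the
    inverse of its variance so the most coherent window dominates."""
    w = {k: 1.0 / (v + 1e-7) for k, v in costs.items()}
    z = sum(w.values())
    return sum(w[k] * costs[k] for k in costs) / z

# Occlusion entering from the right: right columns of the angular patch corrupted.
patch = np.full((5, 5), 0.5)
patch[:, 3:] = np.linspace(0.0, 1.0, 10).reshape(5, 2)
costs = side_window_costs(patch)
print(costs["left"], costs["right"])  # left window stays coherent (zero variance)
```

In the full method these per-window costs are built into four cost volumes over depth hypotheses and the fused volume is then refined with a guided filter; this sketch shows only the per-patch cost step.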
[in Chinese]
Publication Date: Dec. 01, 2021
Vol. 48 Issue 12 1 (2021)