Journals > Opto-Electronic Engineering
Contents: 2023, Volume 50, Issue 10
8 Article(s)
Research Articles
Far-field computational optical imaging techniques based on synthetic aperture: a review
Sheng Li, Bowen Wang, Haitao Guan, Kunyao Liang, Yan Hu, Yan Zou, Xu Zhang, Qian Chen, and Chao Zuo
Conventional optical imaging is essentially a process of recording and reproducing the intensity signal of a scene in the spatial dimension with direct uniform sampling. The resolution and information content of imaging are therefore inevitably constrained by physical limitations such as the optical diffraction limit and the spatial bandwidth product of the imaging system. How to break these limitations and obtain higher resolution and a broader field of view has been an enduring topic in this field. Computational optical imaging, by combining front-end optical modulation with back-end signal processing, offers a new approach to surpassing the diffraction limit of imaging systems and realizing super-resolution imaging. In this paper, we introduce research efforts on improving imaging resolution and expanding the spatial bandwidth product through computational optical synthetic aperture imaging, including the basic theory and technologies of coherent active synthetic aperture imaging and incoherent passive synthetic aperture imaging. Furthermore, this paper reveals the pressing demand for "incoherent, passive, and beyond-diffraction-limit" imaging, identifies the bottlenecks, and provides an outlook on future research directions and potential technical approaches to address these challenges.
Publication Date: Oct. 25, 2023
Vol. 50, Issue 10, 230090-1 (2023)
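As a rough illustration of the resolution gain that aperture synthesis offers, the sketch below computes the Rayleigh diffraction limit for a single physical aperture and for a synthesized one; the wavelength and aperture diameters are illustrative assumptions, not values from the review.

```python
import math

def rayleigh_resolution(wavelength_m, aperture_m):
    """Angular resolution limit (radians) of a circular aperture."""
    return 1.22 * wavelength_m / aperture_m

wavelength = 532e-9   # assumed illumination wavelength (m)
d_single = 10e-3      # assumed physical aperture diameter (m)
d_synth = 50e-3       # assumed synthesized aperture diameter (m)

theta_single = rayleigh_resolution(wavelength, d_single)
theta_synth = rayleigh_resolution(wavelength, d_synth)
gain = theta_single / theta_synth   # resolution improves by d_synth / d_single
```

Synthesizing a 5x larger aperture yields a 5x finer angular resolution, which is the basic motivation behind the synthetic-aperture techniques the review surveys.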
Study on retinal OCT segmentation with dual-encoder
Minghui Chen, Teng Wang, Yuan Yuan, and Shuting Ke
OCT retinal images contain noise and speckle, and a single extraction of spatial features often misses important information, so the target region cannot be accurately segmented. OCT images inherently have spectral frequency-domain characteristics. Targeting these characteristics, this paper proposes a new dual-encoder model based on U-Net and fast Fourier convolution to improve the segmentation of the retinal layers and fluid in OCT images. The proposed frequency encoder extracts frequency-domain information and converts it into spatial information through fast Fourier convolution, complementing the feature information that a single spatial encoder would miss. Comparisons with other classical models and ablation experiments show that adding the frequency-domain encoder effectively improves segmentation of the retinal layers and fluid. Both the average Dice coefficient and mIoU increase by 2% compared with U-Net, and by 8% and 4%, respectively, compared with ReLayNet. The improvement in fluid segmentation is particularly pronounced, with the Dice coefficient increased by 10% over the U-Net model.
Publication Date: Oct. 25, 2023
Vol. 50, Issue 10, 230146-1 (2023)
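The fast Fourier convolution at the heart of such a frequency encoder can be sketched as a global branch that transforms the feature map to the frequency domain, mixes channels per frequency, and transforms back; the shapes, weights, and identity-weight check below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fourier_unit(x, w_real, w_imag):
    """Global branch of a fast Fourier convolution (sketch):
    FFT -> per-frequency linear mixing of channels -> inverse FFT.
    x: (C, H, W) feature map; w_real/w_imag: (C, C) channel-mixing weights."""
    X = np.fft.rfft2(x, axes=(-2, -1))            # (C, H, W//2+1), complex
    W = w_real + 1j * w_imag                      # complex channel-mixing matrix
    Y = np.einsum('oc,chw->ohw', W, X)            # mix channels at every frequency
    return np.fft.irfft2(Y, s=x.shape[-2:], axes=(-2, -1))

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16, 16))
y = fourier_unit(x, np.eye(4), np.zeros((4, 4)))  # identity weights reproduce x
```

Because each frequency bin mixes information from the whole image, this branch gives every output pixel a global receptive field, which is what lets the frequency encoder capture information a purely spatial encoder misses.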
Adaptive feature fusion cascade Transformer retinal vessel segmentation algorithm
Liming Liang, Baohe Lu, Pengwei Long, and Yuan Yang
An adaptive feature fusion cascaded Transformer retinal vessel segmentation algorithm is proposed to address interference from pathological artifacts, incomplete segmentation of small vessels, and low contrast between the vascular foreground and non-vascular background. First, images are preprocessed with contrast-limited histogram equalization and Gamma correction to enhance vascular texture features. Second, an adaptive enhancing attention module is designed in the encoder to reduce computational redundancy while suppressing noise in the retinal background. Furthermore, a cascaded ensemble Transformer module is introduced at the bottom of the encoder-decoder structure to establish dependencies between long- and short-distance vascular features. Finally, a gated feature fusion module in the decoder achieves semantic fusion between encoding and decoding, improving the smoothness of retinal vessel segmentation. Validation on the public datasets DRIVE, CHASE_DB1, and STARE yields accuracies of 97.09%, 97.60%, and 97.57%, sensitivities of 80.38%, 81.05%, and 80.32%, and specificities of 98.69%, 98.71%, and 98.99%, respectively. The results indicate that the algorithm's overall performance surpasses most existing state-of-the-art methods and holds potential value for the diagnosis of clinical ophthalmic diseases.
Publication Date: Oct. 25, 2023
Vol. 50, Issue 10, 230161-1 (2023)
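The Gamma-correction preprocessing step mentioned above can be sketched in a few lines; the gamma value and toy image below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def gamma_correct(img, gamma=0.8):
    """Gamma correction on a [0, 1] grayscale fundus image.
    gamma < 1 brightens dark regions, enhancing vascular texture."""
    return np.clip(img, 0.0, 1.0) ** gamma

img = np.array([[0.04, 0.25],
                [0.50, 1.00]])
out = gamma_correct(img, gamma=0.5)   # square root: dark pixels lifted most
```

Note how 0.04 maps to 0.2 (a 5x boost) while 1.0 is unchanged, which is why gamma correction helps low-contrast vessel regions stand out before segmentation.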
Multi-resolution feature fusion for point cloud classification and segmentation network
Zhiyong Tao, Heng Li, Miaosen Dou, and Sen Lin
To address the difficulty existing networks have in learning local geometric information of point clouds effectively, a graph convolutional network that fuses multi-resolution point cloud features is proposed. First, the local graph structure of the point cloud is constructed with the k-nearest neighbor algorithm to better represent its local geometry. Second, a parallel channel branch based on the farthest point sampling algorithm obtains point clouds at different resolutions by downsampling and then grouping them. To handle the sparsity of point clouds, a geometric mapping module normalizes the grouped points. Finally, a feature fusion module aggregates graph features and multi-resolution features to obtain global features more effectively. Experiments on the ModelNet40, ScanObjectNN, and ShapeNet Part datasets show that the proposed network achieves state-of-the-art classification and segmentation performance.
Publication Date: Oct. 25, 2023
Vol. 50, Issue 10, 230166-1 (2023)
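The farthest point sampling used to obtain the multi-resolution point clouds is a standard greedy algorithm; a minimal sketch follows, with a toy point set invented for the example.

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Greedy FPS: repeatedly pick the point farthest from the chosen set.
    points: (N, 3) array; returns indices of the k sampled points."""
    chosen = [0]                                   # start from the first point
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dist))                 # farthest from all chosen points
        chosen.append(idx)
        dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
    return np.array(chosen)

pts = np.array([[0.0, 0, 0], [0.1, 0, 0],
                [5.0, 0, 0], [5.1, 0, 0]])
idx = farthest_point_sampling(pts, 2)              # one point from each far cluster
```

FPS covers the shape more evenly than random sampling, which is why it is the usual choice for building the coarser resolutions that the network then groups and fuses.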
Overlapping group sparsity on hyper-Laplacian prior of sparse angle CT reconstruction
Ziwen Qi, Huihua Kong, Jiaxin Li, and Jinxiao Pan
With sparse-angle projection data, artifacts and noise readily appear in computed tomography image reconstruction, making it difficult to meet the requirements of industrial and medical diagnosis. This paper proposes a sparse-angle CT iterative reconstruction algorithm based on overlapping group sparsity and a hyper-Laplacian prior. Overlapping group sparsity reflects the sparsity of the image gradient while accounting for the overlapping relationships between adjacent gradient elements. The hyper-Laplacian prior accurately approximates the heavy-tailed distribution of the image gradient and improves the overall quality of the reconstructed image. The proposed model is solved with the alternating direction method of multipliers, the principal component minimization method, and gradient descent. Experimental results show that, under sparse-angle CT reconstruction conditions, the proposed algorithm better preserves structural details and suppresses the noise and staircase artifacts generated during image reconstruction.
Publication Date: Oct. 25, 2023
Vol. 50, Issue 10, 230167-1 (2023)
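For a 1-D gradient signal, the overlapping group sparsity term can be sketched as the sum of l2 norms over all overlapping windows; the group size and sample signal below are illustrative assumptions, and the actual regularizer operates on 2-D image gradients.

```python
import numpy as np

def overlapping_group_sparsity(g, group_size=3):
    """Overlapping group sparsity of a 1-D gradient signal g:
    the sum, over every overlapping window, of the window's l2 norm."""
    total = 0.0
    for i in range(len(g) - group_size + 1):
        total += np.linalg.norm(g[i:i + group_size])
    return total

g = np.array([0.0, 3.0, 4.0, 0.0])
val = overlapping_group_sparsity(g)   # norms of [0,3,4] and [3,4,0]: 5 + 5
```

Because every gradient element appears in several windows, an isolated spike is penalized in each window it touches, which discourages the scattered noise and staircase artifacts that plain total variation tends to leave behind.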
Adaptive tip-tilt disturbance suppression technique for characteristic disturbance frequency identification
Hongmei Wu, Chen Wang, Nian Feng, Li Wen, and Tao Tang
To suppress time-varying disturbances in the tip-tilt correction system, an adaptive disturbance suppression method based on characteristic disturbance frequency identification is proposed. The least-mean-square-error criterion is used to identify the characteristic disturbance frequencies in the closed-loop error signal, and the identified filter parameters and the controller adjustment are designed in parallel. A frequency-splitting method is also proposed that combines low-frequency and high-frequency disturbance suppression, further improving the speed of characteristic frequency identification, simplifying the design process, and realizing adaptive disturbance suppression within the closed-loop bandwidth. The method is verified in closed loop on a tip-tilt correction device. Experimental results show that it quickly identifies characteristic disturbances and adaptively adjusts the controller, improving the closed-loop performance of the system under single-frequency and multi-frequency time-varying disturbances.
Publication Date: Oct. 25, 2023
Vol. 50, Issue 10, 230177-1 (2023)
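One common way to realize frequency identification under a mean-square-error criterion is a least-squares sinusoid fit scanned over candidate frequencies; the sketch below is such an illustration (the sampling rate, candidate grid, and simulated 37 Hz disturbance are invented for the example and are not the paper's method).

```python
import numpy as np

def identify_disturbance_freq(err, fs, candidates):
    """Identify the dominant disturbance frequency in a closed-loop error
    signal by least-squares sinusoid fitting over candidate frequencies."""
    t = np.arange(len(err)) / fs
    best_f, best_res = None, np.inf
    for f in candidates:
        A = np.column_stack([np.sin(2 * np.pi * f * t),
                             np.cos(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(A, err, rcond=None)
        res = np.sum((err - A @ coef) ** 2)   # squared fitting error
        if res < best_res:
            best_f, best_res = f, res
    return best_f

fs = 1000.0
t = np.arange(1000) / fs
err = np.sin(2 * np.pi * 37.0 * t)            # simulated 37 Hz disturbance
f_hat = identify_disturbance_freq(err, fs, np.arange(1.0, 100.0))
```

Once the characteristic frequency is known, a notch or resonant controller can be tuned to it, which is the kind of parallel filter-plus-controller adjustment the paper describes.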
Single image rain removal based on cross scale attention fusion
Yuchao Ye, and Ying Chen
Single-image rain removal is a crucial task in computer vision, aiming to eliminate rain streaks from rainy images and generate high-quality rain-free images. Current deep learning-based multi-scale rain removal algorithms struggle to capture details at different scales and neglect the complementarity of information among scales, which can lead to image distortion and incomplete rain streak removal. To address these issues, this paper proposes an image rain removal network based on cross-scale attention fusion, which removes dense rain streaks while preserving original image details to improve the visual quality of the derained image. The network consists of three sub-networks, each dedicated to obtaining rain pattern information at a different scale, and each composed of densely connected cross-scale feature fusion modules. The designed module takes cross-scale attention fusion as its core, establishing inter-scale relationships to achieve information complementarity and enabling the network to consider both details and global information. Experiments on the synthetic datasets Rain200H and Rain200L demonstrate the model's effectiveness: the peak signal-to-noise ratio (PSNR) of the derained images reaches 29.91/39.23 dB and the structural similarity index (SSIM) is 0.92/0.99, outperforming mainstream methods while preserving image details and ensuring natural rain removal.
Publication Date: Oct. 25, 2023
Vol. 50, Issue 10, 230191-1 (2023)
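A minimal sketch of attention-weighted fusion across two scales follows; the actual cross-scale attention module is more elaborate, and the shapes and softmax gating here are assumptions made for illustration.

```python
import numpy as np

def cross_scale_fusion(feat_a, feat_b):
    """Fuse two same-shape feature maps from different scales using
    per-channel attention weights (a softmax over the two scales)."""
    # channel descriptors via global average pooling
    s = np.stack([feat_a.mean(axis=(-2, -1)), feat_b.mean(axis=(-2, -1))])
    w = np.exp(s) / np.exp(s).sum(axis=0, keepdims=True)  # softmax over scales
    return w[0][:, None, None] * feat_a + w[1][:, None, None] * feat_b

a = np.ones((2, 4, 4))    # stand-in for a fine-scale feature map
b = np.zeros((2, 4, 4))   # stand-in for a coarse-scale feature map
fused = cross_scale_fusion(a, b)
```

The softmax lets each channel decide, per input, how much to trust the fine versus coarse scale, which is the information-complementarity idea the abstract describes.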
An improved lightweight fire detection algorithm based on cascade sparse query
Xiaoxue Zhang, Yu Wang, Siyuan Wu, and Bangyong Sun
To address the challenges of complex models, slow detection speed, and high false detection rates in fire detection, a lightweight fire detection algorithm based on a cascade sparse query mechanism, called LFNet, is proposed. First, a lightweight feature extraction module, ECDNet, is built by embedding the lightweight ECA (efficient channel attention) module in the YOLOv5s backbone to extract finer-grained features at different feature levels, addressing the multi-scale nature of flame and smoke in fire detection. Second, the deep feature extraction module FPN+PAN is adopted to improve multi-scale fusion of feature maps at different levels. Finally, a lightweight cascade sparse query module is applied to improve the detection accuracy of small flames and thin smoke in early fires. Experimental results show that the proposed method achieves the best overall performance on objective metrics such as mAP and Precision on the SF-dataset, D-fire, and FIRESENSE datasets, with fewer parameters and higher detection accuracy, meeting fire detection requirements in challenging scenes.
Publication Date: Oct. 25, 2023
Vol. 50, Issue 10, 230216-1 (2023)
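The ECA module follows a well-known pattern: global average pooling, a 1-D convolution across channels, and sigmoid gating; the kernel and input below are illustrative assumptions rather than the paper's trained weights.

```python
import numpy as np

def eca(x, kernel):
    """Efficient channel attention (ECA) sketch: global average pooling,
    a 1-D convolution across channels, sigmoid gating, channel-wise rescale.
    x: (C, H, W) feature map; kernel: 1-D conv weights of odd length."""
    y = x.mean(axis=(1, 2))                   # (C,) channel descriptor
    y = np.convolve(y, kernel, mode='same')   # local cross-channel interaction
    gate = 1.0 / (1.0 + np.exp(-y))           # sigmoid
    return x * gate[:, None, None]

x = np.ones((3, 2, 2))
out = eca(x, np.array([0.0, 1.0, 0.0]))       # identity kernel: uniform gating
```

Because the attention is a small 1-D convolution rather than fully connected layers, ECA adds very few parameters, which fits the lightweight design goal of LFNet.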