Over the past few decades, single-photon detection technology has developed rapidly. Single-photon avalanche diode (SPAD) detectors operating in Geiger mode offer high sensitivity, fast response, and true single-photon sensitivity, and have therefore been widely used in optical sensing fields such as quantum communication, lidar, and fluorescence lifetime imaging. SPAD arrays compatible with CMOS technology have attracted particular attention for their high integration and miniaturization. In lidar applications, SPADs receive the returning photons; however, the optical signal is susceptible to environmental factors such as dust and weather, the received light intensity may fall to the single-photon level, and high dark count noise degrades device performance. Considering also the hazard that short-wavelength lasers pose to human eyes, the design of SPAD devices with low dark counts and high photon detection probability has become an active research direction.
The SPAD (Fig. 1) employs a P-I-N diode structure, with the avalanche region located between the P-type drift region and the high-voltage N+ buried layer. The P epitaxial layer serves as the intrinsic region, and P-well guard rings together with virtual guard rings surround the P-doped region to mitigate the impact of shallow trench isolation (STI) on the dark count rate (DCR). SPAD devices with guard ring widths (GRW) of 3, 4, and 5 μm are simulated in a 0.18 μm BCD technology to study the impact of GRW on device performance [Fig. 2(a)]. The simulations show that only the device with a GRW of 5 μm operates normally without a large edge electric field, which also reduces the dark count. Figure 2(c) illustrates the 2D electric field distributions when the STI extends into the P-well; adjusting the STI alone can hardly improve the electric field strength.
The I-V characteristic of the SPAD is first measured; it exhibits avalanche breakdown at around 56 V [Fig. 3(c)], consistent with the TCAD simulation results (Fig. 2). DCR measurements [Fig. 4(a)] show that the DCR varies only weakly with the excess voltage Vex and depends more strongly on temperature; the device achieves an excellent DCR of 0.56 s-1·μm-2 at 23 ℃.
We propose a P-I-N SPAD with additional P-type injection enhancement, based on the SMIC 180 nm BCD process. Test results show that at Vex = 5 V the peak photon detection probability (PDP) of the SPAD reaches 41.5%, and the near-infrared PDP at 905 nm exceeds 6%. At room temperature, the device achieves a median DCR of 0.56 s-1·μm-2 and a very low afterpulsing probability (<1.2%) under passive quenching with a dead time of 14 μs.
As high-resolution image sensors and computer technology develop, significant applications for holography have emerged in three-dimensional imaging and display, optical information processing, and intelligent optical computing. However, numerous challenges remain in both digital holography and computational holography. During the recording process of digital holography, speckle noise, a kind of multiplicative noise, becomes a prominent problem; its removal is more challenging than that of additive noise, and it drastically compromises the quality of the reconstructed image. Consequently, noise reduction in holograms and reconstructed images is particularly urgent. Current noise reduction methods fall into two categories: optical methods and digital methods. One limitation of optical methods is the cumbersome recording of multiple holograms with speckle diversity through repeated mechanical motion, which can lower system stability. In digital methods, more complex algorithms commonly achieve better noise reduction, but the increased processing time may impede the real-time capability of the system. An integrated approach combining optical and digital methods can therefore maximize noise reduction while preserving speed. To this end, we combine Fourier transform spectroscopy with digital holography to obtain speckle-diversity holograms. Then, exploiting the noise differences among decrypted images at several wavelengths, we propose a weighted summation average (WSA) noise reduction method, combined with the block-matching 3D (BM3D) algorithm. As a result, an optimized noise reduction effect is achieved.
First, we calculate the normalized monochromatic peak signal-to-noise ratio (M-PSNR) between each reconstructed image and the corresponding original image, take it as a measure of the noise intensity level of the reconstructed image, and use it as the initial weighting factor. Subsequently, within a given spectral range, the wavelength centers of the three RGB components are selected according to the CIE international standard. A uniform interval radius is chosen for the three RGB components, and a binary weighting factor is applied to the selected wavelength intervals, achieving the waveband optimization. Finally, the BM3D algorithm is combined with the WSA algorithm to further reduce the noise, and the order in which the two are applied is also analyzed to achieve the optimum denoising effect.
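The WSA weighting described above can be sketched as follows. This is a minimal illustration, assuming an 8-bit M-PSNR definition and treating `centers`, `radius`, and the binary interval weight as the method's tunable parameters; the function names are ours, not from the paper.

```python
import numpy as np

def m_psnr(recon, original):
    """Monochromatic PSNR between a reconstructed and an original image (8-bit range)."""
    mse = np.mean((recon.astype(float) - original.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

def wsa(recons, originals, centers, radius, wavelengths):
    """Weighted summation average over selected RGB wavebands.

    recons/originals: lists of single-wavelength images
    centers: (R, G, B) center wavelengths; radius: interval radius in nm
    A binary weight keeps only wavelengths inside each selected interval,
    and the normalized M-PSNR acts as the per-wavelength weighting factor.
    """
    psnrs = np.array([m_psnr(r, o) for r, o in zip(recons, originals)])
    weights = psnrs / psnrs.max()              # normalized M-PSNR weights
    stack = np.stack(recons, axis=0)           # N x H x W
    wl = np.asarray(wavelengths, dtype=float)
    channels = []
    for c in centers:
        mask = np.abs(wl - c) <= radius        # binary interval weight
        w = weights * mask
        channels.append(np.tensordot(w / w.sum(), stack, axes=1))
    return np.stack(channels, axis=-1)         # H x W x 3 color result
```

The binary mask realizes the waveband selection, while the normalized M-PSNR realizes the noise-aware weighting inside each selected band.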
To verify the feasibility of the algorithm, we take the decryption process of the proposed optical cryptosystem as the testbed (Fig. 1). Under specific conditions, 89 single-wavelength reconstructed images, spanning a wavelength range of 449-801 nm at a 4 nm interval, are processed to analyze the noise reduction effect. First, the normalized M-PSNR between the reconstructed images and the original images is calculated and used as the weighting factor (Fig. 4). The suboptimal denoising effect of this direct method is analyzed by examining the deviation of the average intensity ratio of the three RGB components from the ground truth; it is therefore necessary to select a waveband closer to the true average intensity value. Second, according to the CIE international standard, initial wavelength centers of the three RGB components (633 nm, 553 nm, and 453 nm) are selected. When an interval radius of 25 is chosen, the selected intervals for the RGB components approximately cover the entire waveband (Fig. 7), and optimization identifies the optimal wavelength centers of the three RGB components (621 nm, 549 nm, and 449 nm). Third, we perform a comparative analysis of the symmetry of the intervals, the uniformity of the interval radius across the three RGB components, and the weighting method, aiming to maximize the color PSNR (C-PSNR) between the noise reduction result and the original color image (Fig. 9). With these three parameters determined, namely a symmetric interval radius of 26 for the three RGB components and a binary weighting factor, the C-PSNR reaches 78.59 dB. Fourth, the noise reduction result is compared with that of the classical color BM3D algorithm, which reaches 79.15 dB.
In comparison, the WSA algorithm is faster (0.75 s vs 4.14 s), while the C-PSNR obtained by the CBM3D algorithm is somewhat larger (79.15 dB vs 78.59 dB). Considering both the noise reduction effect and the processing time, we combine the two algorithms, analyze the order in which they are applied, and choose the sequence with the best denoising effect. Specifically, the images denoised by the WSA algorithm should be further denoised by the CBM3D algorithm to obtain the final color-denoised image. Following this sequence, the C-PSNR between the final denoised image and the original color image reaches 91.11 dB.
Based on the combination of Fourier transform spectroscopy and digital holography, we propose a noise reduction method that makes full use of the noise diversity across all reconstructed images at varying wavelengths with hyperspectral resolution. Our WSA algorithm analyzes the differences in noise intensity level across wavelengths to determine the centers and interval radius of the three RGB components, thereby optimizing the waveband and reducing noise; the BM3D algorithm is then applied to reduce the noise further. Numerical simulation and experimental results indicate that a maximum C-PSNR of 91.11 dB is attainable by appropriately combining the WSA and BM3D algorithms. Our composite algorithm effectively reduces speckle noise through the optimal selection of the optical waveband and the weighting factors. This method provides new insights for noise reduction in color digital holography.
Interferenceless coded aperture correlation holography (I-COACH) employs incoherent illumination, distinct from coherent holography techniques. In I-COACH, phase information is redundant, and only the intensity pattern of the three-dimensional observation scene is required to recover object information. Owing to its interference-free nature, it offers a simpler optical path and more convenient processing, transmission, and storage of data, which has led to an even more simplified, lensless I-COACH (LI-COACH) system. Using a coded phase mask (CPM), the object intensity is encoded into a specific speckle pattern. The object's speckle hologram and the corresponding point spread hologram (PSH) are used for object reconstruction through cross-correlation, but this reconstruction method suffers from considerable background noise. An I-COACH system based on binary coded phase masks significantly reduces reconstruction noise while expanding the imaging spectral range: the peak-to-background ratio of the PSH is enhanced using a direct binary search method. However, this method increases the bias value of the PSH during the iteration process, leading to unstable iteration results, and its dictionary-order scanning makes the iteration slow. This paper proposes a direct binary search method based on an alternating strategy with random trajectories. Numerical simulations demonstrate that the method effectively suppresses the increase in the bias value during iteration and enhances the iteration speed. Experimental results confirm that, with the same number of iterations, the method significantly reduces the correlation reconstruction noise. This advances the application of the I-COACH system to imaging beyond the visible range, with broad prospects in fields such as astronomy and the military.
Our design is based on the direct binary search algorithm (DBSA) and employs a variant, direct binary search with random trajectory (DBSRT), to alternately iterate two binary coded phase masks (bCPMs). Initially, two independent CPMs are generated using the Gerchberg-Saxton (G-S) algorithm and then binarized to serve as the initial values for the DBSA. The peak-to-background ratio (PBR) is used as an indirect metric of the reconstruction quality of the PSH, and the DBSRT optimizes the PBR of the PSH to reduce reconstruction noise. To keep the bias value constant during iteration, the improved DBSA selects bCPM1 when the iteration number is odd and bCPM2 when it is even. To further increase the iteration speed, a random unscanned pixel on the bCPM is selected and its phase is inverted; the PSH of the altered bCPM and its PBR are then calculated. If the PBR increases, the change is retained; otherwise, the pixel is reverted to its original state. This process continues until convergence is reached or an iteration threshold is met. Finally, the iterated bCPMs are used as the random phase masks in the I-COACH system.
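A minimal sketch of the alternating random-trajectory search follows, under a toy forward model in which the PSH is taken as the far-field (FFT) intensity of each bCPM and the two PSHs are simply summed; the real I-COACH forward model and the exact PBR definition may differ.

```python
import numpy as np

def psh(bcpm):
    """Toy forward model: PSH as the far-field intensity of a binary
    coded phase mask (phase 0 or pi per pixel)."""
    field = np.exp(1j * np.pi * bcpm)
    return np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2

def pbr(h):
    """Peak-to-background ratio: peak over mean background intensity."""
    return h.max() / (h.sum() - h.max() + 1e-12) * (h.size - 1)

def dbsrt(bcpm1, bcpm2, n_iter=200, rng=None):
    """Direct binary search with random trajectory, alternating between
    the two bCPMs so the bias value stays constant during iteration."""
    if rng is None:
        rng = np.random.default_rng(0)
    masks = [bcpm1.copy(), bcpm2.copy()]
    # one random, non-repeating scan order per mask (the random trajectory)
    orders = [rng.permutation(m.size) for m in masks]
    best = pbr(psh(masks[0]) + psh(masks[1]))
    for it in range(n_iter):
        k = it % 2                       # odd/even iteration -> bCPM1/bCPM2
        m = masks[k]
        idx = np.unravel_index(orders[k][(it // 2) % m.size], m.shape)
        m[idx] ^= 1                      # invert the binary phase pixel
        cand = pbr(psh(masks[0]) + psh(masks[1]))
        if cand > best:
            best = cand                  # keep the improving flip
        else:
            m[idx] ^= 1                  # otherwise revert the pixel
    return masks, best
```

The alternation (`it % 2`) is what constrains the bias value, and the pre-shuffled `orders` replace the slow dictionary-order scan of the original DBSA.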
In the experiment, Element 2 of Group 1 from the USAF1951 resolution test chart is chosen as the target. Significant changes are observed in the PSH and object intensity responses generated by the initial bCPMs and by those optimized using DBSA (Fig. 8). The experimental data are synthesized into the PSH and object holograms, followed by reconstruction using a phase-only filter (POF). This method demonstrates a substantial improvement in imaging quality compared with the original method (Fig. 8). The images reconstructed from the initial bCPMs and the original DBSA-optimized bCPMs have PSNRs of 15.6 dB and 17.5 dB, respectively, while the image from the bCPMs optimized by our method has a PSNR of 19.5 dB, indicating a better reconstruction. In contrast, reconstructing the target from bCPMs in which DBSA optimized only bCPM1 yields a PSNR of 9.6 dB. This shows that sequential DBSA must iterate both bCPM1 and bCPM2 to effectively reduce reconstruction noise, which we attribute to the variation in the bias value of the PSH when only one bCPM is iterated; the bias value contributes to the noise in the reconstructed light field of the I-COACH system.
We propose an alternating strategy for iteratively optimizing bCPMs that offers improved stability and speed. Theoretical analysis and numerical simulations demonstrate that changes in the bias value during DBSA iterations are one of the crucial factors affecting the optimization results. Iterating with an alternating strategy effectively constrains the bias value of the PSH, and random-trajectory scanning significantly enhances the iteration speed of DBSA. Experimental results show that this approach further enhances the system's reconstruction performance for the same number of iterations. The alternating strategy in DBSRT effectively reduces the number of iterations required by DBSA, thereby improving the reconstruction quality of the system. This provides a straightforward and effective route to fast, high-quality reconstruction in bCPM-based I-COACH systems.
Two-phase flow is widely used in industrial production, and pipe blocking often occurs in pipeline transportation, affecting the efficiency and stability of production. Detecting the process parameters of two-phase flow is therefore very important. To measure two-phase flow parameters without disturbing the distribution in the measurement area, process tomography (PT) has been developed. As a kind of PT technology, electrical capacitance tomography (ECT) has the advantages of fast imaging speed, simple structure, non-invasiveness, and high safety, and it has gradually become a research hotspot in visualization detection technology. Image reconstruction is at the heart of ECT. Because of the serious nonlinearity, underdetermination, and soft-field characteristics of ECT systems, ECT image reconstruction is often poorly matched to the corresponding application scenario. An ECT image reconstruction method based on fuzzy pattern recognition and sensitive field optimization offers two advantages in imaging effect and performance. 1) The sensitive field distribution matrix corresponding to the flow pattern is selected by fuzzy-pattern flow pattern identification, which greatly improves the sensitivity of different flow patterns to changes in the sensitive field. 2) The sensitive field matrix corresponding to the flow pattern is further expanded by the sensitive field expansion method under feature extraction, which better mitigates the effect of the soft-field characteristics. In addition, existing ECT image reconstruction algorithms are mainly optimized to improve the solution accuracy of the inversion problem and pay little attention to optimizing the reconstruction of the sensitive field matrix and the dielectric constant distribution vector in the ECT image reconstruction system.
Therefore, the method is feasible and broadly applicable, and it provides an approach and an idea for optimizing the effect of image reconstruction algorithms.
To address the impact of the soft-field characteristics of ECT on image reconstruction quality, we propose an ECT image reconstruction method based on fuzzy pattern recognition and sensitive field optimization. The approach optimizes the reconstruction of both the sensitive field matrix and the dielectric constant distribution vector in the ECT image reconstruction system. Firstly, the sensitivity matrix corresponding to the flow pattern is selected by fuzzy-pattern flow pattern identification; in this way, the sensitive field is optimized. Secondly, feature information is extracted from the initial image reconstruction signal for data fusion, and the optimized sensitive field is expanded into a new sensitive field distribution matrix by zero-padding and stochastic reorganization. Finally, synthesized observation equations are constructed for image reconstruction to accurately recover the permittivity distribution vector of the ECT system. To verify its performance, the method is compared with four image reconstruction optimization algorithms (Landweber, Tikhonov, Kalman, and CGLS) in terms of imaging effect and imaging metrics.
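For reference, the Landweber baseline named above solves the linearized, normalized ECT equation λ = Sg by gradient iteration. A minimal sketch follows; the step size and the [0, 1] projection are common choices for ECT, not specifics of this paper.

```python
import numpy as np

def landweber(S, lam, n_iter=500, alpha=None):
    """Landweber iteration for the linearized ECT problem lam = S @ g,
    where S is the (normalized) sensitivity matrix, lam the normalized
    capacitance vector, and g the permittivity distribution vector."""
    if alpha is None:
        # a step size below 2 / ||S||^2 guarantees convergence
        alpha = 1.0 / np.linalg.norm(S, 2) ** 2
    g = np.zeros(S.shape[1])
    for _ in range(n_iter):
        g = g + alpha * S.T @ (lam - S @ g)   # gradient step on ||lam - Sg||^2
        g = np.clip(g, 0.0, 1.0)              # project onto the physical range
    return g
```

The proposed method replaces S and lam here with the fuzzy-pattern-selected, feature-expanded sensitive field matrix and the fused measurement vector before solving.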
We model the 3D ECT system in COMSOL (Fig. 6) to obtain the measured capacitance data used for the simulation experiments and the sensitive field distribution matrices corresponding to different flow patterns (Fig. 2). The results of fuzzy-pattern-based ECT flow pattern identification (Table 1) show average recognition accuracies of 100%, 99.75%, and 98.75% with no noise, 60 dB Gaussian white noise, and 40 dB Gaussian white noise, respectively, indicating high recognition accuracy and robustness against noise. Six common flow patterns are imaged under 40 dB Gaussian white noise to compare our method with the four optimized algorithms in terms of imaging effect and imaging performance metrics (Fig. 8). As seen from the relative errors (Table 3, Fig. 9) and correlation coefficients (Table 4, Fig. 10) of the reconstructed images, our method yields clear images with distinct edges and no serious blurring. It has the lowest relative error and the highest correlation coefficient among the five algorithms, showing that it substantially improves the image reconstruction accuracy and comes closest to the dielectric constant distribution of the original flow pattern.
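The relative error and correlation coefficient reported in Tables 3 and 4 can be computed as follows; these are the standard definitions on flattened permittivity vectors, a sketch rather than the authors' exact code.

```python
import numpy as np

def relative_error(g_rec, g_true):
    """Relative image error between reconstructed and true permittivity vectors."""
    return np.linalg.norm(g_rec - g_true) / np.linalg.norm(g_true)

def correlation_coefficient(g_rec, g_true):
    """Pearson correlation coefficient between the two distributions."""
    a = g_rec - g_rec.mean()
    b = g_true - g_true.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))
```

A lower relative error and a correlation coefficient closer to 1 both indicate a reconstruction closer to the true dielectric constant distribution.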
To improve the accuracy of electrical capacitance tomography image reconstruction, this paper proposes an ECT image reconstruction method based on fuzzy pattern recognition and sensitive field optimization, which integrates the optimization of the sensitive field into the image reconstruction process. The sensitive field of the flow pattern is selected by fuzzy pattern recognition. Feature information is extracted from the approximate solution for data fusion, and zero-padding and random recombination extend the sensitive field matrix and the measured capacitance vector. A comprehensive observation equation is then constructed to solve for the dielectric constant distribution vector. In addition, COMSOL is used to build a 3D simulation model of the ECT system to obtain the sensitive field matrix and the measured capacitance vector, and flow pattern identification experiments, simulated image reconstruction experiments, and imaging performance index calculations are carried out. The flow pattern identification results show that the method has high recognition accuracy and robustness against noise, confirming the effectiveness of the fuzzy-pattern-based ECT flow pattern identification method. The image reconstruction results and imaging performance metrics show that the proposed method obtains better ECT image reconstruction quality under the same experimental conditions. It provides an approach and an idea for maximizing the effect of image reconstruction optimization algorithms.
Dynamic point target detection is vital in fields such as computer vision, remote sensing, and the military. As the technology develops, the demand for real-time, highly sensitive target detection keeps increasing, and single-photon imaging holds great potential here. Unfortunately, most currently available single-photon detectors have only a single pixel or limited resolution, and traditional scanning imaging with these detectors wastes time. Single-pixel single-photon imaging based on compressed sensing has therefore become a research hotspot. However, traditional single-photon detection relies on photon number accumulation, which requires longer acquisition to resist shot noise under extremely weak target signals, thus reducing detection speed. In recent years, first-photon imaging has been proposed to form an image from only one photon per pixel by exploiting the photon's time information, but so far this technology has been applied only to active lidar systems, limiting its application scenarios. Thus, we propose a passive compressed sensing single-photon imaging method for weak target detection, which utilizes first-photon time information to improve the sensitivity and sampling speed of point target detection. Simulation analysis and experimental verification show that this method enables high-precision imaging and positioning of weak targets under passive detection conditions and is suitable for the simultaneous detection of multiple moving point targets. Our study is of great significance for improving the performance of weak target detection technology.
Firstly, we analyze the statistical relationship between the first-photon time and the average photon number under the influence of shot noise in single-photon detection. The results show that as the average photon number increases, the probability of a smaller first-photon time increases (Fig. 1). Based on this, a point target detection method based on compressed sensing imaging with first-photon time measurement is proposed. This method employs a digital micromirror device (DMD) to spatially modulate the photon-level target light and measures the arrival time of the first photon on the single-photon detector after each modulation (Fig. 2). By setting a threshold, the correspondence between the target position and the modulation matrix is estimated from the first-photon time, leading to a binary measurement of 0 or 1; target-related information can thus be extracted from a single photon detected after each modulation. Using the estimation results and modulation matrices, the point target image is reconstructed via a compressed sensing algorithm to detect the target position. Finally, a frame-difference denoising algorithm is proposed that calculates the intensity difference between corresponding pixels in adjacent frames and, with a set threshold, identifies a reconstructed point as a target or a noise point. As a result, reconstruction noise can be removed from the dynamic detection results while the information of moving point targets is retained (Fig. 3).
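The measurement model can be illustrated with a toy simulation: the first-photon arrival time after each DMD pattern is exponentially distributed with a rate proportional to the modulated intensity, thresholding yields a binary measurement, and a generic sparse solver (here plain ISTA, standing in for whatever compressed sensing algorithm the system uses) recovers the point target. All names and parameter values are illustrative.

```python
import numpy as np

def first_photon_measure(A, x, t_th, rng):
    """Simulate one binary measurement per DMD pattern: the first-photon
    arrival time is exponential with rate proportional to the modulated
    intensity, and a time below the threshold t_th is recorded as 1."""
    rates = A @ x + 1e-9                    # photon rate after each pattern
    t_first = rng.exponential(1.0 / rates)  # first-photon arrival times
    return (t_first < t_th).astype(float)

def ista(A, y, lam=0.05, n_iter=300):
    """Plain ISTA sparse recovery of the point-target scene from the
    binary measurement vector y."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + A.T @ (y - A @ x) / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x
```

With a bright point target, patterns that cover the target almost always record a 1 and the others a 0, so the binary vector carries enough information for sparse reconstruction at a low sampling rate.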
Different from traditional compressive single-photon imaging based on photon number accumulation, this method leverages first-photon time information in the passive detection mode: within each modulation, only one photon needs to be detected. By extracting the useful information from the first-photon time and combining it with a compressed sensing algorithm, we can image and locate point targets quickly and accurately with a very low sampling number and extremely low photon numbers. We first verify the effectiveness of this method by simulations, studying the effects of time threshold, measurement matrix sparsity, and modulation time on detection performance (Fig. 4, Tables 1-3). With optimal parameters, the point target detection probability exceeds 99%. An optical system is then built to verify the performance in real experiments. The experimental results show that, for a 64 pixel×64 pixel image, point targets can be accurately detected with only a 2.2% sampling rate. In multi-frame detection of a moving target, the frame-difference denoising algorithm removes noise points from the reconstructed results and provides the trajectory of the moving point target (Fig. 7). Furthermore, the method is also applicable to the simultaneous detection of multiple moving point targets (Fig. 8).
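One plausible reading of the frame-difference denoising step is sketched below: a reconstructed point is kept only if its intensity changes by more than a threshold between adjacent frames, so static noise spikes are suppressed while moving targets survive. This is our interpretation, not the authors' code.

```python
import numpy as np

def frame_difference_denoise(frames, th):
    """Keep a reconstructed point only if its intensity changes by more
    than th between adjacent frames; pixels that stay constant are
    treated as static noise and zeroed out."""
    frames = np.asarray(frames, dtype=float)
    out = np.zeros_like(frames)
    for k in range(1, len(frames)):
        moving = np.abs(frames[k] - frames[k - 1]) > th   # change mask
        out[k] = frames[k] * moving
    return out
```

A moving target occupies a new pixel in each frame and thus always produces a large frame-to-frame difference at its current position, while a noise point that persists at a fixed pixel is removed.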
We propose a point target detection method based on compressive single-photon imaging with first-photon time measurement. By utilizing the first-photon arrival time for target detection, the method breaks through the limitation of passive detection schemes that employ only the single dimension of photon number. For each modulation, at most one photon is required, which dramatically improves the utilization efficiency of photon information and achieves highly accurate point target detection in ultra-weak light environments. Additionally, compared with traditional scanning single-photon imaging, we adopt a compressed sensing algorithm to achieve sparse point target reconstruction and position detection at a low sampling rate. Finally, for dynamic target detection, we propose an adjacent-frame difference algorithm that reduces reconstruction noise and realizes high-quality simultaneous detection of multiple moving point targets. Simulations show that the point target detection probability can exceed 99%, and optical experiments prove that a point target can be accurately detected at a sampling rate of only 2.2%, demonstrating the feasibility of this method in real conditions.
As information technology develops rapidly, cameras are used not only as photography tools for artistic creation but also as hardware devices for visual sensing, serving as the “eyes” of machines. They are now widely applied in 2D computer vision tasks such as image classification, semantic segmentation, and object recognition. However, traditional cameras have two inherent limitations. First, meeting the resolution requirement sacrifices the depth-of-field range; beyond that range, defocus blur can disrupt the operation of subsequent algorithms. Second, because traditional cameras map the 3D world onto a 2D plane, they lose the depth information of the scene, making them difficult to apply to rapidly developing 3D computer vision tasks. Existing depth acquisition methods, such as structured light, time-of-flight, and multi-view geometry, are inferior to single-lens cameras in power consumption, cost, and size. Therefore, we propose a single-camera 3D imaging method based on a double helix phase mask, which achieves depth estimation and extended-depth-of-field imaging simultaneously with simple hardware modifications.
We propose an imaging method based on a double helix phase mask that simultaneously acquires scene depth information and extends the depth of field. By inserting a designed double helix phase mask at the aperture stop of the camera, the imaging beam is modulated into a double helix shape. On the one hand, depth information is encoded in the image through the sensitive rotation of the double helix point spread function with defocus. On the other hand, owing to the longer depth of focus of the double helix beam, object points are encoded in the form of a double helix point spread function over a larger depth-of-field range, with the depth of each object point appearing in the image as local ghosting. We use convolutional neural networks to decode and reconstruct the encoded image end to end, obtaining the depth map and the extended-depth-of-field image of the scene while jointly optimizing the phase mask parameters. We analyze the influence of the phase mask parameters and the object distance on imaging performance and discuss how to select the phase mask parameters reasonably within a given depth range.
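The depth-from-rotation idea can be illustrated with a toy model: two Gaussian lobes stand in for the double helix PSF, the line connecting them rotates with defocus, and the rotation angle is recovered from the image's second moments; depth then follows from a calibrated angle-to-defocus rate. All parameters here are illustrative.

```python
import numpy as np

def dh_psf(theta, size=33, sep=6.0, sigma=1.5):
    """Toy double-helix PSF: two Gaussian lobes whose connecting line is
    rotated by theta (the rotation encodes defocus, hence depth)."""
    y, x = np.mgrid[:size, :size] - size // 2
    cx, cy = sep * np.cos(theta), sep * np.sin(theta)
    g = lambda x0, y0: np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
    return g(cx, cy) + g(-cx, -cy)

def estimate_theta(psf):
    """Recover the lobe orientation from the image's second moments;
    depth would follow as z = theta / rotation_rate for a calibrated rate."""
    y, x = np.mgrid[:psf.shape[0], :psf.shape[1]].astype(float)
    w = psf / psf.sum()
    mx, my = (w * x).sum(), (w * y).sum()
    mxx = (w * (x - mx) ** 2).sum()
    myy = (w * (y - my) ** 2).sum()
    mxy = (w * (x - mx) * (y - my)).sum()
    return 0.5 * np.arctan2(2 * mxy, mxx - myy)   # principal-axis angle
```

In the actual system the network learns this decoding implicitly; the moment-based estimate merely shows that the rotation angle, and hence depth, is recoverable from the encoded image.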
To validate our method, we train it on the FlyingThings3D dataset and test the trained model on the NYU Depth V2 dataset. The relative error of depth estimation on NYU Depth V2 reaches as low as 0.083 (Table 2), and the extended-depth-of-field images achieve a highest PSNR of 35.254 dB and SSIM of 0.960 (Table 3). Compared with traditional optical systems, the depth of field can be extended by several tens of times. Using a phase mask with more rings yields a larger depth-of-field extension, but the increased side lobes of the double helix point spread function may slightly decrease the depth estimation accuracy and the quality of the extended-depth-of-field images; the overall performance nevertheless remains acceptable. The depth estimation accuracy of our method depends on the depth range to be measured: reducing the detection range or increasing the object distance improves the average depth estimation accuracy (Fig. 13). For potential application scenarios such as gate face recognition, a physical system is built with a test range of 1.1-1.32 m. The relative depth estimation error in real scenes is 2.2%, and the depth of field is extended by about 10 times (Fig. 17), proving the effectiveness and practicality of the proposed method in real scenes.
We introduce a three-dimensional imaging method based on a double helix phase mask, which only requires adding a phase mask to an existing lens to simultaneously estimate scene depth from a single captured frame and achieve extended-depth-of-field imaging. The method relies on no built-in light source or additional lens, allowing further reductions in size and power consumption. Compared with depth estimation algorithms based solely on deep learning, our method generalizes well because it estimates depth from optically introduced features rather than high-level semantic information about the scene. Overall, the method shows potential in low-cost 3D imaging and detection. It does have limitations: it depends on texture and still works in weakly textured scenes, but it may fail where texture is severely missing, for example due to overexposure (Fig. 14). In addition, noise in real scenes can cause errors in some depth values, reduce the average depth estimation accuracy, and introduce slight artifacts in the reconstructed images. Subsequent research could incorporate noise suppression into the algorithm to address this problem.
Fringe projection profilometry (FPP) has recently been widely adopted in numerous fields, owing to advantages such as rapid measurement, high precision, non-contact operation, and cost-effectiveness. However, for objects with complex textures, unavoidable camera defocus means that each pixel on the camera's imaging plane essentially records the convolution of the point spread function (PSF) with the reflected intensity of every point within its region. Where reflectivity changes abruptly, points within the PSF range have diverse reflectivities and their reflected light cross-contaminates after defocus, producing phase errors that ultimately degrade the final reconstruction accuracy. Conventional solutions fall into two categories. One approach estimates the PSF distribution, divides the phase into correct and erroneous regions, and compensates the erroneous regions using the adjacent correct ones; however, it relies heavily on the accuracy of the PSF estimation. The other incorporates single-pixel imaging methods (SIM) for error compensation, but it requires many measurements and cannot accommodate a high degree of camera defocus. To address these issues, we propose a three-dimensional (3D) measurement method for complex textured objects based on bidirectional fringe projection and establish a structured light 3D measurement system. Comparison experiments demonstrate that the proposed method reconstructs complex textured objects with higher precision at the same measurement efficiency.
In our work, we proposed a 3D measurement method for complex textured objects based on bidirectional fringe projection to reduce the reconstruction errors caused by abrupt reflectivity changes of complex textures. A phase error model of the reflectivity mutation region under camera defocus was first built through theoretical analysis and simulation experiments. The model revealed the correlation among the phase error, the phase gradient, and the gray gradient. Accordingly, a high-precision measurement methodology for complex textured objects based on bidirectional fringe projection was proposed. The method obtained bidirectional phase information by projecting horizontal and vertical fringes, mapped the horizontal phase to the vertical direction using the proposed mapping method, and combined it linearly with the original vertical phase to obtain the average phase. Subsequently, the angle between the tangent of the extracted texture edges and the rectified phase gradient was computed, and the corresponding error compensation algorithm was applied to the average phase. Finally, the corrected point cloud was obtained by reconstruction.
The proposed method is compared with the horizontal conventional FPP method, the vertical conventional FPP method, and the existing method of Rao, which compensates for the phase error by estimating the PSF distribution. To ensure consistent measurement efficiency, the comparison methods are projected repeatedly. To quantify the measurement accuracy for planar objects, such as the calibration plate and the card holder, we compute the mean absolute error (MAE) and root mean square error (RMSE) by planar fitting. The measurement results of the calibration plate (Fig. 10 and Table 1) show that the proposed method has the highest reconstruction accuracy, with 45.4% and 32.0% reductions in MAE and 44.1% and 31.5% reductions in RMSE compared to traditional repeated projection and Rao's method, respectively. The measurement results of the card holder (Figs. 11-12 and Table 2) demonstrate that the proposed method reconstructs best and performs best at detailed textures, reducing the MAE by 33.2% and 23.1% and the RMSE by 38.5% and 28.8% compared to the traditional repeated projection method and Rao's method, respectively. To prove the generalizability of our method, a curved object with complex texture, a vase, is measured; the point clouds (Fig. 13) show that the reconstruction obtained with our method is the smoothest, indicating the highest reconstruction accuracy. To describe the effectiveness of the proposed method more objectively, two measurements are made on a pure white standard step block, with a black texture drawn on its surface for the second measurement. The MAE and RMSE of the point cloud from the second measurement are calculated by taking the reconstructed point cloud before drawing the texture as the ground truth. The measurement results (Fig. 15 and Table 3) indicate that the proposed method has the highest reconstruction accuracy, with 43.2% and 29.9% reductions in MAE and 50.1% and 37.3% reductions in RMSE compared to the traditional repeated projection method and Rao's method, respectively.
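The planar-fit accuracy metrics used above can be sketched as follows. This is a hedged illustration with synthetic point values (the real data are the measured point clouds): a plane z = ax + by + c is fitted by least squares, and MAE and RMSE are computed from the out-of-plane residuals.

```python
import numpy as np

# Synthetic "measured" points on a noisy plane (all values illustrative).
rng = np.random.default_rng(0)
x = rng.uniform(0, 50, 2000)
y = rng.uniform(0, 50, 2000)
z = 0.1 * x - 0.05 * y + 3.0 + rng.normal(0, 0.02, 2000)

# Least-squares plane fit: solve [x y 1] @ [a b c] = z.
A = np.column_stack([x, y, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
resid = z - A @ coef

mae = np.mean(np.abs(resid))          # mean absolute error
rmse = np.sqrt(np.mean(resid ** 2))   # root mean square error
```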
In response to the measurement errors caused by discontinuous reflectivity in complex textures within a structured light system, we propose a 3D measurement method for complex textured objects based on bidirectional fringe projection. In this paper, a phase error model for the reflectivity mutation area under camera defocus is first established, which indicates that the phase error is associated with both the phase gradient and the gray gradient. Consequently, a method is proposed that uses bidirectional fringes to compensate for the error in different regions according to their angles. Given the need for bidirectional phase information, a mapping method is further proposed that maps the horizontal phase to the vertical direction. To verify the effectiveness of the proposed method, an FPP system is built to measure multiple complex textured objects with both flat and curved surfaces, and the proposed method is compared with other existing methods. The comparison experiments demonstrate that the proposed method reconstructs complex textured objects with higher precision at the same measurement efficiency, reducing the MAE and RMSE by up to 45.4% and 50.1%, respectively.
Structured light technology has been widely used in industrial inspection, cultural relic protection, biomedicine, and other fields due to its non-contact, full-field imaging advantages. As one of the mainstream three-dimensional (3D) imaging methods, phase-shifting profilometry (PSP) measures target surface shape by projecting multiple phase-shifted patterns and capturing the corresponding images. Since the 3D shape information is closely related to the phase distribution, accurate phase retrieval is crucial for high-accuracy measurement. However, the gamma nonlinearity of both the projector and the camera distorts the ideal sinusoidal fringe patterns, thereby introducing errors. Such non-sinusoidal fringe patterns result in phase errors, a major error source that degrades three-dimensional reconstruction accuracy and measurement precision. Although large-step PSP can reduce the nonlinear error, it requires more patterns and therefore more time. It is thus very meaningful to develop a phase error correction algorithm with both high accuracy and fast speed.
The N-step PSP has the advantages of fast measurement speed, high accuracy, and a non-contact nature, making it widely used in the phase measurement field. Since three-step PSP is most easily affected by gamma nonlinearity, we take it as an example to illustrate the principles. From the calculation formula of the nonlinear error in three-step PSP, it can be deduced that the phase error is periodic and symmetric within a 2π period. Given the periodicity of the phase error, we first propose a 1/3-period lookup table (LUT) method, and then, considering the symmetry of the phase error, a 1/6-period lookup table (sLUT) method. A standard whiteboard is imaged to calculate the actual phase values using three-step PSP and the theoretical phase values using twelve-step PSP (Fig. 2). The constructed full-period LUT has 360 elements. Exploiting the periodicity of the phase error, a 1/3-period LUT is constructed with 120 elements; exploiting its symmetry, the sLUT needs only 60 elements. The sLUT is evaluated in simulation and on real objects to compare the error correction effects and correction times of the whole-period LUT, 1/3-period LUT, and sLUT methods.
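The table-folding idea behind the 360/120/60-element tables can be sketched as follows. The error profile here is a synthetic stand-in (the nonlinear error of three-step PSP is often approximately sin(3φ)-shaped, which repeats three times per 2π and is antisymmetric about each repetition's midpoint); the folding logic, not the profile, is the point.

```python
import numpy as np

# Toy full-period error table: 360 samples of a sin(3*theta) profile.
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
full_lut = np.sin(3 * theta)

lut_120 = full_lut[:120]    # periodicity: keep one 2*pi/3 period
slut_60 = lut_120[:60]      # symmetry: keep half of that period

def lookup(idx):
    """Recover full_lut[idx] for idx in 0..359 from the 60-element sLUT."""
    i = idx % 120                       # fold by periodicity
    if i < 60:
        return slut_60[i]
    return -slut_60[(120 - i) % 60]     # fold by (anti)symmetry
```

Every one of the 360 full-table entries is recoverable from the 60 stored values, which is the 83% parameter reduction discussed below.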
The proposed sLUT method is rigorously evaluated against the conventional full-period LUT approach through simulation and experimental validation. The simulation is tested on a standard sphere and a peaked surface, and the standard deviations (STDs) of the results corrected by the whole-period LUT, 1/3-period LUT, and sLUT are calculated. The results show that, compared with the whole-period LUT, the sLUT achieves equivalent error correction performance. A fringe projection system based on an industrial camera (model: Basler a2A1920-160ucBAS) and a digital projector (model: DLP LightCrafter 4500) is used. The camera resolution is 1920 pixel×1200 pixel and the projector resolution is 912 pixel×1140 pixel. Experimental validation is performed on a device with a CPU (AMD Ryzen 5 5600H), a GPU (NVIDIA GeForce RTX 3050 Ti Laptop), and 16 GB of memory. The test results show that the maximum difference in STDs between the two methods is only 0.002 rad, indicating that the sLUT matches the error correction performance of the whole-period LUT. Notably, the sLUT achieves these results with only 60 table elements, an 83% reduction from the 360 elements of the whole-period LUT. This parameter efficiency allows faster computation while still enabling high-fidelity nonlinear error modeling. Quantitative analysis shows the average error correction time is reduced from 0.97 s for the whole-period LUT to just 0.12 s for the sLUT (Table 1), an approximately 8-fold speedup. In summary, both simulation and physical experiments strongly validate that the proposed sLUT methodology offers correction accuracy on par with the whole-period LUT while significantly improving computational efficiency, highlighting its advantages for practical phase metrology applications.
We propose a new method for addressing nonlinear phase errors in PSP: an error correction method based on the sLUT. This method exploits both the periodicity and the symmetry of the phase errors and constructs a lookup table covering only a 1/6-period range. Experimental results demonstrate that, while reducing the parameter size by 83%, the sLUT achieves the same error correction performance as the traditional LUT, and the computation time for error compensation is reduced to about one-eighth. The experimental results also indicate that the 1/3-period LUT achieves the highest phase correction accuracy, which may be attributed to its ability to capture the periodic characteristics of the phase errors more accurately during correction. By comparison, the parameter sizes of the full-period LUT and the sLUT may not fully match the periodicity of the phase errors, resulting in slightly inferior correction performance. However, the optimal parameter size for the sLUT may vary with experimental conditions, so further research is needed to balance accuracy and speed by determining the most suitable parameter size.
Traditional 2D detector-based phase measurement methods are always limited by a specific spectral response range, for which single-pixel wavefront imaging provides a new approach. A digital micromirror device (DMD)-based single-pixel common-path interferometer is established, in which the Hadamard basis modulates the target wavefront and a checkerboard partition on the DMD divides the light field into signal and reference fractions. Phase images are then formed using the mathematical principles of single-pixel imaging and phase-shifting algorithms. The results show that, with four-step phase-shifting, the mean relative error of the calculated focal length is as low as 0.0298% at a phase image resolution of 128 pixel×128 pixel for a lens with a nominal focal length of 1000 mm. This method is characterized by a simple device, low cost, and a simple calculation principle. Benefiting from the advantages of single-pixel detection, it is expected to be adopted for wavefront detection of lenses or transparent objects in weak-light environments and in the extreme ultraviolet and far-infrared bands, further expanding the application scope of single-pixel wavefront imaging.
According to single-pixel wavefront imaging theory, a DMD-based single-pixel multi-step phase-shifting common-path interferometer is established, in which the Hadamard basis modulates the target wavefront and a checkerboard partition on the DMD divides the light field into signal and reference fractions to form the interference. The lens wavefront is then reconstructed from the passively detected coefficients correlated with the Hadamard modulation patterns, and the phase and amplitude of the physical lens are obtained from the reconstructed complex wavefront. For the wavefront reconstruction, four-step and three-step phase-shifting together with a down-sampling strategy are employed.
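The multi-step phase-shifting recovery can be sketched numerically. This is a minimal standard four-step example with synthetic intensities, not the authors' pipeline: with shifts of 0, π/2, π, and 3π/2, the wrapped phase is atan2(I4 − I2, I1 − I3).

```python
import numpy as np

# A synthetic "wavefront" to recover, kept inside (-pi, pi) so the wrapped
# result equals the true phase directly (bias A and modulation B are made up).
phi = np.linspace(-np.pi + 0.01, np.pi - 0.01, 256)
A, B = 1.0, 0.6

# Four phase-shifted intensity measurements.
I1, I2, I3, I4 = [A + B * np.cos(phi + d)
                  for d in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]

# Four-step phase-shifting recovery: I4 - I2 = 2B sin(phi), I1 - I3 = 2B cos(phi).
phi_rec = np.arctan2(I4 - I2, I1 - I3)
```

The same arithmetic applies per Hadamard coefficient in the single-pixel setting, where the four intensities come from the bucket detector under the four shifted reference patterns.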
Two lenses with focal lengths of 1000 mm and 500 mm are selected as the targets under test. From the phase detection results, one can obtain the wrapped phase distribution and a 3D display of the unwrapped phase. The cross-sections of the measured phases agree well with the theoretical values [Figs. 3(c) and 3(f)]. The measured focal lengths of the lenses are 1000.2 mm and 499.5 mm, corresponding to relative errors of 0.02% and -0.10% between the measured and theoretical values, which proves that the reconstructed results agree well with theory and further demonstrates the suitability of phase-shifting common-path interferometry for lens phase detection. Next, the influence of different pinhole sizes on the measurement accuracy is examined, as shown in Table 1; considering practical factors, the 20 μm pinhole is selected for subsequent experiments.
Then, as a proof of concept under low-resolution circumstances, the phase image of the 1000 mm lens at 64 pixel×64 pixel is retrieved using the four-step phase-shifting method (Fig. 4). The 1000 mm lens is measured five times consecutively at two different resolutions. At a resolution of 128 pixel×128 pixel, the measured focal lengths are given in Table 2: the average focal length is 1000.080 mm, and the mean relative error between the measured and nominal focal lengths is 0.0298%. At a resolution of 64 pixel×64 pixel, the results are shown in Table 3: the average measured focal length and the mean relative error are 999.422 mm and 0.1603%, respectively.
Finally, an experiment on improving the measurement speed is carried out. With the three-step phase-shifting method, the cross-sections of the measured phases remain consistent with the theoretical values (Fig. 5). The measured focal lengths of the two lenses are 1000.6 mm and 501.6 mm, with relative errors of 0.06% and 0.32% from the theoretical values. The lens phase is also reconstructed by combining three-step phase-shifting with the down-sampling strategy. For the 1000 mm lens (Fig. 6), the measured focal lengths are 1001.4 mm and 1001.9 mm at sampling rates of 0.8 and 0.4, giving relative errors of 0.14% and 0.19%, respectively. For the 500 mm lens (Fig. 7), the calculated focal lengths are 503.1 mm and 503.4 mm at sampling rates of 0.8 and 0.4, giving relative errors of 0.62% and 0.68%, respectively.
To the best of our knowledge, DMD-based common-path interference single-pixel imaging is here first successfully employed to detect cemented doublets with different focal lengths. Experimental results show that, for both the 1000 mm and 500 mm lenses, the measured focal lengths agree closely with the theoretical ones when the four-step phase-shifting algorithm is adopted. The influence of image resolution on the measurement results is investigated, showing that the mean relative error is as low as 0.0298% when the 128 pixel×128 pixel phase image measured by four-step phase-shifting is used to calculate the focal length. Additionally, by exploiting the down-sampling strategy, the imaging time is further shortened when the three-step phase-shifting algorithm is adopted for phase retrieval. We thus provide a simple and cost-effective way to detect lenses and further advance single-pixel imaging technology toward practical applications.
Over the past decade, integrated circuit (IC) technology has advanced remarkably through complex three-dimensional (3D) device structures, including new materials, patterning techniques, and processes that provide higher device performance at reduced feature sizes. These nanoscale 3D structures pose significant metrology challenges. Lithography plays an important role in IC manufacturing, and its quality directly affects product yield. As an important factor affecting lithography quality, overlay error faces increasingly stringent precision requirements as the IC manufacturing process continues to advance and advanced node sizes continue to shrink. As a rule of thumb, the overlay accuracy should be better than 20%-30% of the critical dimension (CD). Current overlay error measurement methods fall mainly into two categories: image-based overlay (IBO) and diffraction-based overlay (DBO). The IBO method is limited by the resolution of the optical microscope as nodes continue to shrink, and the focal length and laser wavelength must be adjusted to enhance image contrast. The traceability of the DBO method has not been studied, which may result in large measurement errors. Through-focus scanning optical microscopy (TSOM) is a fast, non-destructive, and highly reliable measurement technique. To realize rapid, non-destructive detection of the overlay error, a novel method for detecting overlay error using TSOM was proposed and explored in detail. This innovative approach aims to enhance the accuracy and efficiency of overlay error measurements, ultimately contributing to the advancement of IC technology.
The sample was placed on a microscope stage driven by a piezoelectric transducer (PZT) to scan along the Z-axis through the focus. During scanning, a series of sample images at different focus positions was captured by a charge-coupled device (CCD) camera. These images were stacked according to their spatial positions to obtain a 3D TSOM light field, and a TSOM map containing the structural information of the sample was generated by intercepting the light field along the Z-axis. After the TSOM maps were processed, training and test sets were established, and a convolutional neural network (CNN) model was constructed. The mean square error (MSE) loss function and the adaptive moment estimation (Adam) optimizer were used, and the prediction performance of the model was evaluated on the test set. If the evaluation did not meet the desired criteria, hyperparameters such as the optimizer settings, the number of convolutional layers, and the activation function were adjusted and the model was retrained to determine the final model parameters. The parameters under test were then extracted by model prediction.
This method enables accurate prediction of overlay errors. For models trained on samples with different offsets, the predicted curve closely follows the true-value curve (Fig. 6). When the sample offset interval is 200 nm, the mean absolute error (MAE) and root mean square error (RMSE) are 4.2 nm and 5.3 nm, respectively. As the offset interval decreases, both the MAE and RMSE decrease linearly (Fig. 7); when the interval is reduced to 20 nm, they fall to 0.05 nm and 0.12 nm, respectively. This can be explained by the fact that, as the offset interval of the training samples decreases, the spacing of the deep-learning label values decreases and the measurement resolution is enhanced. The standard deviation (STD) values of the four groups of samples are all below 6 nm and decrease markedly with the offset interval, indicating that the offset interval of the samples also affects the repeatability of the prediction results. When the offset interval is reduced to 20 nm, the STD of the overlay error measurement results is below 0.083 nm (Fig. 8), and the corresponding repeatability accuracy (3σ) is 0.25 nm. Using experimental samples with smaller intervals thus yields higher measurement accuracy.
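The accuracy and repeatability metrics quoted above can be made concrete with a hypothetical worked example (all numbers below are invented, not the paper's data): MAE and RMSE against the programmed offsets, and repeatability 3σ from repeated predictions of one mark.

```python
import math

# Hypothetical programmed offsets and model predictions, in nm.
nominal   = [0.0, 20.0, 40.0, 60.0]
predicted = [0.03, 20.05, 39.96, 60.04]

errors = [p - n for p, n in zip(predicted, nominal)]
mae  = sum(abs(e) for e in errors) / len(errors)
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical five repeated predictions of one 20 nm mark, in nm.
repeats = [20.02, 20.06, 20.01, 20.05, 20.03]
mean = sum(repeats) / len(repeats)
std  = math.sqrt(sum((r - mean) ** 2 for r in repeats) / (len(repeats) - 1))
three_sigma = 3 * std          # repeatability accuracy (3*sigma)
```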
The measurement of Bar-in-Bar mark offset is realized based on TSOM combined with a deep learning model. Unlike the traditional IBO method, we use an optical microscope to capture a series of images at different focus positions, generating a TSOM atlas. A convolutional neural network model is then established for training and verification, which saves measurement simulation time and enables regression prediction of the overlay mark offset. With samples at a 20 nm offset interval used for model training, the measurement accuracy is better than 0.1 nm and the repeatability accuracy (3σ) is better than 0.25 nm. The experimental results show that this method can measure sub-nanometer overlay errors and is suitable for various types of overlay marks. In addition, it has a simple structure and low cost, offering a novel approach to overlay error measurement.
The full-chip source-mask optimization (SMO) technique, which jointly optimizes the illumination source and the mask, plays a crucial role in achieving process nodes of 28 nm and below. During full-chip SMO, pattern selection techniques must be employed to identify critical patterns from the mask layout; based on these critical patterns, an illumination mode suitable for the full-chip mask patterns can be determined. Among the various pattern selection methods, the spectrum-based approach stands out because it requires no a priori knowledge and offers high reliability. However, existing spectrum-based methods cannot efficiently select the smallest set of critical patterns in the shortest time, leaving room for improvement in efficiency.
In this paper, we propose a critical pattern selection method based on the breadth-first search algorithm. Building on existing spectrum-based pattern selection methods, our approach leverages the breadth-first search mechanism to ensure that the leaf nodes nearest the root are discovered first during the search. By finding the shortest paths and combining them, we efficiently select all minimal critical pattern groups without traversing the entire critical pattern tree, significantly improving the efficiency of critical pattern selection.
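The breadth-first mechanism described above can be sketched generically. The tree below is a toy stand-in for a critical-pattern tree (node names and structure are invented); the point is that BFS expands level by level, so the first leaves reached are the nearest ones and the search stops without traversing the whole tree.

```python
from collections import deque

def nearest_leaf_paths(tree, root):
    """Return all shortest root-to-leaf paths via breadth-first search."""
    queue = deque([[root]])
    found, depth = [], None
    while queue:
        path = queue.popleft()
        if depth is not None and len(path) > depth:
            break                        # deeper than the first leaves found
        children = tree.get(path[-1], [])
        if not children:                 # a leaf: minimal depth by BFS order
            found.append(path)
            depth = len(path)
        else:
            for child in children:
                queue.append(path + [child])
    return found

# Toy tree: "b" is the nearest leaf, so only the shortest group is returned.
demo = {"root": ["a", "b"], "a": ["a1", "a2"], "b": [], "a1": [], "a2": []}
shortest = nearest_leaf_paths(demo, "root")   # [["root", "b"]]
```

A depth-first search, by contrast, would descend into the "a" branch first and could not stop at the first leaf without risking a non-minimal group.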
ASML's Tachyon Tflex software is used for simulation verification. The simulation employs a set of 60 randomly selected patterns from the 45 nm standard cell library. When pattern periodicity is distinguished, our proposed method identifies two minimal critical pattern groups (Fig. 7), whereas Tachyon Tflex produces only one group (Fig. 8). Comparing the critical metrics under a 10% CD deviation and 5% exposure latitude (EL), group A of our method exhibits MEEF and ILS indicators similar to Tachyon Tflex but a significantly better depth of focus (DOF) (Table 2). Furthermore, without considering pattern periodicity, our method identifies eight minimal critical pattern groups (Fig. 10), while Tachyon Tflex yields only one (Fig. 11); under the same CD deviation and EL conditions, G3 of our method outperforms Tachyon Tflex. By focusing on selecting the smallest set of critical patterns, our approach avoids exhaustive searches of the entire pattern tree, resulting in higher efficiency than depth-first-search-based techniques.
This paper proposes a critical pattern selection method for full-chip SMO based on breadth-first search, which efficiently identifies critical patterns for full-chip SMO. Using a test pattern set extracted from the 45 nm standard cell library, we conduct simulation analyses with the commercial computational lithography software Tachyon Tflex and compare the results against Tachyon Tflex itself. The simulation results demonstrate that our method achieves a superior process window. By employing breadth-first search, we avoid exhaustive searches of the entire pattern tree and ensure that the first critical pattern group selected contains the fewest patterns; moreover, all minimal critical pattern groups are identified rapidly, minimizing the number of critical patterns while allowing comparative analysis to select the groups with larger process windows.
The interaction between laser pulses and materials, a physical mechanism common to fields such as laser propulsion (LP) and laser-induced breakdown spectroscopy, has been extensively studied in recent decades. LP has gained widespread attention due to its inherent advantages of reducing launch costs and increasing payload. With the development of LP, research has gradually shifted from the macroscopic to the microscopic scale. However, during propulsion, direct irradiation of particles by high-energy laser pulses can permanently damage the particle surface, and a large laser spot can deflect the particle's trajectory. A device that can control the spot diameter and reduce surface damage to particles is therefore needed. In this work, we propose LP based on a tapered fiber to propel microscale microspheres and analyze the propulsion mechanism from the motion of the microspheres. We study the effects of laser energy and microsphere size on the movement distance of the microsphere. In addition, we analyze the influence of the fiber tip size on the laser energy emitted from the tip and discuss the relationship between laser energy density and fiber tip diameter, revealing a nonlinear increase in laser energy and a decrease in scattering loss as the fiber diameter increases. Our research may provide further support for the precise manipulation of colloids and biomaterials at the micrometer level.
1) Experimental setup for LP. A tapered fiber structure is prepared by flame heating. A Nd:YAG laser is coupled into the fiber through a 40× objective lens and emitted from the fiber tip. The tip and the microspheres are placed on three-dimensional translation stages. With a vertical camera (CCD1) and a horizontal camera (CCD2), the driven microspheres can be positioned flexibly and precisely, and their dynamics are captured by CCD1. After the propulsion experiment, the laser energy emitted from the fiber tip is measured with an energy meter. 2) Formation of the plasma shock wave. During the interaction between the laser emitted from the fiber tip and the atoms, electrons are excited to higher energy states. These high-energy electrons are accelerated, collide with other atoms, and generate additional electrons. When the electron density reaches a critical value (~10¹⁶ cm⁻³), a high-temperature, high-pressure plasma forms. The shock wave generated by the expanding plasma then propels the microsphere forward through the recoil effect. 3) Calculation of microsphere movement distance and velocity. The dynamics of the microspheres are recorded by CCD1 at a frame rate of 1000 frame/s, so the time interval between adjacent images is 1/1000 s. Taking the first image as the initial state, the displacement s between two adjacent images gives the average velocity v=s/t over the 0-1/1000 s interval. Because the interaction time between the laser pulse and the microsphere is short, we take this average velocity as the initial velocity. To reduce experimental errors, each experiment is repeated three times under the same conditions.
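Step 3 can be sketched with worked numbers. The inter-frame displacement and the sphere density below are assumptions chosen for illustration (they are not stated in the text, though they are consistent with the values reported later): a 124 μm displacement per 1 ms frame and a silica-like density of 2500 kg/m³.

```python
import math

# Velocity from adjacent frames: v = s / t with t = 1 / fps.
fps = 1000.0        # frame rate, frame/s
s = 124e-6          # m, assumed displacement between adjacent frames
v = s * fps         # m/s; 0.124 m/s = 12.4 cm/s

# Momentum p = m * v for an 80 um sphere; density is an assumption (silica).
d = 80e-6           # m, microsphere diameter
rho = 2500.0        # kg/m^3, assumed density
m = rho * (4.0 / 3.0) * math.pi * (d / 2) ** 3
p = m * v           # N*s
```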
In the experiment of propelling an ~80 μm diameter microsphere using a laser with an energy of ~9.6 μJ through an ~8 μm fiber tip, the microsphere moves 547 μm within 6/1000 s. The maximum velocity is calculated to be 12.4 cm/s, and the momentum is determined to be P=8.3×10⁻¹¹ N·s. The calculated value P differs from the theoretical value P_M by three orders of magnitude. By adjusting the relative position between the fiber tip and the microsphere, we observe that the microsphere can move along the fiber direction as well as diagonally. These findings indicate that the shock wave ejection mechanism plays the dominant role in propelling the microspheres (Fig. 2). In a qualitative study of the effects of laser energy and microsphere size on microsphere movement, we find that the movement distance increases with laser energy: as the laser energy increases, the shock wave formed by the expanding plasma carries more energy and exerts a greater force on the microsphere surface. Conversely, the movement distance decreases as the microsphere size increases, which can be attributed to the greater resistance between the larger microsphere and the substrate surface. These experimental results further illustrate the propagation characteristics of the shock wave (Fig. 3). Investigating the relationship between laser energy and fiber tip diameter [Fig. 5(b)], we find that the laser energy emitted from the fiber tip exhibits a nonlinear increase, attributed to declining scattering loss with increasing fiber diameter. The calculated limit of the output energy density at the fiber tip is ~1.15 μJ/μm². For a fiber tip diameter of approximately 2 μm, the energy density is ~1.25 μJ/μm² [Fig. 5(c)], indicating that the fiber tip has been damaged.
We present a straightforward solution that makes LP of microspheres feasible using a tapered fiber structure. In the experiment, a laser with an energy of ~9.6 μJ is emitted from the fiber tip, driving the movement of a ~80 μm diameter microsphere. Within a time range of 6/1000 s, the microsphere moves a distance of 547 μm. The fact that PM
The distributed side-coupled cladding-pumped (DSCCP) fiber comprises an active signal fiber with gain characteristics and several passive multimode pump fibers, collectively coated to form an integral package. The pump laser injected into the pump fiber couples between the pump and signal fibers in the form of an evanescent wave. Upon entering the signal fiber's cladding, it excites rare-earth ions in the signal fiber's core, thus achieving laser gain amplification. In the realm of high-power single-fiber lasers, the primary challenges limiting power enhancement are pump injection and extremely high thermal loads. Therefore, combining cascaded pumping and distributed side-pumping has emerged as a promising and feasible pathway to achieve ultra-high power in the tens of kilowatts range.
The experimental setup is based on a master oscillator power amplifier scheme. A pair of fiber Bragg gratings and a 20/400 μm ytterbium-doped fiber form an optical cavity that generates a hundred-watt seed. A homemade 35 m (1+1) DSCCP fiber, with a highly Yb-doped active core and a core/cladding size of 60/300 μm, is pumped by five groups of 1018 nm pump sources in a counter-propagating configuration through a pump core with a core size of 310 μm. A Raman suppression array, consisting of a few homemade tilted fiber Bragg gratings, is placed between the oscillator and the amplifier to filter noise within the Raman band.
The experimental results demonstrate the highest output power of 20.13 kW from the signal fiber, with an optical-optical conversion efficiency of 81.0%. The fiber slope efficiency, fitted across the entire power range, reaches 82.3%. Spectral measurements exhibit a 3 dB linewidth of 0.44 nm for the seed laser at the hundred-watt level, expanding to 1.1 nm at the amplified power of 20.13 kW. The experiment also reveals a Raman suppression ratio of approximately 37.65 dB, indicating effective suppression of stimulated Raman scattering components in the spectrum.
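As a quick consistency check of the figures quoted above, the optical-to-optical conversion efficiency directly implies the injected pump power:

```python
# 20.13 kW of signal output at 81.0% optical-to-optical conversion efficiency
# implies roughly 24.9 kW of injected 1018 nm pump power.
p_out_kw = 20.13
efficiency = 0.810
p_pump_kw = p_out_kw / efficiency
```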
This achievement represents the first published report of a 20 kW single-fiber laser output using the (1+1) type distributed side-pumping approach. The success not only highlights the efficacy of the distributed side-pumping scheme for realizing high-power outputs but also paves the way for future research on further improving beam quality and achieving high-quality laser outputs approaching the diffraction limit in the tens of kilowatts range. In the next stage, we will focus on improving beam quality by controlling the core design and enhancing the coupling ability.
In recent years, the use of low-cost vision sensors for navigation and positioning has received increasing attention. Vision sensors offer high measurement accuracy, wide range, and rich information, and are non-contact, flexible, portable, and low-cost, so they can support large-scale multi-target tracking and complete positioning tasks in complex and confined industrial field environments. We study an indoor visual positioning system based on a camera and QR codes. First, the effective recognition range of the QR code beacon is analyzed, and a formula for the recognition range is derived from the marker size, camera resolution, and other parameters. Based on this formula, the layout of QR code beacons in the positioning scene is designed, and positioning is realized with a perspective-n-point (PnP) calibration algorithm. Finally, the validity of the QR code recognition range is verified by experiments.
We conduct the following research based on the existing perspective-four-point (P4P) QR code localization algorithm. 1) We define the recognition range of the QR code and derive a recognition-range formula from the recognition algorithm's accuracy, the QR code size, the camera resolution, and the camera field of view (FOV). 2) Based on this definition and formula, we design a QR code beacon deployment scheme for the target scene that covers a large positioning area with fewer QR codes, improves the recognition rate, and preserves the accuracy of the positioning algorithm. 3) We analyze the positioning performance with the camera fixed and moving, compute the positioning accuracy and recognition rate under different conditions, and verify the theoretical recognition range and positioning recognition rate.
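The paper's exact recognition-range formula is not reproduced in this summary, but a common pinhole-camera form can illustrate how marker size, resolution, and FOV interact: the marker must span at least some minimum number of pixels to be decoded. A hedged sketch, with all parameter values hypothetical:

```python
import math

def max_recognition_distance(marker_size_m, image_width_px, hfov_deg, min_pixels):
    """Upper bound on QR recognition distance under a pinhole-camera model.

    At distance d the field of view is 2*d*tan(hfov/2) wide, so a marker of
    size s spans s*W / (2*d*tan(hfov/2)) pixels.  Requiring at least
    `min_pixels` for reliable decoding and solving for d gives the bound.
    This is a generic model, not the paper's exact derivation.
    """
    return marker_size_m * image_width_px / (
        2.0 * min_pixels * math.tan(math.radians(hfov_deg) / 2.0)
    )

# Hypothetical parameters: 0.2 m marker, 1920 px image width,
# 60 deg horizontal FOV, 40 px minimum marker span for decoding.
d_max = max_recognition_distance(0.2, 1920, 60.0, 40)
print(f"max recognition distance: {d_max:.2f} m")
```

The linear dependence on marker size and resolution is the property the beacon-deployment design exploits: doubling the marker size doubles the coverage radius of each beacon.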
The actual test environment is a room of 7 m×5 m×3 m (Fig. 6), and the relevant experimental parameters are shown in Table 1. To ensure the overall accuracy of the positioning system and the success rate of positioning, the spacing of the QR code beacons is reduced to 2 m during the actual deployment. According to the layout of the room, a rectangular spatial coordinate system is established, and QR codes are deployed at the four positions marked in Table 1 so that the recognition range covers the whole room. To verify the effectiveness of the recognition-range formula and the deployment scheme, we design two experiments. Experiment 1: To verify the positioning accuracy at fixed positions within the recognition range, we carry out accuracy tests at different positions within the recognition range of the four QR codes. The test results are shown in Fig. 7. The error at the edge of the recognition range is slightly larger than at its center: the positioning error directly below a QR code is less than 6 cm, and the error near the edge of the recognition range is less than 10 cm. The overall average positioning error is 8.32 cm, which is consistent with the theoretical error of the algorithm, so positioning accuracy within the recognition range is not degraded. Experiment 2: The recognition rate is tested in the positioning scene (Fig. 8). A Raspberry Pi 3B is used to build a robot platform, with the camera mounted on the robot, which moves around the room at a constant speed of 0.33 m/s along a straight or circular route. During the motion, positioning data are collected at a constant time interval, and the positioning program counts the number of successful positionings.
Whenever the program successfully identifies a QR code and outputs a positioning result, and the computed position deviates from the actual position or route by no more than 15 cm, it is counted as a successful positioning, and the deviation from the route is taken as the positioning error. The test results (Table 2) show that when the robot moves along a straight line or a circular route, the QR code recognition rates are 92.31% and 91.59%, respectively. Within the recognition range of the QR code, the positioning recognition rate meets the requirements. The cumulative distribution function curves of the positioning error at fixed positions and during motion are shown in Fig. 9: the error distribution is slightly better for straight-line motion than for circular motion. Moreover, the errors in both motion modes change little compared with the average error at fixed positions, and 90% of the positioning results have errors below 9 cm. This shows that positioning accuracy is essentially unaffected when the robot moves within the QR code recognition range, and that the QR code beacon deployment scheme designed in the experiment meets the requirements of positioning accuracy and positioning success rate.
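The two statistics used above, recognition rate and the error percentile read off the CDF, are straightforward to compute from the logged fixes. A minimal sketch with synthetic error samples (the paper's raw logs are not available, so the data here are placeholders):

```python
import numpy as np

# Hypothetical positioning-error samples in cm for successful fixes.
rng = np.random.default_rng(0)
errors_cm = rng.uniform(2.0, 9.0, size=200)

# Hypothetical counters from the logging program: 192 successes in 208 frames
# reproduces the reported 92.31% straight-line recognition rate.
attempts, successes = 208, 192
recognition_rate = successes / attempts

# The 90th percentile of the empirical error CDF.
p90 = np.percentile(errors_cm, 90)

print(f"recognition rate: {recognition_rate:.2%}")
print(f"90% of errors below {p90:.1f} cm")
```

A fix only counts toward `successes` if its deviation from the ground-truth route is within the 15 cm threshold defined above.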
We study the recognition range of an indoor visual positioning system based on QR codes and the deployment scheme of QR code beacons. To improve the deployment efficiency of the QR codes and the coverage of the positioning system, we first define the recognition range of the QR code and derive a formula for the positioning recognition range from the performance of the QR code recognition algorithm, the marker size, the camera resolution, and other parameters. Then, a deployment strategy of QR code beacons is given for the positioning scenario, and the validity of the recognition range and the beacon deployment scheme is verified by experiments. The results show that, within the recognition range of the QR code, the average positioning error at fixed positions is 8.32 cm. With the positioning system deployed on a robot moving in linear and circular motions through the beacon scene, the QR code recognition rates are 92.31% and 91.59%, respectively, which meets the positioning coverage requirements, and the positioning accuracy is almost consistent with the average error at fixed positions. Verified by these experimental tests, our QR code beacon deployment strategy performs well and improves the positioning efficiency and reliability of the P4P-based QR code indoor positioning algorithm.
Pressure-sensitive paint (PSP) technology is a non-contact optical pressure measurement method used extensively for surface pressure measurement of parts in wind tunnel environments. A surface coated with PSP fluoresces under excitation light, and the pressure can be inverted using the Stern-Volmer formula. This formula requires the ratio of the wind-on image to the wind-off image, but displacement and non-rigid deformation of the part in the wind tunnel cause computational errors when non-corresponding points are divided. Consequently, accurate registration of wind-on and wind-off images is fundamental to processing PSP experimental data. Typical PSP images comprise only two distinct components, a bright light-emitting region and a black background region, leading to sparse image features and relatively few feature points, which makes it difficult to apply standard registration methods directly. Moreover, as the number of images in a single experiment exceeds tens of thousands, conventional non-rigid registration methods are often too slow to meet fast registration requirements. There is therefore an urgent demand for a method that registers images accurately and swiftly without relying on marker points.
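The Stern-Volmer inversion mentioned above is, in its common form, I_ref/I = A + B·(P/P_ref), applied pixel-wise to registered image pairs. A minimal sketch; the calibration coefficients A and B below are illustrative placeholders, not values from the paper:

```python
def pressure_from_intensity(i_windless, i_windy, p_ref, a=0.15, b=0.85):
    """Invert surface pressure from a PSP intensity ratio via Stern-Volmer.

    I_ref / I = A + B * (P / P_ref)  =>  P = P_ref * (I_ref / I - A) / B

    I_ref is the wind-off (reference) intensity, I the wind-on intensity.
    A and B are temperature-dependent calibration constants; the defaults
    here are placeholders chosen so that A + B = 1.
    """
    ratio = i_windless / i_windy
    return p_ref * (ratio - a) / b

# Sanity check: equal intensities give ratio 1, so P recovers P_ref
# whenever A + B = 1 (reference condition).
p = pressure_from_intensity(1000.0, 1000.0, 101325.0)
print(p)
```

This pixel-wise division is exactly why registration matters: if the wind-on pixel does not correspond to the same surface point as the wind-off pixel, the ratio, and hence the inverted pressure, is wrong.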
To meet the demand for accurate and fast registration of PSP images, we propose a registration method based on unsupervised learning. The method requires no a priori information and directly learns an end-to-end mapping from image pairs to deformation fields. The registration network incorporates a multi-scale, multi-cascade structure that enables coarse-to-fine registration of PSP images. Furthermore, we design a new loss function based on the structural similarity of images, which maximizes the similarity between the registered image and the input fixed image. In our study, two sets of PSP experimental images of typical parts, each comprising 20000 image pairs, are used for training and testing the registration network. The images are 640×272, 16-bit grayscale. The registration process uses only image-pair information, without external supervisory information. To assess the efficacy of the method, we compare it with conventional algorithms currently used for PSP image registration: feature-based matching algorithms, the MI-Bspline algorithm combining gray levels and B-splines, and deep learning-based registration models such as VoxelMorph, CycleMorph, BIRGU-Net, and LRN. The methods are evaluated in terms of registration accuracy and time. Registration accuracy is measured by three common quantitative metrics: root mean square error (RMSE), normalized correlation (NCC), and target registration error (TRE).
Comparing our method with the conventional methods, the registration results for shell parts and thin-plate parts in Figs. 6 and 7 show that all five regions of the two sets of experimental data are essentially registered by PIR-Net. This suggests the method is more robust in handling complex scenes and large-deformation registration in PSP images. To further quantify registration accuracy, we evaluate the results with the RMSE and NCC indices (Tables 1 and 2). The tables indicate that PIR-Net significantly outperforms the comparison methods on both metrics: compared with the conventional methods, the RMSE index improves by 51.6% and the NCC index by 181.7%. This improvement is primarily attributed to the non-rigid deformation and feature sparsity of PSP images, which neither feature matching nor iterative optimization-based methods handle effectively, leading to sub-optimal registration. Compared with other deep learning-based registration methods, PIR-Net demonstrates superior adaptability in large-deformation regions owing to its multi-scale network structure and attention mechanism, yielding a 16.4% improvement in NCC and a 19.1% improvement in RMSE. To further illustrate the algorithm's error control, we compare the maximum error position and the average error in the experiments. Owing to the combination of the smoothing-term constraint and the attention mechanism, PIR-Net exhibits a more consistent error distribution with a relatively smooth error bound (Figs. 10 and 11). The average registration time of each method is reported in Table 5; our method outperforms the other conventional methods.
Compared with other deep learning methods, the registration time of PIR-Net is slightly longer, primarily because of the multi-scale registration. However, trading a very small increase in registration time for higher registration accuracy is a good compromise between time performance and accuracy, and is more practical.
We introduce an unsupervised learning-based method for PSP image registration. The method directly learns the end-to-end mapping from image pairs to deformation fields. It adopts a multi-scale network structure and a coarse-to-fine registration strategy to address the large offsets and non-rigid deformations arising in wind tunnel environments. Additionally, it incorporates a novel loss function based on image similarity, which improves registration in feature-sparse scenarios. Compared with typical registration methods such as MI-Bspline and VoxelMorph on two sets of PSP images, the experimental results show that our method achieves far better registration performance in visual evaluation and in the RMSE, NCC, and TRE metrics, while keeping runtime practical. This method provides a reliable solution to the PSP image registration problem.
In the rapidly evolving domain of warehouse logistics, the deployment of automated guided vehicles (AGVs) with advanced navigation capabilities is becoming increasingly essential. This research is driven by the need to address significant challenges in existing laser-inertial navigation systems used in warehouse environments. These challenges include susceptibility to inertial bias drift, compromised real-time performance, and reduced pose estimation accuracy, particularly in areas with repetitive structures or dynamic environmental changes. The study aims to not only enhance the operational efficiency of AGVs but also significantly contribute to the broader field of industrial automation and intelligent robotics systems. By improving the precision and reliability of AGV navigation, the research endeavors to optimize warehouse operations, reduce operational costs, and increase throughput. This objective is critical in addressing the limitations of current navigation systems and ensuring the adaptability and effectiveness of AGVs in complex warehouse settings, thereby contributing to the evolution of automated logistics and enhancing overall supply chain management.
A comprehensive methodology was developed to enhance AGV navigation in warehouse environments, integrating a multimodal fusion of laser light detection and ranging (LiDAR), inertial measurement unit (IMU), and quick response (QR) code technologies. This fusion approach was meticulously engineered to synergistically combine the unique strengths of each sensing modality, thereby overcoming the inherent limitations of traditional laser-inertial navigation systems.
In the warehouse setting, QR codes were strategically affixed to the floor at intervals of 1200 mm. When an AGV scanned a QR code, the system received precise positional and angular information, providing an essential absolute reference for recalibrating the AGV's navigational state. Furthermore, IMUs were uniquely calibrated using QR code data to compensate for inertial bias drift, significantly enhancing inertial measurement accuracy. In addition to considering inertial residuals, a reprojection error between the 3D point q at position x_q and frame i was defined, incorporating error analysis from the downward-facing sensor for QR codes on top of the laser reprojection error.
According to the bundle adjustment for LiDAR mapping (BALM) algorithm, an innovative layered local bundle adjustment (BA) optimization process integrated with QR code data was introduced. This process streamlined the BA procedure, markedly reducing computational load and optimization time. The optimization process was structured from the bottom layer to the top, with each layer consisting of a set number of LiDAR frames. Keyframes within these layers, particularly those identified through QR code scans, were used to construct a more precise and consistent global trajectory for the AGV. During the layered BA optimization, specific keyframes within each window were maintained without participating in the BA optimization. Following this layered optimization, a top-down pose graph optimization was implemented, crucial for minimizing cumulative pose estimation errors that might have propagated through the bottom-up optimization process. This phase of the optimization considered common features within each window of frames, particularly focusing on frames associated with QR code scans. The fixed positions from QR code scans ensured high confidence in pose estimates, significantly enhancing the overall accuracy of the navigation system. This dual optimization process effectively addressed scale drift and time-consuming issues commonly encountered in incremental mapping methods, ensuring a more accurate and efficient navigation system for AGVs. The integration of QR code data not only provided high positional accuracy but also contributed to the robustness and reliability of the AGV navigation system in complex warehouse environments.
In our research, we address the challenge of inertial bias drift by proposing an IMU pre-integration model integrated with QR code data. This model utilizes the rigid constraint information provided by QR codes to update inertial biases. By considering inertial residuals and jointly optimizing the errors from the laser-inertial and downward-facing camera systems, we establish a robust initial state estimation using the absolute pose derived from the QR codes captured by the downward-facing camera. This approach ensures a solid starting point for the joint optimization, accelerating convergence and enhancing the accuracy of the estimates. Experimental validations have been conducted on linear and rectangular trajectories, and the performance of our method is compared with open-source algorithms such as LeGO-LOAM, BALM, LIO-SAM, and LIC-Fusion2. Notably, as the trajectory length increases from 24000 mm to 60000 mm, the absolute translational and rotational errors of our method grow by only approximately 2 mm and 0.5°, respectively. This represents a one- to four-fold improvement in overall positioning accuracy (Tables 2 and 3).
To address the issue of real-time performance, we propose a globally consistent optimization model, selectively incorporating keyframes and QR codes to execute a layered local BA optimization from the bottom layer to the top. This process significantly enhances the consistency and precision of LiDAR mapping and AGV positioning. During the layered optimization process, the poses of specific keyframes (derived from QR code solutions) are held constant and excluded from the optimization, ensuring accuracy while significantly reducing optimization time. In our experimental setup within a warehouse logistics environment, our algorithm demonstrates a substantial improvement in time efficiency, outperforming LeGO-LOAM, BALM, LIO-SAM, and LIC-Fusion2 by 49.40%, 20.03%, 19.95%, and 37.29%, respectively (Table 4). Finally, leveraging factor graph optimization, we propose a globally consistent navigation framework that fuses laser-inertial and QR code data. This framework integrates pre-integration factors, tracking factors, loop closure factors, and QR code factors into the factor graph model, realizing multi-level data fusion. This approach effectively reduces cumulative errors and provides a globally consistent AGV navigation outcome (Fig. 4). This navigation system represents a significant advancement in AGV technology, offering enhanced accuracy, efficiency, and consistency in complex warehouse environments.
To address the challenges inherent in laser-inertial navigation methods in warehouse logistics environments, such as inertial bias drift, poor real-time performance, and low pose estimation accuracy in degraded scenarios, we present a precise laser-inertial-QR fusion navigation method for autonomous and accurate AGV navigation in warehouse logistics settings. By integrating the IMU pre-integration model with QR data and employing a globally consistent optimization approach, we successfully estimate and correct inertial biases while reducing optimization time. The tight coupling of LiDAR, IMU, and QR code data facilitates multi-level data fusion, significantly enhancing positioning accuracy and robustness. The method has been extensively compared with leading laser-inertial navigation methods on a developed navigation platform. Experimental results demonstrate the superior time efficiency and reduced pose errors of the algorithm, which maintains translational and rotational errors below 0.02 m and 2°, respectively, regardless of the trajectory length.
Future research will explore deeper multi-sensor fusion by integrating visual sensors to further enhance navigational accuracy. This includes capturing feature points using high-precision cameras and synergistically optimizing them with laser and IMU data using visual SLAM techniques, thereby strengthening system performance in variable lighting conditions or feature-deprived scenarios. Additionally, the development of a new real-time adaptive calibration method within the multi-sensor fusion algorithm is considered. This method aims to utilize real-time sensor data for continuous adjustment of sensor model parameters. The key lies in employing advanced filtering techniques, such as Kalman filters or particle filters, to estimate and correct sensor errors in real time, potentially achieving significant improvement in system accuracy and reliability.
As a third-generation semiconductor material emerging alongside SiC, GaN has become a hot topic in high-temperature and high-power microwave devices, laser devices, and optoelectronic devices due to its excellent characteristics, and it has been widely used in microwave communication, lasers, detectors, and ultraviolet light-emitting diodes. Doping is a direct and effective route to material modification that can control and improve the thermoelectric, photoelectric, and magnetic properties of materials, endowing them with new characteristics and extending their applications. Materials based on rare earth elements have excellent optical, electrical, magnetic, and catalytic properties and form the foundation of many new functional materials; rare earth doping is therefore expected to improve GaN's visible-light absorption. We study the electronic structures and optical properties of GaN doped with different concentrations (atomic number fraction) of Lu using the first-principles plane-wave ultrasoft pseudopotential method. The calculation results provide theoretical support for device applications of GaN semiconductor photoelectric materials doped with the rare earth element Lu.
We adopt the CASTEP software package, using first-principles calculations based on density functional theory. We use the projector augmented wave method for the pseudopotentials and the generalized gradient approximation (GGA) functional of Perdew, Burke, and Ernzerhof for the exchange-correlation interaction. We use a plane-wave expansion with a cut-off energy of 450 eV and the conjugate gradient method to optimize the lattice constants and atomic positions of the various models. The Monkhorst-Pack k-point grid is set to 4×4×2 for bulk models. As the GGA method underestimates the band gap of materials, we use the GGA+U plane-wave pseudopotential method to correct the band gap. A 2×2×2 supercell model is built, containing 16 Ga atoms and 16 N atoms, 32 atoms in total. To improve the accuracy of the calculations, we perform a cut-off energy convergence test on the supercell systems. Considering the symmetry of the GaN crystal, we study the stability of different spatially ordered configurations with the same doping amount at concentrations of 12.5% and 18.75%.
From the formation energies (Table 1), it can be seen that, except for the Ga0.9375Lu0.0625N system, the formation and binding energies of the doped systems are all negative, indicating that doping enhances the structural stability of intrinsic GaN. The formation energy of the Ga0.9375Lu0.0625N system is positive, making this doping more difficult to achieve than the other concentrations. At all concentrations, the Lu-doped GaN systems show direct-band-gap p-type semiconductor characteristics (Fig. 4), and the band gaps are all narrowed. The narrowed band gap facilitates electron transitions, thereby improving the optical properties of the GaN system. The absorption edges of Lu-doped GaN at the four concentrations are red-shifted (Fig. 7), indicating improved light response. Intrinsic GaN has a small absorption coefficient in the visible range and low utilization of visible light; when the Lu doping concentration is 25%, the Ga0.75Lu0.25N system forms a wider visible-light absorption region.
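The stability criterion used above can be sketched with the standard substitutional-defect expression for formation energy. All numerical values below are illustrative placeholders (the computed energies are in the paper's Table 1); the sign convention is the point: negative means the substitution is energetically favorable.

```python
def formation_energy(e_doped, e_host, mu_lu, mu_ga, n_sub=1):
    """Formation energy of n_sub Lu atoms substituting Ga in a GaN supercell.

    E_f = E(doped supercell) - E(host supercell) + n*mu_Ga - n*mu_Lu

    where mu_Ga / mu_Lu are the chemical potentials of the removed and added
    species.  A negative E_f indicates the doped structure is more stable
    than intrinsic GaN; a positive E_f (as for Ga0.9375Lu0.0625N) means the
    doping is harder to realize.  All values here are placeholder eV numbers.
    """
    return e_doped - e_host + n_sub * mu_ga - n_sub * mu_lu

ef = formation_energy(e_doped=-310.4, e_host=-305.0, mu_lu=-4.5, mu_ga=-3.0)
print(ef)
```

The same expression, with n_sub = 2 or 3 for the 12.5% and 18.75% cells, is what allows the different spatially ordered configurations at a fixed doping amount to be ranked by stability.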
We calculate the electronic structures and optical properties of intrinsic GaN and Lu-doped Ga1-xLuxN (x=0.0625, 0.125, 0.1875, 0.25) at different doping concentrations using the first-principles plane wave ultrasoft pseudopotential method under density functional theory. In addition, we study the stability of the same doping and different spatially ordered occupancy architectures when the Lu doping concentration is 12.5% and 18.75%. The calculation results show that the values of lattice parameters of the Lu-doped GaN are increased, and the band gap values of the doped GaN are reduced compared to the intrinsic band gap (3.40 eV) due to the shallow energy level impurities induced by the doping of Lu. Compared with the intrinsic GaN, the static dielectric constants of the Lu-doped GaN increase and even reach 5.42 when the doping concentration of Lu is 25%. The imaginary parts of the dielectric function and the absorption spectrum of the Lu-doped GaN shift in the low-energy direction. The red-shift phenomenon occurs which extends the absorption spectral range and enhances the photocatalytic performance of GaN.
The specific detection of DNA sequences plays a vital role in disease diagnosis, drug development, environmental protection, and other fields. Common DNA detection methods include electrochemical, semiconductor, and optical detection. Although electrochemical detection features high precision and strong practicability, it suffers from high cost and complicated detection processes. Semiconductor detection can monitor reaction changes in real time, but the experimental operation is difficult and imposes strict requirements on sample volume. Optical fiber biosensors based on the whispering gallery mode (WGM) effect have been extensively studied for their small size, high detection accuracy, and fast response. To meet the measurement requirements of in-situ DNA sensing, we propose an optical fiber optofluidic sensor based on WGMs in a hollow-core fiber.
The proposed sensor is prepared by coupling a tapered optical fiber and a hollow-core fiber. The sensor mainly employs the evanescent field generated by the tapered optical fiber to excite the hollow-core fiber resonant cavity to generate WGM for detection. From the perspective of the sensor composition, the diameter of the tapered fiber, the thickness of the resonant cavity, and the coupling distance between the tapered fiber and the resonant cavity will all influence the experimental detection results. Meanwhile, we explore the proposed DNA sensor from both simulation and experiment aspects. By conducting simulation analysis via Comsol software, we first obtain how the above three factors affect the experimental results. The experiment is completed under the guidance of the simulation. Additionally, we adopt the hollow-core fiber as the resonant cavity and the internal air hole of the hollow-core fiber as the microfluidic channel. In the experiment, the silanization method is utilized to immobilize probe DNA (pDNA) on the surface of the resonant cavity for subsequent detection of complementary DNA (cDNA).
The simulations support three conclusions. First, a smaller taper-waist diameter produces a stronger evanescent field. Second, as the wall thickness of the resonant cavity decreases, the electric field in the cavity increases and stronger WGMs are excited. Third, for a fixed taper diameter and cavity wall thickness, WGMs appear once the coupling gap between the tapered fiber and the resonant cavity is reduced below a certain distance. The experimental results confirm that reducing the cavity wall thickness improves the sensor sensitivity: when the wall thickness is 4.5 μm, the refractive index sensitivity is 141 nm/RIU, so we use hydrofluoric acid to etch the cavity wall thinner. When the wall thickness is 2 μm, the refractive index sensitivity reaches 206 nm/RIU, about 1.5 times that of the 4.5 μm resonator. Wall thicknesses below 2 μm are unsuitable for practical experiments: the wall thickness of the hollow-core fiber is not completely uniform after fabrication, so etching below 2 μm easily perforates the thinner parts of the cavity, and the pump used to introduce the liquid under test exerts pressure on the resonant cavity that can rupture the microfluidic channel when the wall is too thin. A cavity with a wall thickness of 2 μm meets the requirements for detecting the local refractive index changes on the fiber surface caused by DNA hybridization.
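The refractive index sensitivity quoted above is defined as the resonance-wavelength shift per unit refractive-index change, S = Δλ/Δn in nm/RIU. A minimal sketch; the 0.01 RIU step below is an illustrative assumption chosen so the placeholder shift reproduces the reported 206 nm/RIU:

```python
def ri_sensitivity(shift_nm, delta_n):
    """Refractive-index sensitivity of a WGM resonator in nm/RIU:
    resonance-wavelength shift divided by the refractive-index change."""
    return shift_nm / delta_n

# Illustrative: a 2.06 nm resonance shift for a 0.01 RIU index change gives
# the 206 nm/RIU figure reported for the 2-um-wall resonator; 1.41 nm per
# 0.01 RIU likewise reproduces the 141 nm/RIU of the 4.5-um wall.
s_thin = ri_sensitivity(2.06, 0.01)
s_thick = ri_sensitivity(1.41, 0.01)
print(s_thin, s_thick, s_thin / s_thick)
```

The ratio of the two sensitivities (about 1.46) quantifies the gain from etching the cavity wall from 4.5 μm down to 2 μm.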
Specific detection of complementary DNA is achieved by immobilizing pDNA inside the microfluidic channel of the hollow-core fiber resonator. We prepare cDNA solutions at five concentrations, 10 nmol/L, 50 nmol/L, 100 nmol/L, 200 nmol/L, and 1 μmol/L, for concentration-gradient detection. Between 10 and 100 nmol/L, because the amount of pDNA on the inner surface of the resonant cavity is sufficient, the response changes linearly with increasing cDNA concentration. At 200 nmol/L, the remaining pDNA can no longer fully bind the cDNA, so the response deviates from linearity. As the cDNA concentration rises further, no free pDNA remains on the fiber surface, the sensor saturates, and the spectrum no longer shifts. The proposed sensor therefore has a linear detection range of 10-100 nmol/L, a sensitivity of 0.56 pm/(nmol/L), and a linearity of 0.994, with good stability and selectivity.
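The sensitivity and linearity figures above come from a straight-line fit of wavelength shift versus concentration over the linear range. A sketch with hypothetical shift values (the paper's measured shifts are not given in the text; these placeholders are merely consistent with a slope near 0.56 pm/(nmol/L)):

```python
import numpy as np

# Hypothetical resonance shifts (pm) at the three concentrations in the
# linear range 10-100 nmol/L; values invented for illustration.
conc = np.array([10.0, 50.0, 100.0])   # nmol/L
shift = np.array([5.7, 28.1, 56.0])    # pm

# Least-squares line: the slope is the sensitivity in pm/(nmol/L).
slope, intercept = np.polyfit(conc, shift, 1)

# Linearity quantified as R^2 of the fit.
r2 = np.corrcoef(conc, shift)[0, 1] ** 2

print(f"sensitivity: {slope:.2f} pm/(nmol/L), R^2 = {r2:.3f}")
```

Points at 200 nmol/L and above are excluded from the fit because, as described above, the response there is saturated rather than linear.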
In this paper, we propose and demonstrate a high-sensitivity optical fiber DNA sensor. The WGM fiber probe is fabricated by embedding an etched hollow-core fiber into the tapered fiber structure, and WGMs are excited in the resonator through efficient evanescent coupling with a tapered fiber of 1.2 μm waist diameter. Hybridization of the probe DNA with complementary DNA increases the effective refractive index inside the microchannel, which in turn shifts the transmission spectrum. We study the DNA response sensitivity, stability, and selectivity of the proposed sensor. The sensitivity achieved in our experiments is 0.56 pm/(nmol/L) over the DNA concentration range from 10 nmol/L to 100 nmol/L. Our WGM-based DNA sensor offers label-free detection, laying the foundation for applications of in-situ DNA detection in medical diagnosis and prognosis.
In recent years, since heart rate is one of the most important indicators of cardiovascular health, non-contact heart rate measurement methods have become highly attractive and popular in daily life. Non-contact imaging photoplethysmography (IPPG) has drawn much attention from biomedical researchers due to its non-invasive nature and freedom from high-performance hardware requirements. However, during non-contact imaging, where subjects are less constrained, IPPG measurement results are susceptible to interference from rigid and non-rigid movements, such as head turning, smiling, speaking, and eyebrow raising, as well as from unstable lighting. To improve the IPPG technique, we propose a region of interest (ROI) selection method based on a concave lens deformation algorithm and skin color pixel clustering, together with an adaptive normalized least mean square (NLMS) filtering algorithm for the blood volume pulse (BVP). The proposed method improves the accuracy of ROI extraction in less constrained conditions and the filtering of non-physiological intensity fluctuations in the ROI. It also offers advantages in accuracy and stability in motion scenes and environments with large illumination variations, holding potential significance for non-contact heart rate monitoring in telemedicine, indoor fitness, psychological testing, and unmanned vehicles.
We obtain the subjects' heart rates by processing the facial video images. First, the facial skin color region is distorted and expanded by adopting the concave lens deformation algorithm to increase the percentage of the skin pixel region. Next, the K-means++ clustering algorithm selects skin pixels again and builds RGB channels to estimate BVP signals. Subsequently, the chrominance-based color space projection decomposition (CHROM) algorithm is applied to pre-denoise the above-mentioned BVP signal. Finally, the proposed adaptive NLMS algorithm is employed to filter out the interference of background light, and the heart rate is then measured by spectral analysis. In subsequent experiments, ablation experiments are conducted on the UBFC-rPPG dataset to verify that the improved ROI dynamic extraction method can enhance the accuracy of heart rate detection. In comparison experiments on the same dataset, the results prove that the proposed method possesses stronger robustness to color signal fluctuations caused by the subject's head movements and facial expressions. Additionally, the results of the lighting fluctuation experiment, in which the light intensity of a double-arm lamp is continuously adjusted to simulate a changing light scene, demonstrate the feasibility and effectiveness of the proposed method.
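The CHROM pre-denoising step projects the normalized RGB traces onto two chrominance axes and combines them so that intensity and specular motion artifacts largely cancel. The sketch below follows the standard CHROM formulation (X = 3R − 2G, Y = 1.5R + G − 1.5B, combined with a standard-deviation ratio); the RGB values are synthetic, and this is an illustration of the projection rather than the authors' exact implementation:

```python
import numpy as np

def chrom_bvp(rgb):
    """Minimal CHROM projection: rgb is an (N, 3) array of spatially
    averaged R, G, B values per frame. Returns a 1-D BVP estimate."""
    norm = rgb / rgb.mean(axis=0)             # temporally normalized channels
    r, g, b = norm[:, 0], norm[:, 1], norm[:, 2]
    x = 3 * r - 2 * g                          # chrominance signal X
    y = 1.5 * r + g - 1.5 * b                  # chrominance signal Y
    alpha = x.std() / y.std()                  # tuning ratio
    return x - alpha * y                       # motion-robust BVP estimate

# Example: a synthetic 1.2 Hz pulse riding mainly on the green channel
t = np.linspace(0, 10, 300)                    # 30 frame/s for 10 s
pulse = 0.01 * np.sin(2 * np.pi * 1.2 * t)     # ~72 min-1 heart rate
rgb = np.stack([0.6 + 0.3 * pulse,
                0.5 + pulse,
                0.4 + 0.2 * pulse], axis=1)
bvp = chrom_bvp(rgb)
```

The recovered trace is strongly correlated with the underlying pulse, whose dominant frequency then gives the heart rate after spectral analysis.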
In ablation experiments, the mean absolute error (MAE) of the improved ROI extraction method with the concave lens deformation algorithm and clustering algorithm amounts to 4.29 beats per minute (min-1), the standard deviation (SD) is 2.59 min-1, the mean absolute percentage error (MAPE) is 4.19%, and the Pearson correlation coefficient r is 0.66. Our improved ROI selection method achieves the optimum in all the above-mentioned indexes. Integrated with the concave lens deformation algorithm and clustering algorithm, the proposed improved ROI dynamic extraction method can improve the accuracy of heart rate detection in less-constrained conditions (Table 1). In the comparison experiments in motion scenarios, the MAE of the proposed method is 0.92 min-1, the MAPE is 1.57%, the SD is 2.43 min-1, and r is 0.65, which is better than other unsupervised methods. Compared with supervised learning methods, our method achieves low MAE and SD without the need for pre-learning and training (Table 2). Additionally, our proposed method has smaller confidence intervals, which means that it is more robust to color signal fluctuations induced by head movements and facial expressions of the subjects (Fig. 5). In the experiments with drastic lighting changes, the proposed method still possesses smaller MAE, MAPE, and SD than the others, proving the proposed adaptive NLMS method feasible and effective in scenarios with varying lighting conditions (Table 3). Bland-Altman analysis shows that the bias of our proposed method is minimal, with a 95% confidence interval from -7.8 min-1 to 7.8 min-1 (Fig. 6), indicating that our method is more robust in removing non-physiological signal fluctuations caused by illumination fluctuations.
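The agreement metrics quoted above have standard definitions, restated here as a small sketch. The heart-rate arrays are toy placeholders, not the experimental data:

```python
import numpy as np

def hr_metrics(hr_est, hr_ref):
    """Agreement metrics between estimated and reference heart rates:
    MAE and SD of the error in min-1, MAPE in %, and Pearson r."""
    err = hr_est - hr_ref
    mae = np.mean(np.abs(err))
    sd = np.std(err, ddof=1)                  # sample standard deviation
    mape = 100 * np.mean(np.abs(err) / hr_ref)
    r = np.corrcoef(hr_est, hr_ref)[0, 1]
    return mae, sd, mape, r

# Toy example with hypothetical estimates against a reference device
ref = np.array([62.0, 70.0, 75.0, 88.0, 95.0])
est = np.array([63.0, 69.0, 77.0, 86.0, 96.0])
mae, sd, mape, r = hr_metrics(est, ref)
```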
To deal with the interference caused by normal physiological motion and ambient light in the IPPG technique, we propose an ROI dynamic extraction method integrated with the concave lens deformation algorithm, K-means++ clustering algorithm, and an adaptive NLMS algorithm on the BVP signals to improve the heart rate measuring stability and accuracy of this technique. Firstly, the concave lens deformation algorithm is adopted to compress facial features in each image frame, which in turn increases the pixel area of the facial skin ROI. Secondly, the K-means++ clustering method is employed to re-screen the facial skin regions, build the ROI rich in physiological signals, and generate BVP signals with high signal-to-noise ratios. Thirdly, the CHROM algorithm is utilized to filter out the lighting interference caused by normal physiological motion, such as head movements and facial expressions, and further obtain first-filtered BVP signals. Fourthly, the adaptive NLMS algorithm based on the mean value of the first-filtered BVP signal is introduced for adaptively filtering out the non-physiological signals caused by illumination changes from this BVP signal. Finally, to verify the feasibility and effectiveness of our method, we carry out the ablation experiments and comparison experiments between different algorithms on the UBFC-rPPG dataset and our dataset respectively. The results demonstrate that our proposed method outperforms several popular methods in the IPPG technique and solves the difficulty of accurate heart rate measurement under scenarios with large disturbances.
The vertical-cavity surface-emitting laser (VCSEL) is a typical semiconductor laser widely employed in high-speed optical communication, optical sensors, pumped solid-state or fiber lasers, LiDAR, and structured light applications. The irradiance distribution of a VCSEL typically conforms to a Gaussian distribution. In various application scenarios, such as laser processing and laser illumination, it is necessary to achieve a uniform irradiance distribution on the target plane. Among various laser-shaping methods, freeform optical components have gained increasing popularity due to their high optical efficiency and flexibility in controlling light distribution. However, there is a paucity of literature on the use of freeform optical elements for shaping VCSEL beams, particularly on designing freeform surfaces to manipulate the irradiance distribution of VCSEL array modules. In this paper, we present the design of a freeform lens for single VCSEL sources and a freeform lens array for VCSEL arrays, tailoring their light distributions to achieve a uniform irradiance distribution on the target plane.
The design of a freeform shaping lens for VCSELs aims to achieve a uniform irradiance distribution of the output beam on the target surface. The front surface of the lens is aspherical, while the back surface is freeform. The rays emitted by the VCSEL are collimated by the aspherical surface, and the collimated beam is then incident on the freeform surface, which regulates the irradiance to produce a uniform distribution on the target plane. During the design process, a virtual surface is incorporated within the middle region of the freeform lens to establish a relationship between its energy distribution and that of the target plane. This enables the determination of the direction vectors of the incident and outgoing rays, as well as the normal vectors at the sample points of the freeform surface. By formulating a Poisson equation relating the sag of the freeform surface to the normal vectors, we solve it with the discrete cosine transform method and obtain the sag heights that achieve the desired performance. Due to the finite size of the VCSEL source, the rays emitted by the VCSEL still have a small divergence angle after aspherical collimation. The effect of this divergence angle on the uniformity of the irradiance distribution on the target surface is investigated: the uniformity drops to 85% when the residual divergence angle reaches 3°. A VCSEL array consists of several individual VCSEL modules, each equipped with a freeform lens. An evaluation function is constructed to guarantee simultaneous control of the uniformity and efficiency of the irradiance distribution of the VCSEL module array. The optimal spacing between these modules is obtained from the evaluation function by employing the antlion optimization algorithm. The optimized VCSEL array generates a uniform irradiance distribution on the target plane with high optical efficiency.
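The sag-from-normals step can be illustrated with a small solver: assembling a right-hand side f from the prescribed normal vectors turns the problem into the Poisson equation ∇²z = f, which the DCT diagonalizes under reflective (Neumann-type) boundaries. The sketch below shows only that numerical step; the grid, spacing, and right-hand side are placeholders, not the paper's design data:

```python
import numpy as np
from scipy.fft import dctn, idctn

def poisson_dct(f, dx=1.0):
    """Solve laplacian(z) = f with reflective boundaries using the
    type-II discrete cosine transform. In freeform design, f comes from
    the target normal-vector field and z is the surface sag (recovered
    up to an additive constant)."""
    ny, nx = f.shape
    fh = dctn(f, type=2, norm='ortho')
    # Eigenvalues of the 1-D second-difference operator under DCT-II
    kx = 2.0 * (np.cos(np.pi * np.arange(nx) / nx) - 1.0) / dx**2
    ky = 2.0 * (np.cos(np.pi * np.arange(ny) / ny) - 1.0) / dx**2
    denom = ky[:, None] + kx[None, :]
    denom[0, 0] = 1.0              # avoid dividing the zero mode
    zh = fh / denom
    zh[0, 0] = 0.0                 # fix the free constant (mean sag = 0)
    return idctn(zh, type=2, norm='ortho')
```

A quick self-check: applying the five-point Laplacian with edge-replicated boundaries to the returned z reproduces f (once f has zero mean, the Neumann compatibility condition).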
A freeform lens is specifically designed for a single VCSEL light source, featuring an aspherical front surface and a freeform back surface. The emitted beam from the VCSEL light source, with a waist of 0.1 mm and a divergence angle of 8°, is efficiently transformed into a square uniform spot of 10 mm×10 mm on the target plane, achieving an impressive irradiation uniformity of 93.9%. The second freeform lens is specifically designed for a VCSEL source with an emitting area of 1 mm×1 mm; however, it only achieves an irradiation uniformity of 53.5% on the target plane. Each VCSEL combined with the second freeform lens forms what we refer to as a VCSEL module. The optimal 3×3 array of modules is generated by employing the antlion optimization algorithm to determine the optimal spacing between VCSEL modules. With the implementation of an optimized VCSEL module array, we have successfully achieved a remarkable enhancement in irradiation uniformity, reaching up to 85.2% on the larger target plane of 30 mm×30 mm. Moreover, our study has demonstrated an impressive overall light efficiency level of 91.8% for the VCSEL array.
The relationship between the sags of the freeform surface and the normal vector of each sampling point is transformed into a Poisson equation in this paper, and the sags of the freeform surface are obtained using the discrete cosine transform (DCT) method. By employing this approach, a freeform lens can be designed to achieve uniform irradiance distribution on the target plane with a high level of uniformity reaching 93.9%. Furthermore, we address an optimization problem for VCSEL arrays by transforming it into an intensity homogenization problem through multiple image superposition. The analysis reveals that to achieve uniform irradiation distribution from VCSEL arrays, it is necessary to generate non-uniform irradiance distribution on edge regions by individual VCSEL modules. Based on nine VCSEL modules with the optimal spacing, we can achieve uniform irradiance distribution on the target plane with a uniformity of 85.2% and an optical efficiency of 91.8%. The forthcoming research will explore the impact of fabrication and assembly tolerances of the freeform lenses on irradiation uniformity.
Compared with visible-light systems, cooled infrared imaging optical systems perform better under severe weather conditions. Compared with uncooled infrared imaging optical systems, they offer higher detection sensitivity, longer viewing distances, and better image quality. Therefore, cooled infrared imaging optical systems are widely used in many fields, such as aerospace and military applications. Cooled infrared imaging optical systems with long focal lengths and large apertures suffer from long barrel lengths, large volume, and high cost. To solve these problems and achieve a cold shield efficiency of 100%, a catadioptric design is generally adopted, such as the Cassegrain-based catadioptric optical system. As sufficient theoretical guidance for determining the initial structure of such systems is lacking, we propose a method for determining the optimal values of the key parameters. We design a catadioptric cooled mid-wave infrared imaging optical system based on the Cassegrain form, which provides important theoretical guidance for determining the initial structure of this kind of system.
We derive calculation formulas expressed by three key parameters, namely the shading coefficient α, the magnification of the Cassegrain secondary mirror βsec, and the vertical magnification of the relay mirror group βrelay, for the initial structure parameters of the optical system, the T value characterizing the system length, and the primary spherical aberration and primary coma of the Cassegrain system. The variations of the difficulty of aberration correction and of the system compactness with α, βsec, and βrelay are analyzed through the derived formulas. Based on the contradictory relationship between the difficulty of aberration correction and the compactness of the system, the optimal value method of the key parameters is proposed. The initial structure of the optical system is determined by this method and further optimized in ZEMAX. A catadioptric cooled mid-wave infrared imaging optical system is designed, with a focal length of -600 mm and an F-number of 2. Finally, we perform a tolerance analysis on the optical system using the Monte Carlo statistical analysis method, which proves the correctness of the theory and the machinability of the optical system.
Combined with the derived calculation formulas for the T value of the optical system and the primary spherical aberration and primary coma of the Cassegrain, the variation curves of SⅠ, SⅡ, and the T value with α, βsec, and βrelay are given (Figs. 4-6). We also analyze how the system compactness and the difficulty of aberration correction change with α, βsec, and βrelay. Based on the contradictory relationship between the difficulty of aberration correction and the compactness of the system, we propose the optimal value method of the key parameters. The value of α should be as small as possible to ensure sufficient light intake and a compact system structure, and the value of βsec should be as large as possible to reduce the difficulty of aberration correction. Considering the contradictory relationship between the difficulty of aberration correction and the compactness of the system, the value of βrelay should be neither too large nor too small. Based on the optimal value method, the three key parameters are determined as α=0.3, βsec=-3, and βrelay=-0.5. The initial structure of the Cassegrain is determined from the values of α, βsec, and βrelay and optimized slightly in ZEMAX. The design results show that the initial structure of the Cassegrain determined according to the optimal value method needs only simple optimization to obtain good image quality (Fig. 7). The initial structure of the optical system is formed by connecting the relay mirror group and the small-aberration Cassegrain (Fig. 8) and is optimized further. We obtain a catadioptric cooled mid-wave infrared imaging optical system with a long focal length and a large aperture, composed of the Cassegrain and a relay mirror group with 6 lenses (Fig. 9). The optical system is compact, with a total length of 428 mm. Compared with the initial structure, the value of βrelay decreases, which proves that the barrel length can be reduced by reducing the value of βrelay.
Although the aberration of Cassegrain increases significantly, the residual aberration can be fully compensated by the relay mirror group. At 33 lp/mm, the modulation transfer function (MTF) value of each field of view is greater than 0.4 (Fig. 10), and the imaging quality of the optical system is ideal. The results of tolerance analysis of the system by Monte Carlo statistical analysis show that more than 98% of the samples have MTF values greater than 0.2 and more than 90% have values greater than 0.3. The imaging quality of the optical system meets the requirements and this system is machinable.
Aiming at the design of a catadioptric optical system based on the Cassegrain form, we propose an optimal value method for the key parameters. The method provides theoretical guidance for selecting the key parameters when determining the initial structure of this kind of optical system, and it solves the problems of an overly long structure and difficult aberration correction caused by improper values of the key parameters. The initial structure of the Cassegrain is slightly optimized in ZEMAX. The results show that the system obtained by this method meets the design requirement of compactness and reduces the difficulty of aberration correction. After optimizing the initial structure, we design a catadioptric cooled mid-wave infrared imaging optical system with a long focal length and a large aperture, whose structure is compact with a total length of 428 mm. The MTF value of each field of view is greater than 0.4 at the Nyquist frequency, and the root mean square (RMS) spot radius of each field of view is less than 4 μm, indicating that the imaging quality of the optical system is ideal. The results of the Monte Carlo tolerance analysis show that more than 98% of the samples have MTF values greater than 0.2 and more than 90% have values greater than 0.3. Therefore, the imaging quality of the optical system meets the requirements, and the system is machinable. The design results show that when designing a catadioptric optical system based on the Cassegrain form, the initial structure can be determined by the proposed optimal values of the key parameters, and an optical system with ideal image quality and a compact structure can be obtained by conventional optimization.
The development and utilization of new energy sources have always been an important field of research. As a green and renewable energy source, solar energy provides an effective way to alleviate the energy crisis. To date, many methods have been proposed for converting solar radiation into other forms of energy, including photovoltaics and solar cells, light-driven heat generation, thermoelectric power generation, solar steam generation, seawater desalination, and photochemical and photocatalytic reactions. Notably, efficient solar energy capture is the key to realizing these applications. Therefore, the ultimate goal in investigating solar absorbers is to absorb solar radiation completely over the entire spectral range while employing as little photosensitive material as possible. In the past few years, several methods for the efficient absorption of solar radiation have been investigated. For example, black paint is widely adopted, but it exhibits high absorptivity only at ultraviolet and visible wavelengths, wasting about 28% of the solar energy. With technological advancement, metamaterials have opened up many new ways to manipulate electromagnetic waves; they possess many unique optical properties and have been shown to control the polarization state, amplitude, and phase of electromagnetic waves. Changing the light amplitude is one way to control light absorption. Therefore, it is important to study metamaterial-based perfect absorbers for solar energy.
We design a solar absorber with a multi-layer hollow-disk stacked structure based on a metal-insulator-metal (MIM) resonator, utilizing GaAs and amorphous GST (A-GST). The designed solar absorber is simulated and theoretically analyzed using the finite-difference time-domain (FDTD) method and the data analysis software Matlab. First, the effects of the number of structural layers and of the dielectric material in each layer (GaAs or A-GST) on the absorption are analyzed, and the structural parameters are optimized to achieve high absorptivity and a broad operating bandwidth. Second, the phase parameters, effective impedance, and electromagnetic field strength and vector distributions at the four absorption peaks are analyzed to investigate the physical absorption mechanism. Then, the oblique-incidence response from 0° to 50° is analyzed to further explore the practicality of the absorber. Finally, the structure's ability to absorb and convert solar energy is evaluated by calculating the solar-spectrum-weighted absorption efficiency and the effective thermal emissivity.
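The solar-spectrum-weighted absorption efficiency is the absorptivity averaged over wavelength with the solar spectral irradiance as the weight, eta = ∫A(λ)I(λ)dλ / ∫I(λ)dλ. The sketch below evaluates this ratio numerically; as a stand-in for tabulated AM1.5 data it uses a 5800 K blackbody shape (units cancel in the ratio), and the flat 97% absorptivity is a toy profile, not the simulated spectrum:

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal integration (kept explicit for NumPy-version safety)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def solar_weighted_efficiency(wl, absorptivity, irradiance):
    """eta = integral(A * I) / integral(I) over the operating band."""
    return trapz(absorptivity * irradiance, wl) / trapz(irradiance, wl)

wl = np.linspace(0.3, 4.0, 500)                 # wavelength, um
h, c, kB, T = 6.626e-34, 3.0e8, 1.381e-23, 5800.0
lam = wl * 1e-6                                  # wavelength in meters
irr = lam**-5 / (np.exp(h * c / (lam * kB * T)) - 1.0)  # Planck shape
A = np.full_like(wl, 0.97)                       # toy flat absorptivity
eta = solar_weighted_efficiency(wl, A, irr)
```

With a flat 97% absorptivity the weighted efficiency is exactly 0.97, which is a convenient sanity check; replacing A with a simulated absorption spectrum gives the reported figures of merit.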
The results show that both GaAs and amorphous GST (A-GST) are extremely helpful in the design of solar absorbers (Fig. 6). The structure shows an average absorptivity of 97.48% in the wavelength range of 0.3-2.5 μm and a solar-spectrum-weighted absorption efficiency of 98.02%. Meanwhile, the average absorptivity is 96.95% over the entire operating band from 0.3 μm to 4 μm, and the solar-spectrum-weighted absorption efficiency is 97.54% (Fig. 6). The bandwidth is 2.37 μm for absorptivity greater than 95% and 3.57 μm for absorptivity greater than 90% (Fig. 3). The symmetry of the structure gives it excellent polarization-independent properties (Fig. 1), which is very favorable for solar absorption. Additionally, in the wavelength range of 0.3-2.5 μm, the structure exhibits a stable response to changes in the incidence angle (Fig. 7). The designed solar absorber, characterized by ultra-broadband and high absorptivity, offers clear advantages in absorption bandwidth and absorption efficiency over previously reported results (Table 2), and its simple structure also greatly reduces fabrication complexity and cost.
We propose a high-absorptivity ultra-broadband solar absorber based on a strongly resonant three-layer MIM structure. The dielectric material of each layer of the MIM structure is analyzed, and the effect of choosing GaAs or A-GST on the absorptivity is discussed. The results show that both materials are highly applicable in the design of solar absorbers. The symmetry of the structure gives it excellent polarization-independent properties, which are very favorable for solar absorption. The designed solar absorber is characterized by ultra-broadband and high absorptivity with a simple structure, which greatly reduces fabrication complexity and cost. Therefore, it has potential applications in solar energy collection and conversion, photovoltaic devices, and thermal emitters.
In the realm of nanophotonics, the discovery of a geometric phase solely dependent on the rotation angle of metasurface elements has catalyzed a flurry of research activity. This breakthrough has facilitated the development of avant-garde scientific and technological applications, such as metasurface microscopy and compact spectrometers. However, a pivotal challenge lies in the inherent conjugate relationship between the geometric phases of circularly polarized lights of opposite chirality. This relationship manifests as phase values that are equal in magnitude but opposite in sign, precluding independent and unrestricted manipulation of the phase profiles for each chiral polarization state. Addressing this limitation requires transcending traditional paradigms of geometric phase control. Recent advancements propose a suite of innovative control methodologies, integrating phase mechanisms that are independent of structural rotation, including the resonance phase, transmission phase, and roundabout phase. This paradigm shift paves the way for a burgeoning research field focusing on multi-degree-of-freedom light field control. Our core aspiration is to achieve independent phase control for each circularly polarized light and ensure efficient coupling of these controlled light fields with on-chip photonic structures. By tackling these challenges, we aim to unlock new dimensions in light manipulation at the nanoscale, potentially revolutionizing applications in optical computing, advanced imaging, and beyond.
Based on the Jones matrix of the unit structure, our analysis elucidates the phenomenon of spin locking in the Pancharatnam-Berry (PB) phase. This is attributed to the conjugate relationship between the PB phases carried by cross-polarized circularly polarized light, which is pivotal in manipulating light phase properties at the nanoscale. A key factor in the efficient coupling of the on-chip light field is the behavior of the polarization coefficients: the cross-polarization coefficient is effectively zero, while the co-polarization coefficients exhibit opposite signs. These properties are instrumental in directing the light field's behavior. Furthermore, our metasurface design leverages the phase gradient at the interface to match the wave vector of surface plasmons. This approach facilitates efficient coupling of the on-chip light field, a critical factor in advanced photonic applications. Meanwhile, we introduce a novel strategy to break the PB phase conjugation inherent in cross-polarized circularly polarized light. By integrating a chirality-independent resonance phase with the PB phase, we can exert distinct phase controls over the two circularly polarized lights. This innovation marks a significant advancement in phase manipulation techniques. Additionally, our structural design adheres to mirror symmetry principles, which ensures that the cross-polarization coefficient remains zero, an essential condition for the intended phase control. By meticulously selecting parameters from our structure library, we tailor the co-polarization coefficients to differ by a π phase. This precision engineering is the key to the desired light manipulation at the nanoscale.
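The conjugation described above can be made explicit from the Jones matrix of a rotated anisotropic unit cell. Writing the eigen-transmissions as $t_u$, $t_v$ and the in-plane rotation as $\theta$ (a standard textbook derivation, not specific to this structure, with signs up to convention):

```latex
J(\theta) = R(-\theta)
\begin{pmatrix} t_u & 0 \\ 0 & t_v \end{pmatrix}
R(\theta),
\qquad
J(\theta)\,|L/R\rangle
= \frac{t_u + t_v}{2}\,|L/R\rangle
+ \frac{t_u - t_v}{2}\,e^{\pm i 2\theta}\,|R/L\rangle .
```

The cross-polarized outputs acquire geometric phases $\pm 2\theta$, equal in magnitude and opposite in sign — the spin-locked conjugation. Since $t_u$ and $t_v$ depend on the resonant geometry but not on $\theta$, superposing a rotation-independent resonance phase on the $\pm 2\theta$ terms allows the two spins to be addressed with different total phases, which is the strategy the text describes.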
We report a significant advancement in the efficient coupling of on-chip light fields. Our approach enables electromagnetic waves that typically propagate in free space to couple to on-chip surface plasmons with remarkable efficiency, reaching up to 80% within the 60-100 GHz frequency band. This high efficiency is a noteworthy result for on-chip photonic systems, potentially paving the way for more compact and efficient photonic devices. Furthermore, our findings include a groundbreaking development in PB phase manipulation. We have successfully "unlocked" the spin of the PB phase, a significant stride in nanoscale light manipulation that allows the directional propagation of left-handed and right-handed spins: they propagate to the left and right sides of the coupling structure, respectively. The left-handed spin culminates in a Bessel beam channeling 37% of the energy, while the right-handed spin forms a focused beam carrying 41% of the energy.
We present a novel coupler design that capitalizes on the dual degrees of freedom offered by the resonance phase and the geometric phase. A key innovation of our design is the precise setting of the rotation and opening angles of each unit cell. This meticulous configuration tackles a fundamental challenge in wavefront shaping: the conjugate wavefronts that the geometric phase imposes on circularly polarized light of different chiralities. Our approach effectively overcomes the limitations imposed by the conjugate phase under circularly polarized excitation. This advancement enables the wavefronts at the two ends of the coupling structure to be shaped independently, allowing unprecedented control between different chiral incident polarizations. Leveraging this methodology, we have successfully designed a dual-function wavefront-controlled coupler. The device exhibits remarkable capabilities in simultaneously focusing and generating Bessel beams. This multifunctionality is a significant stride forward in the field of wavefront manipulation. Additionally, the developed coupling device is compact and multifunctional, making it a promising candidate for functional design in photonic integration. Finally, our study not only puts forward a practical solution to a complex challenge in photonics but also opens new avenues for the advancement of integrated photonic devices. The broad potential applications of this technology range from optical computing to advanced imaging systems, heralding a new era in integrated photonics.
The dual-parameter detection of temperature and refractive index (RI) plays a crucial role in various fields, providing comprehensive information for real-time monitoring and control of diverse processes and thereby enhancing efficiency, quality, and reliability, particularly in areas such as medical diagnosis, industrial manufacturing, and food safety. Researchers have explored dual-parameter sensing of temperature and RI based on the Mach-Zehnder interferometer (MZI) principle, in which multiple optical fibers are fused, and some have proposed integrating MZIs with tilted fiber Bragg gratings (TFBGs) for the same purpose. However, the relatively complex manufacturing and detection processes involved in these sensors pose challenges to practical production applications. Photonic crystal fiber (PCF) sensors feature real-time detection and strong interference resistance. By adjusting the periodic arrangement of air holes in the PCF, the RI distribution across the fiber's cross-section can be modified to influence the fiber's transmission characteristics, and different sensing functionalities can be achieved by filling the PCF with various advanced liquid materials. The surface plasmon resonance (SPR) phenomenon significantly enhances the sensitivity, detection range, and other sensing performances of PCFs. We design a double-D-type SPR-PCF filled with the liquid crystal E7 to enable dual-parameter sensing of temperature and RI. The double-D-type structure extends and further enhances the D-type structure, bringing a larger proportion of the fiber core close to the external environment to improve the sensing performance.
The internal air holes of the PCF are arranged in a hexagonal pattern. The central large hole is filled with the temperature-sensitive liquid crystal E7 to form the fiber core. The upper and lower sides of the PCF are polished, and gold films deposited on the large open loops on both sides enable external SPR sensing. Selective deposition of gold films on the second layer of air holes achieves internal SPR sensing, further enhancing the SPR effect and sensing performance. In the PCF, when light strikes the gold-coated cladding interface at an incident angle greater than the critical angle, total internal reflection occurs and evanescent waves are generated along the interface. When the real part of the effective refractive index of the core mode matches that of the surface plasmon polariton (SPP) mode propagating along the metal-analyte interface, the phase-matching condition is satisfied and the SPR phenomenon occurs. The energy of the incident light is absorbed by the free electrons on the metal surface, leading to a sharp decrease in the reflected light intensity, and a pronounced loss peak appears at the resonance wavelength. The loss peak generated by SPR is highly sensitive to changes in the external environment: variations in the fiber core and cladding materials, the metal film, and the analyte can all shift the loss peak. Detecting the shift of the loss peak allows the RI and temperature to be measured.
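The phase-matching condition can be sketched numerically: for a flat metal/analyte interface, the SPP effective index is n_spp = sqrt(eps_m * eps_d / (eps_m + eps_d)), and SPR occurs near the wavelength where its real part crosses the core-mode effective index. The Drude parameters for gold below are illustrative round numbers, not fitted optical data, and the flat-interface formula is only a qualitative proxy for the full finite-element fiber model:

```python
import numpy as np

def spp_effective_index(eps_metal, n_analyte):
    """Real part of the SPP effective index at a metal/analyte interface:
    n_spp = sqrt(eps_m * eps_d / (eps_m + eps_d))."""
    eps_d = n_analyte**2
    n_spp = np.sqrt(eps_metal * eps_d / (eps_metal + eps_d))
    return n_spp.real

wl = np.linspace(0.5, 1.2, 200)                 # wavelength, um
omega = 2 * np.pi * 3e8 / (wl * 1e-6)           # angular frequency, rad/s
wp, gamma = 1.37e16, 1.0e14                     # Drude plasma/damping (illustrative)
eps_au = 1 - wp**2 / (omega**2 + 1j * gamma * omega)
n_spp = spp_effective_index(eps_au, n_analyte=1.50)
```

The SPP index stays above the analyte index (a bound surface mode) and decreases toward it at longer wavelengths, so the resonance wavelength moves as the analyte RI changes, which is the basis of the loss-peak readout.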
When the RI of the analyte is set to 1.5 and the temperature increases from 15 ℃ to 50 ℃, the relationship between the core-mode loss and the operating wavelength is calculated (Fig. 4). As the temperature rises, the loss peak increases significantly, and the resonance wavelength of the loss peak red-shifts. The ordinary and extraordinary refractive indexes of the liquid crystal are determined by its temperature coefficients, making it a nonlinear thermosensitive material. At temperatures of 45-50 ℃, the refractive index changes substantially, leading to a large movement of the resonance wavelength. This overall increase in temperature detection sensitivity comes at the cost of reduced linearity. Sensitivity segmentation allows sensitivity curves to be fitted over the actual detection range, reflecting the sensor's performance at specific temperatures more accurately. A second-order polynomial fit over 15-50 ℃ and a linear fit over 25-45 ℃ are performed separately to obtain the corresponding wavelength sensitivity and amplitude sensitivity (Fig. 5). The sensor exhibits high temperature sensitivity within the detection range and exceptionally high sensitivity between 45 ℃ and 50 ℃, enabling the required curves to be fitted within the actual detection range. With the temperature set to T=25 ℃, the relationship between the core-mode loss and the operating wavelength is calculated as the analyte RI varies from 1.48 to 1.55 (Fig. 6), and the corresponding wavelength sensitivity and amplitude sensitivity are obtained (Fig. 7).
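The segmented fitting can be sketched as follows. The wavelength values are hypothetical placeholders shaped to mimic the strong nonlinearity near 45-50 ℃, not the simulated data of Fig. 5:

```python
import numpy as np

# Hypothetical resonance wavelengths (nm) vs temperature (deg C)
T = np.array([15, 20, 25, 30, 35, 40, 45, 50], dtype=float)
lam = np.array([1480, 1515, 1552, 1590, 1628, 1667, 1710, 1890], dtype=float)

# Full range 15-50 C: second-order polynomial fit; wavelength
# sensitivity is the local slope d(lambda)/dT along the curve.
c2, c1, c0 = np.polyfit(T, lam, 2)
sens_full = 2 * c2 * T + c1               # nm per deg C at each point

# Linear sub-range 25-45 C: average wavelength sensitivity
m = (T >= 25) & (T <= 45)
s_lin, _ = np.polyfit(T[m], lam[m], 1)    # nm per deg C
```

The positive curvature of the full-range fit captures the sharply rising sensitivity near 45-50 ℃, while the linear sub-range fit gives the average sensitivity quoted for 25-45 ℃.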
The effects of varying the central liquid crystal hole diameter d0, small air hole diameter d1, and regular air hole diameter d2 on the fiber’s loss spectrum and resonance wavelength are studied separately (Figs. 8-10) to determine the optimal parameters. The sensor’s sensitivity is further enhanced after parameter adjustment, with good linearity (Fig. 11). Following parameter adjustments, the sensor demonstrates a maximum temperature sensitivity of 13.79 nm/℃ within the temperature range of 15-50 ℃, with a corresponding linear fitting constant of 0.99066. In the temperature range of 25-45 ℃, the sensor exhibits good linearity, with an average temperature sensitivity of 7.6 nm/℃ and a corresponding linear fitting constant of 0.98539. For analyte RI in the range of 1.48-1.55, the average RI sensitivity is 2904.76 nm/RIU, with a corresponding linear fitting constant of 0.98179.
We present a novel double-D-type liquid-crystal-filled SPR-PCF sensor enabling simultaneous detection of temperature and RI. Temperature and RI are measured by filling the central large air hole with the temperature-sensitive liquid crystal E7 and bringing the analyte solution into contact with the large open loops on both sides. By varying the environmental temperature and analyte RI, the transmission characteristics of the fiber are systematically investigated with the finite element method. The effects of the central liquid crystal hole diameter d0, small air hole diameter d1, and regular air hole diameter d2 on sensor performance are studied. Following parameter adjustments, the sensor demonstrates a maximum temperature sensitivity of 13.79 nm/℃ within the temperature range of 15-50 ℃, with a corresponding linear fitting constant of 0.99066. In the temperature range of 25-45 ℃, the sensor exhibits good linearity, with an average temperature sensitivity of 7.6 nm/℃ and a corresponding linear fitting constant of 0.98539. For analyte RI in the range of 1.48-1.55, the average RI sensitivity is 2904.76 nm/RIU, with a corresponding linear fitting constant of 0.98179. The designed double-D-type SPR-PCF sensor improves on the traditional D-type PCF and possesses enhanced sensitivity. Meanwhile, its capability for dual-parameter detection of environmental temperature and analyte RI extends its applicability to fields such as environmental monitoring, healthcare, and biochemistry.
Mercury (Hg) is a volatile and highly toxic heavy metal. Through atmospheric deposition and ocean circulation, mercury ions (Hg2+) easily enter fresh water or the ocean, harming the ecological environment and the human body. Developing a high-sensitivity mercury ion detection system is therefore of great significance for environmental protection and human health. In recent years, various methods for detecting mercury ions have been developed, including atomic absorption spectrometry, fluorescent probes, solid-phase extraction, and inductively coupled plasma optical emission spectrometry. However, these methods often require sophisticated instruments, specialized operators, and time-consuming procedures. In addition, some of them are prone to interference from other metal ions, which can lead to false positive results. DNA biosensor research has taken center stage over the past decade, marking a significant advance in heavy metal ion detection. Mercury ions selectively bind to two thymine (T) bases: during DNA hybridization, mercury ions can alter the structure of some double-helix bases and form a T—Hg2+—T mismatch that is more stable than the natural base pair. Specific detection of mercury ions as a heavy metal can thus be realized using the T—T mismatch mechanism of DNA molecules. However, such detection still faces low sensitivity, long response times, and limited accuracy. Therefore, the demand for rapid and precise mercury ion detection in clinical settings and environmental pollutant monitoring necessitates innovative approaches.
In order to achieve specific, ultra-trace, and in situ mercury ion detection, a tilted fiber Bragg grating-surface plasmon resonance (TFBG-SPR) biosensor enhanced by an aptamer and magnetic nanoparticles (MNPs) is designed. TFBG-SPR has a rich mode-field distribution and a narrow linewidth, which enables high-performance detection of physical changes and chemical reactions with little interference from the external environment. To detect mercury ions accurately, a streptavidin (SA) aptamer with T bases is specially designed to form T—Hg2+—T pairs with mercury ions, and SA-coated MNPs are used as signal amplification labels to improve ion detection performance. In the experiment, a 50 nm gold film is sputtered on the TFBG cladding region to excite SPR, and the end face of the probe is plated with a gold film to form a reflective sensing structure. Then, an aptamer with mismatched T bases is used to modify the TFBG-SPR grating surface for specific mercury ion recognition. Finally, mercury ions interact with the T bases to activate the aptamer sequence and connect the MNPs; the refractive index at the sensor surface then changes, and the amplitude of the TFBG-SPR cladding modes responds to this refractive index perturbation.
In this paper, a TFBG-SPR-aptamer-MNPs biosensor is proposed and used for mercury ion detection. As mercury ions interact with the T—T bases in the aptamer and magnetic nanoparticles bind, the refractive index at the sensor surface increases, the SPR envelope red-shifts, and the amplitude of the SPR envelope changes accordingly. According to the unique characteristic of the TFBG-SPR cladding modes (opposite amplitude changes occur on the two sides of the SPR envelope), the final amplitude variation can be measured by differential amplitude demodulation, realizing high-sensitivity, high-resolution, and high-Q measurement of refractive index changes. In the experiments, the amplitude variation is 3.633 dB for a Hg2+ concentration of 10 μmol/L (Fig. 3), and the detection range is 1 pmol/L-10 μmol/L. In Fig. 4, a linear relationship between the amplitude response and the logarithm of the Hg2+ concentration is obtained. The limit of detection (LOD) for mercury ions is thus estimated to be 0.5 pmol/L based on the mean of the blank background signal plus three times the standard deviation of the noise. In addition, the sensor exhibits good specificity (Fig. 5) and reproducibility (Fig. 6), and its application potential in environmental pollutant monitoring and point-of-care testing is confirmed by sample recovery measurements (Table 1): we obtain recoveries of 92.48%-105.38% with relative standard deviations of 2.56%-10.20% for the actual samples.
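The "blank mean plus three standard deviations" LOD criterion, inverted through a linear log-concentration calibration, can be sketched as follows. All numbers here (blank readings, slope a, intercept b) are hypothetical placeholders, not values from this paper:

```python
# LOD estimate: signal threshold = mean(blank) + 3 * stdev(blank),
# then invert a hypothetical linear calibration amplitude = a*log10(C) + b.
from statistics import mean, stdev

blank = [0.010, 0.012, 0.009, 0.011, 0.010]   # blank amplitude readings, dB
threshold = mean(blank) + 3 * stdev(blank)     # minimum distinguishable signal

# Hypothetical calibration line: amplitude (dB) = a * log10(C / (1 mol/L)) + b
a, b = 0.30, 3.70
log10_lod = (threshold - b) / a                # invert the linear fit
print(f"LOD ~ 10^{log10_lod:.1f} mol/L")
```

With real data, the slope and intercept would come from the linear fit in Fig. 4 and the blank readings from repeated background measurements.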
In summary, we demonstrate a unique biosensor using an aptamer-SA-MNPs structure based on TFBG-SPR for ultra-trace mercury ion detection. Aptamer-based sensors with high affinity and strong specificity solve the key problems of stability and selectivity in mercury ion biosensing. TFBG-SPR sensors operate in the common communication wavelength band and exhibit minimal temperature cross-sensitivity and high refractive index sensitivity. By combining the high amplitude sensitivity of TFBG-SPR, the specific aptamer-mercury ion recognition, and the effective signal amplification of MNPs, our sensor achieves an LOD down to 0.5 pmol/L and a detection range of 10⁻¹²-10⁻⁵ mol/L. This sensor also exhibits good selectivity for Hg2+ over other divalent metal ions. Furthermore, this biosensor presents good recovery and is suitable for mercury ion detection in tap water and rabbit serum. The proposed sensor can be applied in environmental monitoring and clinical diagnosis, including tumor microenvironment studies and point-of-care testing.
The image sensor is a device that converts optical signals into electrical signals. It is widely employed in digital image acquisition and plays an important role in daily production and life. In the 1960s, the complementary metal oxide semiconductor (CMOS) image sensor (CIS) was invented, and in 1993 the CMOS active pixel image sensor (APS) with intra-pixel charge transfer was proposed. The development of the photosensitive pixel unit structure has greatly improved the overall performance of CIS; thanks to this superior performance, CIS has gradually replaced the charge coupled device (CCD) image sensor and quickly become a research hotspot. However, in specific application fields such as aerospace, medicine, and nuclear energy, CMOS image sensors are not immune to radiation. Radiation particles often have strong penetration ability and can interact with the electronic devices inside the CIS. For a single electronic device, radiation changes its electrical parameters; for the entire image sensor, radiation causes problems such as increased dark current and noise, greatly reducing image quality and bringing great uncertainty to scientific research, medical diagnosis, and industrial production. Thus, we report an experiment and analysis of proton irradiation damage to the 6T CIS transmission gate. The transmission gate is implemented as discrete transmission transistors and irradiated with 60 MeV protons, so the effect of proton irradiation on the electrical performance of the transmission transistor can be studied at the level of the local pixel-circuit structure.
By characterizing the electrical parameters, we reveal the degradation and annealing mechanism of the 6T CIS transmission transistor induced by high-energy proton irradiation at different fluences, providing data support for the simulation and radiation-resistant reinforcement of the CIS transmission transistor.
First, in the layout design of the transmission-gate single transistor, the four terminals of the transmission gate (TG) are connected to four pads and brought out to the pins by wire bonding, and the electrical performance of the transmission transistor is tested. The samples are then irradiated at the Xi'an 200 MeV Proton Application Facility (XiPAF) with 60 MeV protons at fluences of 1×10¹⁰, 1×10¹¹, and 1×10¹² cm⁻². During irradiation, the sample is fixed on an anti-static sponge without voltage bias. After irradiation, the electrical parameters of the single transistor are measured on a B1500A semiconductor parameter analyzer to study the changes in the electrically sensitive parameters relative to the pre-irradiation values and the post-irradiation annealing behavior.
The electrical performance of the transmission transistor is tested after irradiation. The experimental results show that, with increasing proton fluence, the forward shift of the threshold voltage grows [Fig. 3(a)], and Fig. 3(b) indicates that the decrease in saturation current also grows. As shown in Fig. 4(a), the change in threshold voltage decreases only slightly from 0 to 24 h of annealing, but decreases markedly at 72 h. As shown in Fig. 4(b), the post-irradiation decrease in saturation current diminishes with increasing annealing time. Combined with TCAD simulation analysis, proton irradiation generates oxide-layer trap charge and interface-state trap charge, degrading the single transistor of the CIS transmission gate. As shown in Fig. 8(a), the simulated trend of the threshold voltage change matches the test results, and Fig. 8(b) reveals that the decrease in saturation output current increases, also consistent with the test results.
After proton irradiation, the threshold voltage and saturation current of the single transistor of the CIS transmission gate change significantly, the changes increase with rising fluence, and they then affect the normal function of the whole pixel. A 72 h annealing experiment is then carried out, showing that the irradiation-induced changes in the electrical parameters of the transmission-gate transistor can be partially recovered by room-temperature annealing. The post-irradiation changes in threshold voltage and saturation current are mainly caused by oxide-layer trap charge and interface-state trap charge. The amount of irradiation-induced trap charge increases with rising irradiation fluence; among them, the interface-state trap charge increases continuously, and the forward drift of the threshold voltage intensifies. This makes it difficult to form an inversion layer in the P-type base region at the Si/SiO2 interface, decreasing the saturation current.
Progress in imaging and display optical systems exerts a significant influence on the development of science and technology. Imaging and display systems intrinsically utilize optical elements (geometric or phase elements) to modulate optical wavefronts and achieve the expected imaging relationships, system specifications, and structure requirements. As representative geometric and phase elements respectively, freeform optical elements (FOEs) and holographic optical elements (HOEs) have significant advantages in optical system design. FOEs possess high degrees of design freedom, which can greatly enhance the ability to modulate wavefronts and improve imaging performance; additionally, freeform surfaces can correct the aberrations of optical systems with off-axis nonsymmetric structures. Meanwhile, HOEs can unconventionally deflect rays at large angles due to their unique ability to modulate optical wavefronts. They can dramatically reduce the weight and volume of optical systems owing to their lightweight form factor, realize better optical see-through experiences and full-color display thanks to their unique selectivity and multiplexing ability, and achieve mass production owing to relatively simple fabrication methods and low costs. It is also easy to fabricate large HOEs thanks to their unique fabrication methods. Considering these advantages, designers may build imaging and display optical systems that combine FOEs and HOEs, significantly improving the degrees of design freedom and the ability to correct aberrations. Additionally, such combinations can achieve advanced system specifications, excellent performance, compact and lightweight forms, and unconventional off-axis nonsymmetric structures, thereby promoting the further development of optical systems.
It is important to summarize the existing design methods of imaging and display systems combining FOEs and HOEs, analyze the problems restricting their further development, and predict the development trends. Meanwhile, it is essential to summarize the existing designs and applications of these systems to better guide and promote their development.
We describe the basic principles, ray-tracing models, advantages, and applications of FOEs and HOEs respectively, summarize the system design methods, review the designs and applications of these systems, and analyze current restrictions and future development trends. The design of these systems can be divided into three types. 1) FOEs and HOEs are simultaneously utilized to correct the aberrations of optical systems. 2) The freeform surface is adopted as the substrate shape of HOEs. 3) During HOE fabrication, FOEs are introduced to modulate the recording waves of HOEs. In practical optical system designs, the design can combine the above three ways. The first way directly builds ray-tracing models of freeform optics and HOEs in the optical system design and then adopts an optimization strategy to achieve the expected requirements. The second way coats the holographic recording medium on the freeform substrate to yield HOEs with freeform substrates. The third way bridges the numerical relationship between freeform optics and the recording waves of HOEs to fabricate HOEs with unconventional profiles of holographic phase function or grating vector. The methods for defining HOEs based on ray tracing are described in detail, including the phase functions (direction cosines) of the recording waves, the holographic phase function, and the holographic grating vector, which guide the basic combined design schemes. We review the ways of fabricating HOEs, including whole-area exposing and sub-area exposing (holographic printing), to provide references for fabricating combined designs. The calculation methods for starting points of optical systems based on HOEs are summarized in detail, including point-by-point construction and iteration methods, confocal methods, and simultaneous multiple surface (SMS) methods, which guide the design of optical systems combining FOEs and HOEs.
The designs and applications of these systems are summarized based on the classification of HOEs, including augmented reality (AR) near-eye display systems, head-up display (HUD) systems, and HOE-lens imaging systems. Additionally, combined designs of freeform optics and other types of phase elements are also presented, such as liquid crystal polarization holograms (LCPHs) based on freeform exposure and metasurfaces with freeform substrates, which provide guidance for the combined design of FOEs and HOEs.
Studies on the system design combining FOEs and HOEs have made significant progress in basic principles, design frameworks, and fabrication methods, and such designs have been employed to develop imaging and display systems with high performance, novel structures, and lightweight form factors. Some problems and challenges remain for this research, including how to fabricate HOEs with freeform substrates through innovative coating technologies for the holographic recording medium, how to correct chromatic aberrations in imaging and display systems using HOEs, how to reduce the nonuniformity of diffraction efficiency and the stray light of systems combining FOEs and HOEs, and how to conduct tolerance analysis of such systems. In summary, research on the design of imaging and display systems combining FOEs and HOEs will promote the development of next-generation high-performance and compact optical systems.
Imaging spectrometers based on acousto-optic tunable filters (AOTFs) are widely recognized for their rapid tuning, reliability, repeatability, and ability to change spectral channels with ease. These instruments have been extensively studied for space remote sensing and reconnaissance. Such spectrometers should function accurately over a broad temperature range to deliver precise spectral information in various operating environments. However, spectral data accuracy is compromised by ambient temperature fluctuations, which affect both the AOTF's spectral tuning and the spectrometer's radiation response. The tuning-relationship shift results predominantly from temperature-induced changes in the refractive index of the acousto-optic crystal and in the acoustic wave velocity, which alter the acousto-optic interaction within the crystal. Similarly, the spectrometer's radiation response drifts due to alterations in the AOTF's diffraction efficiency and temperature-dependent changes in the performance of both electronic and optical components. Although previous studies have taken the temperature drift of the radiation response into account during radiometric calibration, the spectral wavelength stability of the output images must be ensured first; otherwise, radiometric calibration cannot be achieved. Therefore, implementing temperature corrections during spectral calibration is essential to prevent wavelength deviations in the output images under temperature shifts, which would result in erroneous radiometric calibration.
We propose a spectral and radiometric calibration method for correcting temperature effects. Firstly, an AOTF tuning model that incorporates a temperature variable is built. Within this model, the relationship between the drive frequency and the optical wavelength, acoustic wave velocity, refractive index, angle of incidence, and acoustic cut angle is derived. The effect of acoustic wave velocity on the drive frequency is considered independently, and a temperature increase brings about rising acoustic wave velocity, leading to a higher drive frequency (Fig. 2). Then the effect of the refractive index on the drive frequency is considered separately, and a temperature rise leads to increasing refractive index, which also results in a higher drive frequency. Meanwhile, both crystal physical parameters are considered concerning their influence on the drive frequency and compared with the actual measured frequency. At different temperatures, the response of the AOTF’s driving frequency at different wavelengths is measured. The central driving frequencies at various temperatures and wavelengths are extracted, and then a polynomial fitting is employed to deduce the tuning relationship between the central driving frequency, temperature, and optical wavelength. This allows for the correction of temperature-induced tuning drifts during the spectral calibration. During the radiometric calibration, the spectrometer is loaded with adjusted driving frequencies to ensure that the system response can track the required wavelengths at all temperatures. The system responses at different temperatures and wavelengths are collected to obtain the spectral radiometric calibration coefficients that include the temperature variable. By adopting interpolation methods, the spectral radiometric calibration coefficients at any temperature are obtained to realize temperature-corrected radiometric calibration (Fig. 3).
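The final interpolation step described above can be sketched as simple piecewise-linear interpolation of a per-channel calibration coefficient over temperature. The coefficient table and function name below are illustrative assumptions, not values or code from the paper:

```python
# Sketch: radiometric calibration coefficients measured at a few chamber
# temperatures are linearly interpolated to the current operating temperature.
# The gain values in the table are hypothetical.

def interp_coeff(temp, table):
    """Piecewise-linear interpolation of a calibration coefficient.
    `table` is a sorted list of (temperature_degC, coefficient) pairs."""
    if temp <= table[0][0]:
        return table[0][1]
    if temp >= table[-1][0]:
        return table[-1][1]
    for (t0, c0), (t1, c1) in zip(table, table[1:]):
        if t0 <= temp <= t1:
            w = (temp - t0) / (t1 - t0)
            return c0 + w * (c1 - c0)

# Hypothetical gain coefficients for one spectral channel.
gain_table = [(-30, 1.20), (-10, 1.12), (10, 1.05), (30, 1.00), (50, 0.97)]
print(interp_coeff(0.0, gain_table))   # midway between the -10 and 10 degC nodes
```

The same lookup applied per spectral channel yields the temperature-corrected radiometric calibration coefficients at any operating temperature.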
Multiple wavelengths within the range of 3.7 to 4.5 μm are selected to measure the frequency response of the spectral imaging system at various temperatures between -30 and 50 ℃ [Figs. 4(a) and 4(b)]. As the temperature increases, the central driving frequency shifts towards higher frequencies. For the spectral channel with a central wavelength of 4.0 μm, the central driving frequency is 20.05 MHz at a working temperature of -30 ℃, and 20.14 MHz at a working temperature of 50 ℃. It is evident that when there is an approximate temperature difference of 80 ℃ in the working conditions, the driving frequency needs an adjustment of 0.09 MHz to ensure the output wavelength stability. If a fixed driving frequency is applied at different temperatures, the central wavelength of the output from each spectral channel of the system drifts (Table 3), with the wavelength drifting by 0.0015-0.0025 μm per 10 ℃. After completing spectral calibration, the driving frequency accuracy at each wavelength is significantly improved [Fig. 4(d)], and the average driving frequency deviation at different temperatures is reduced (Table 4). The response of the spectral imaging system drifts with temperature, and the spectral data obtained at different temperatures will show variations with temperature. When the temperature rises from -20 to 30 ℃, the system response decreases and then the calculated spectral radiance decreases [Fig. 5(a)]. After radiometric calibration corrected for temperature, the spectral radiance accuracy improves at lower temperature ranges (Table 5).
To enhance the temperature stability of spectrometer data, we propose a method for correcting the temperature influence on spectral and radiometric calibration. Firstly, a tuning model of the AOTF incorporating temperature variables is built. We analyze the mechanism by which temperature variations affect the characteristics of AOTFs via the physical parameters of the crystal material, among which the acoustic wave velocity has the most significant effect. This model corrects the temperature-induced spectral drift in spectral calibration, achieving wavelength tracking during the variable-temperature radiometric calibration and ensuring wavelength stability in subsequent radiometric calibrations. Thereafter, the spectral radiometric calibration coefficients that include temperature variables are determined to complete the radiometric calibration. Relying on a laboratory setup, we construct a mid-wave infrared (3.7-4.5 μm) calibration verification system for AOTF spectral imaging temperature correction to validate the calibration method over the temperature range from -30 to 50 ℃. The results indicate that the average driving frequency deviation at a low temperature of -30 ℃ is reduced from 41.1 to 0.29 kHz, effectively suppressing the deviation of the spectral radiance from theoretical values.
Structural color is produced by the interaction between light and micro-nano structures. As long as the refractive index and lattice of the material remain unchanged, the color stays stable; structural color is therefore more stable than chemical dyes, and structural color devices play a crucial role in color printing, medical diagnosis, and information storage. In particular, dynamic structural color sensing devices surpass the limitations of static devices and possess superior flexibility, so they find more applications in areas such as anti-counterfeiting, dynamic full-color displays, and security encryption. Research on dynamic, adjustable structural color sensing devices thus holds practical significance. Some researchers have realized dynamic structural color displays by employing periodic micro-nano structures, such as metal hole arrays and metal disk arrays, combined with adjustable insulating materials based on electrochemical activity or temperature/humidity sensitivity. However, the fabrication of such structural color devices, which relies on surface plasmon resonance (SPR) morphology, is cumbersome and involves multiple processing techniques, making it impractical for production. We propose and implement a metal-insulator-metal (MIM) structure based on hydrogel films. By leveraging the humidity-sensitive properties of the insulating-layer material and the Fabry-Perot (F-P) resonance phenomenon, we realize dynamic, rapid, and easily fabricated structural color humidity sensing.
In the hydrogel-based metal-insulator-metal structure with dynamically tunable colors, an Ag-PVA-Ag stack forms an asymmetric F-P cavity. When incident light shines on the structure's surface, the thin top silver film both transmits and reflects light, while the 200 nm thick bottom silver film prevents the incident light from passing through. The middle PVA insulating layer forms the F-P cavity and, together with the upper and lower silver films, controls light waves through interference. The upper and lower silver films are prepared by magnetron sputtering, while the middle insulating layer is created by spin-coating a PVA aqueous solution. The thicknesses of the PVA layer and the top silver film affect the resonance wavelength, full width at half-maximum (FWHM), and intensity of the F-P resonance dip. By adjusting these thicknesses, light of specific frequencies can be localized within the metal microstructure while other frequencies are reflected, so the structure displays different colors and a wide range of static colors is achieved. As the relative humidity of the external environment increases, the refractive index of the PVA layer decreases while its thickness increases. This alters the cavity length of the F-P cavity, changing the optical path of the incident light within the cavity and realizing humidity-sensitive dynamic color displays.
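The color tuning described above follows the textbook Fabry-Perot resonance condition. The form below is a generic sketch (the reflection phases of the two silver mirrors are assumptions, not quantities reported in this paper):

```latex
% Reflection dips of the asymmetric F-P cavity occur when the round-trip
% phase is a multiple of 2*pi; phi_t and phi_b denote the reflection phase
% shifts at the top and bottom Ag films.
\[
  \frac{4\pi\, n_{\mathrm{PVA}}(RH)\, L(RH)}{\lambda_m} + \varphi_t + \varphi_b
  = 2\pi m, \qquad m = 1, 2, \dots
\]
```

Because the humidity-driven growth of the thickness L outweighs the decrease in the refractive index n_PVA, the optical path n_PVA·L increases with relative humidity, which is consistent with the red shift of the resonance reported below.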
When the bottom silver film is 200 nm thick and the top silver film is 10 nm thick, we study the influence of the PVA film thickness on resonance wavelength and color (Fig. 6). As the PVA thickness increases, the resonance wavelength gradually red-shifts while the FWHM remains almost unchanged, and the coordinates in the CIE 1931 chromaticity diagram rotate clockwise. Then, with a 200 nm bottom silver film and a 110 nm PVA film, we investigate the influence of the top silver film thickness on resonance wavelength and color (Fig. 7). Increasing the top silver film thickness causes a gradual blue shift with a narrowing FWHM; the resonance intensity first increases and then decreases, and the chromaticity coordinates slowly move clockwise. Finally, for a 200 nm bottom silver film, a 10 nm top silver film, and PVA thicknesses of 90 nm and 110 nm, we examine the variation of resonance wavelength and color as the relative humidity increases from 3.0% to 84.2% (Figs. 10 and 11). Owing to the self-driven swelling of the PVA film, both samples show a red shift in resonance wavelength, and their chromaticity coordinates trace a full clockwise loop in the CIE 1931 diagram. The average relative humidity sensitivities are 0.998 nm/% and 0.862 nm/%, respectively.
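The quoted average sensitivities follow directly from the reported wavelength shifts of the two samples (474 nm to 555 nm and 526 nm to 596 nm) divided by the humidity span, as this quick consistency check shows:

```python
# Consistency check of the average humidity sensitivities:
# sensitivity = (resonance wavelength shift) / (relative humidity span).
rh_span = 84.2 - 3.0                        # relative humidity span, %

s_90nm = (555 - 474) / rh_span              # 90 nm PVA sample, nm/%
s_110nm = (596 - 526) / rh_span             # 110 nm PVA sample, nm/%
print(round(s_90nm, 3), round(s_110nm, 3))  # -> 0.998 0.862
```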
We present and investigate hydrogel-based MIM structures with dynamically tunable colors, exploiting the unique property of the PVA film: it undergoes self-driven swelling/contraction in different relative humidity environments, and its refractive index decreases with rising relative humidity. Combined with the F-P resonance phenomenon, the resonance wavelength shifts with relative humidity, so light of different wavelengths is absorbed or reflected in different humidity environments, producing corresponding changes in the structural color. Experimental results demonstrate that the displayed color can be controlled through the thicknesses of the PVA film and the top Ag film in the MIM structure. Furthermore, as the relative humidity gradually increases from 3.0% to 84.2%, the resonance wavelengths of the two samples shift from 474 nm to 555 nm and from 526 nm to 596 nm, respectively; the average relative humidity sensitivities are 0.998 nm/% and 0.862 nm/%. Additionally, the chromaticity coordinates of the samples rotate clockwise, indicating a color change. Compared with existing structural color devices, the fabricated MIM humidity sensor achieves dual-parameter sensing of relative humidity and color, with rapid response, simple fabrication, low cost, and small size. Considering the deformability, self-driven nature, and biocompatibility of hydrogels, hydrogel-based structural color devices can further be applied to complex dynamic nano-photonic coloration devices.
Color reproduction plays a very important role in textile, printing, telemedicine, and other industries, but because of the manufacturing process or color rendering mechanism of digital image acquisition equipment, color image transmission between digital devices often suffers color distortion. Once such distortion appears, these industries can suffer losses or even irreversible damage. In color image acquisition, the most commonly employed device is the digital camera, and converting the color image collected by the camera into the image seen by the human eye (known as camera characterization) is an essential step. Although existing nonlinear camera characterization methods currently offer the best characterization performance, they suffer from hue distortion. To retain the important hue-plane-preserving property and further improve characterization performance, we propose a hue-subregion weighted constrained hue-plane preserving camera characterization (HPPCC-NWCM) method.
The proposed method improves weighted constrained hue-plane preserving camera characterization from the perspective of optimizing the hue subregions. First, the camera response values RGBs and the colorimetric values XYZs of the training samples are synchronously preprocessed, the hue angles are calculated, and the hue subregions are preliminarily divided. Then, within each hue subregion, the minimum hue-angle differences between each training sample and the samples in the subregion are employed as the weighted power function, and a pre-calculation camera characterization matrix (pre-calculation matrix) is computed for each sample. Additionally, the weighted constrained normalized camera characterization matrix of the hue subregion is obtained by weighted averaging of the pre-calculation matrices using the weighted power function. Combining the characterization results of the samples within each hue subregion and of all samples, the number and position of the hue subregions are optimized, and the configuration with the best performance is obtained. To verify the performance improvement of this method, we conduct simulation experiments. First, the hue-subregion number selection experiment is carried out by combining three cameras and three groups of object reflectance datasets under the D65 illuminant. Then, two of the cameras from the previous experiment are used to compare the proposed method with existing methods, and the exposure independence of each method is verified by changing the exposure level. Finally, experiments on the SFU dataset are repeated with 42 cameras under three illuminants and compared with the existing methods.
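The weighted-averaging step described above can be sketched as follows. This is a simplified illustration, not the authors' exact formulation: the 3x3 pre-calculation matrices and hue angles are synthetic placeholders, and the exponent `p` of the weighted power function is an assumed value.

```python
import numpy as np

# Sketch of weighted averaging of pre-calculation matrices inside one hue
# subregion. Matrices and hue angles are synthetic placeholders; the exponent
# p of the weighted power function is an assumed value.

def weighted_characterization_matrix(matrices, hue_angles, target_hue, p=2.0, eps=1e-6):
    """Weighted average of per-sample pre-calculation matrices.

    Weights fall off as a power of the hue-angle difference between each
    training sample and the target hue, so samples with nearby hues dominate.
    """
    matrices = np.asarray(matrices, dtype=float)          # shape (N, 3, 3)
    diffs = np.abs(np.asarray(hue_angles) - target_hue)   # shape (N,)
    weights = 1.0 / (diffs + eps) ** p                    # power-function weights
    weights /= weights.sum()                              # normalize
    return np.tensordot(weights, matrices, axes=1)        # shape (3, 3)

# Two synthetic pre-calculation matrices at hue angles 10 and 80 degrees.
M = [np.eye(3), 2.0 * np.eye(3)]
M_hat = weighted_characterization_matrix(M, hue_angles=[10.0, 80.0], target_hue=12.0)
```

Because the target hue lies close to the first sample, the averaged matrix is dominated by the first pre-calculation matrix, which is the intended behavior of the power-function weighting.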
The method is verified by many simulation experiments and real camera experiments. In the hue-subregion number selection experiment, the camera characterization performance generally improves with increasing hue-subregion number (Fig. 7), tends to stabilize when the number reaches 6, and is best when the number is 9. The performance with a subregion number of 2 is worse than that with 1; our analysis suggests that a small subregion number results in poor universality and low specificity of the characterization matrix within each hue subregion, which degrades camera characterization performance. In the comparison with existing methods, the performance of this method is about 10% to 20% higher than those of the existing hue-plane preserving camera characterization methods, and it is better than or close to the nonlinear method (Table 1). In the variable exposure experiment, the performance of each method is close to that in the fixed exposure experiment, and the linear method and the root-polynomial method perform similarly, which proves their exposure independence; the polynomial method, by contrast, performs obviously worse, showing that it lacks exposure independence (Tables 1 and 2). In the simulation experiments with supplementary illuminants and cameras, the comparative trend of the results is basically the same as in the previous experiment, and this method performs even better: in addition to outperforming the existing camera characterization methods, it is better than or equal to the nonlinear methods in many settings (Table 3).
We improve the weighted constrained hue-plane preserving camera characterization method by optimizing the hue subregions: the number and position of the subregions are optimized to achieve a more accurate camera characterization transformation for each hue subregion. Through theoretical derivation and experimental verification of the characterization transformation, the method is shown to feature exposure independence, excellent hue-plane preservation, and a combination of the stability of low-order methods with the accuracy of high-order methods. In simulation experiments, it outperforms the existing hue-plane preserving methods and is better than or close to other nonlinear methods. In the multi-camera supplementary experiments, the improvement in the 95th-percentile error shows that the method has strong robustness and practical significance.
Computed tomography (CT) is a widely used technique for the reconstruction of the internal structure of three-dimensional (3D) objects. The technique can obtain information about the interior of an object under non-contact and non-destructive conditions. Therefore, it is widely used in the fields of industry, medicine, geology, and material science. In the electronics industry, computed laminography (CL) is often used to collect projection data, so as to perform quality inspection and failure analysis of integrated circuits, multilayer printed boards, and other plate-like electronic devices. However, CL scanning causes interlayer aliasing in the reconstructed image. The current preferred method is to convert the CL projection to an equivalent CT projection with large cone angles using a projection transformation method called CL reprojection (CLRP) and then reconstruct them using the Feldkamp-Davis-Kress (FDK) algorithm. However, the FDK algorithm suffers from gray-scale degradation and edge artifacts during the reconstruction of large cone angles, which affects the reconstruction quality. In this study, we propose a filter path transformation algorithm based on data rearrangement. This algorithm can reduce the gray-scale degradation and edge artifacts of the reconstructed image during large cone angle reconstruction. In addition, we apply the algorithm to the 3D reconstruction of plate-like objects by combining it with the projection transformation method. It is expected to improve the reconstruction effect when the CL projection is converted to an equivalent CT projection with large cone angles.
In this study, a filter path transformation-based reconstruction algorithm is proposed and applied to the reconstruction of plate-like objects. The projection of the plate-like object acquired by CL scanning is first converted to an equivalent CT projection with large cone angles by the CLRP method, and the proposed algorithm is then applied to realize 3D reconstruction. The algorithm uses two parameters to rearrange the CT projection and adjusts the projection surface to change the longitudinal coordinates of the projected data for filter path transformation. In addition, we derive the method for calculating these parameters during reconstruction based on the projection transformation method; with these parameters, 3D reconstruction of the plate-like object becomes possible. To investigate the effect of filter paths on the reconstruction results, we use the 3D Shepp-Logan model for simulation and reconstruction, compute the reconstruction errors for different parameter combinations, and plot the error surface. We then compare the proposed algorithm with other algorithms to verify its effectiveness. In addition, we design a printed circuit board (PCB) model to verify the effectiveness of the proposed algorithm combined with the projection transformation method in plate-like object reconstruction. Finally, to investigate the feasibility of the method in practical applications, we also use CL scanning equipment to reconstruct real PCB samples and objectively analyze the efficiency and flaws of the algorithm.
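The core operation that the filter path determines is a 1D ramp filtering of the (rearranged) projection data, as in all FDK-type algorithms. A minimal sketch of that row-wise ramp filtering step only; the data rearrangement and the two path parameters of the paper's algorithm are not reproduced here, and the flat test projection is synthetic:

```python
import numpy as np

# Core filtering step shared by FDK-type algorithms: a 1D ramp filter applied
# along each detector row. The filter path transformation in the paper changes
# which rows of the rearranged projection this filter runs along; the
# rearrangement parameters themselves are not reproduced in this sketch.

def ramp_filter_rows(projection):
    """Apply an ideal frequency-domain ramp filter |w| along the last axis."""
    n = projection.shape[-1]
    freqs = np.fft.fftfreq(n)                  # frequencies in cycles/sample
    ramp = np.abs(freqs)                       # ideal ramp filter |w|
    spectrum = np.fft.fft(projection, axis=-1)
    return np.real(np.fft.ifft(spectrum * ramp, axis=-1))

proj = np.ones((4, 8))          # flat synthetic projection rows
filtered = ramp_filter_rows(proj)
```

A flat projection has only a DC component, which the ramp filter suppresses entirely, so `filtered` is all zeros; this zero-DC behavior is what removes the low-frequency blur in backprojection.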
In this paper, we investigate the effect of filter paths on the reconstruction results of the proposed algorithm. The experimental results indicate that when reconstruction is performed with unequal parameters, black or white artifacts appear at both ends of the reconstructed image (Fig. 8). The error is minimized when the two parameters are equal, and the most desirable reconstruction results are obtained (Fig. 10). This shows that the proposed algorithm can optimize the reconstruction by using the corresponding filter path under the condition of equal parameters. We then compare the proposed algorithm with the FDK, P-FDK, and T-FDK algorithms under this filter path. The proposed algorithm produces fewer edge artifacts than the other algorithms as the cone angle increases, and its reconstruction results are more satisfactory (Figs. 11 and 13). On the basis of this verification, simulated reconstruction and comparison of the converted CT projection with large cone angles are carried out using the PCB model to examine the viability of the proposed algorithm in CL scanning reconstruction of plate-like objects. The results show that, compared with the FDK algorithm, this algorithm produces fewer artifacts (Fig. 16) and more satisfactory quantitative evaluation indices of the reconstructed image (Table 3). In addition, the reconstruction results of PCB samples show that the images reconstructed by the proposed algorithm have fewer artifacts and are sharper (Fig. 18). These results demonstrate the feasibility of the algorithm in plate-like object reconstruction. Finally, as shown by the comparison of reconstruction times (Table 5), the proposed algorithm takes more time for reconstruction.
Despite the decrease in efficiency, the algorithm is well suited to parallel accelerated computation, which can reduce the reconstruction time to an acceptable range. Moreover, the effect of interpolation on the spatial resolution of the reconstruction can be mitigated by increasing the number of projections. Therefore, the proposed algorithm still has practical application value.
In this paper, we propose a filter path transformation-based reconstruction algorithm to reduce the gray-scale degradation and edge artifacts in FDK reconstruction with large cone angles and to enhance the quality of reconstructed images. The experimental results show that this algorithm can effectively reduce edge artifacts under the condition of equal parameters, and that it outperforms other algorithms in large cone angle reconstruction. In addition, the combination of this algorithm with the projection transformation method is applied to the 3D reconstruction of plate-like objects, and the reconstructed images are clearer and have smaller errors than those reconstructed by the FDK algorithm. Although this algorithm is slower, its efficiency can be improved by parallel acceleration in real engineering applications, and the loss of spatial resolution in the reconstructed images can be reduced by increasing the number of projections. Overall, the proposed algorithm reduces artifacts in FDK reconstruction with large cone angles, is simple and easy to implement, and thus has practical value in plate-like object reconstruction.