Influence of Scale Effect of Photodetector on Visibility Measurement Uncertainty

Xiao Shaorong, Liu Bohan, Shi Liufeng, and Huang Biao

Objective Visibility measurement is used not only for weather forecasting but also widely in aviation, maritime navigation, highway transportation, military operations, and environmental monitoring. In modern visibility observation, instrument measurement has replaced manual observation. Currently, mature visibility measuring instruments are of the transmission and scattering types. The advantage of the transmission type is that it can detect atmospheric transmittance without any assumptions about atmospheric conditions. Owing to its large sampling volume and high accuracy, the transmission visibility meter is widely used at airports. For several years, the World Meteorological Organization has been conducting a tracking study on the measurement errors of visibility meters deployed globally. The results show that the main source of measurement uncertainty is incorrect alignment of the transmitter and receiver. The errors caused by this misalignment can be attributed to drift of the spot projected by the detection beam on the sensitive area of the detector. In this paper, the influence of the photodiode on the uncertainty of visibility measurement is reported, the law governing the influence of spot drift on the uncertainty of transmission visibility measurement is established, and strategies to suppress the effects of beam drift are presented.

Methods The photodiode spectral response distribution equation was derived based on the quantum conversion efficiency. In transmission visibility meters, photodiodes with large sensitive areas are the most common choice because they make full use of the incident energy. However, the sensitive area has a finite size, and the spectral response across it is not uniformly distributed because the edge of the sensitive area acts as a recombination center for photogenerated carriers; consequently, the output current of the photodiode differs when a light spot of the same power drifts to different positions. An experimental visibility receiver setup was constructed for verification, and two photodiodes made by different manufacturers were selected. Their nominal sensitive area was 6.0 mm×6.0 mm, and the diameter of the light spot projected on the sensitive surface was approximately 0.3 mm. A micrometer was used to set the position of the light spot on the sensitive area, and a low-noise *I*/*V* circuit was designed to detect the output current of the photodiodes. The *I*/*V* circuit output voltage was acquired by a high-precision digital voltmeter. To reduce the influence of laser output fluctuations, a standard laser power meter was used to monitor the laser output power during the experiment. Throughout the experiment, the light spot remained within the sensitive area. The spectral response distributions of the two photodiodes were measured separately, and the contribution of the voltage offset error to the uncertainty of visibility measurement was derived according to Koschmieder's law.
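A minimal sketch of this derivation, assuming the classical 2% contrast threshold (coefficient 3.912 in Koschmieder's law); the function and variable names are illustrative and not from the paper:

```python
import math

K = 3.912  # Koschmieder coefficient for a 2% contrast threshold

def visibility(transmittance: float, baseline_m: float) -> float:
    """Meteorological optical range from transmittance T = exp(-sigma*b)."""
    sigma = -math.log(transmittance) / baseline_m  # extinction coefficient
    return K / sigma

def rel_visibility_uncertainty(vis_m: float, baseline_m: float,
                               rel_voltage_err: float) -> float:
    """Relative visibility error caused by a relative receiver-voltage
    (apparent transmittance) error, from dV/V = (V/(K*b)) * dT/T."""
    return vis_m / (K * baseline_m) * rel_voltage_err
```

Under these assumptions, a 1% voltage deviation at V = 2 km on a 70 m baseline maps to roughly a 7% relative visibility error, which illustrates why spot drift matters more at longer visibilities and shorter baselines.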

Results and Discussions Two types of photodiodes, UV-0 ^{**}DQ and 2CU ^{**}, were measured. The center of the detector's sensitive area was taken as the reference point, and the effective value of the preamplifier output voltage was recorded as a function of the position of the light spot (Fig. 3). The diameter of the spot was 1/20 of the side length of the sensitive area, which satisfied the condition that the light spot be sufficiently small (Eq. 14). From the measured effective output voltage *U* and the laser power, the spectral responsivity *R _{λ}* distributions of the two photodiodes were calculated (Fig. 4), and the relationship between the relative deviation from the center responsivity and the displacement of the light spot was obtained for both photodetectors (Fig. 5). The least-squares method was used to fit the variation of the output voltage and responsivity with displacement using a quadratic polynomial. Based on the fitted equation, the relative deviation of the voltage from its value with the spot at the center was calculated as the spot drifted away from the center (Fig. 6). For visibilities of 2 and 10 km and baselines of 70 and 30 m, the contribution of the spot drift on the sensitive area, relative to the detector center, to the relative uncertainty of the visibility measurement was obtained (Tables 1 and 2). Measurements and calculations showed that the center of the sensitive area has the highest spectral responsivity, and the farther the spot is from the center, the lower the responsivity. Within a certain range around the center, the spectral responsivity is relatively uniform. Therefore, when the spot drift of the beam is confined to this region, the contribution of the scale effect of the photodiode to the measurement uncertainty of visibility can be ignored.
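The quadratic least-squares fit described above can be sketched as follows. Assuming a response symmetric about the center, R(x) ≈ a + c·x², the two coefficients have a closed-form normal-equation solution (names are illustrative; the paper fits a general quadratic polynomial):

```python
def fit_parabolic_response(x, r):
    """Least-squares fit of responsivity r to a + c*x**2 for spot
    displacement x (model symmetric about the sensitive-area center)."""
    n = len(x)
    s2 = sum(xi ** 2 for xi in x)            # sum of x^2
    s4 = sum(xi ** 4 for xi in x)            # sum of x^4
    sr = sum(r)                              # sum of r
    sxr = sum(xi ** 2 * ri for xi, ri in zip(x, r))
    det = n * s4 - s2 * s2                   # normal-equation determinant
    a = (sr * s4 - s2 * sxr) / det           # center responsivity
    c = (n * sxr - s2 * sr) / det            # curvature (negative: edge roll-off)
    return a, c

def relative_deviation(x, a, c):
    """Relative deviation of responsivity at displacement x from center."""
    return (a + c * x ** 2) / a - 1.0
```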

Conclusions Owing to the finite size of the sensitive area of the photodiode, the edge of the sensitive area acts as a recombination center for photogenerated carriers. Regions closer to the edge have a greater probability of carrier recombination, so the quantum efficiency near the edge is lowest. Conversely, the probability of carrier recombination at the center of the sensitive area is lowest, and the quantum efficiency there is highest. Consequently, the photoelectric conversion responsivity varies across the sensitive area, and its distribution curve is approximately parabolic, with the most uniform photoelectric conversion found in the central region. Therefore, even for a detection system that does not require high accuracy, more accurate measurements can be achieved provided the light spot is located in the central region of the photodiode. For the visibility meter, the scale effect of the photodiode has a significant impact on the measurement uncertainty. When designing a transmission-type visibility detection system, it is necessary to select a photodiode with a minimal scale effect. It is equally important to optimize the optical parameters of the receiver so that the light spot projected on the sensitive area is sufficiently small and its center always remains within a small region near the center of the sensitive area. Alternatively, a uniform beam can be used to cover the entire sensitive area of the detector, effectively suppressing the contribution of the scale effect to the measurement uncertainty.

- Jun. 11, 2021
- Chinese Journal of Lasers
- Vol. 48, Issue 12, 1204001 (2021)
- DOI: 10.3788/CJL202148.1204001

Internal Propulsion Algorithm for Extracting Center of Line Laser Stripe

Li Weiming, Mei Feng, Hu Zeng, Gao Xingyu, and Yu Haoyong

Objective The line-structured light three-dimensional (3D) measurement system is widely used in object measurement, target detection, and other fields, and the laser stripe center extraction algorithm is its key technology. The center of a robust, uniform laser stripe is easily extracted. However, in many industrial environments there are uncertain, poor reflection factors, such as dirt and rust on the surface of the measured object and poor laser emission quality, which lead to areas of low and uneven brightness in the laser stripe images. In the robust region, the pixel value is close to 255 and differs greatly from the background pixel value, whereas the pixel values in these low and uneven brightness areas are random and uncertain. Thus, it is difficult to select an appropriate pixel threshold accurately with a fixed-threshold method, and an improperly selected threshold easily leads to poor extraction results. Many existing algorithms cannot easily determine the appropriate pixel threshold in these areas; they tend to produce large calculation errors and disconnection of the laser stripe centers there. It is therefore essential to improve the self-adaptability of the center extraction algorithm to solve this problem and improve robustness. At the same time, the complexity of many existing algorithms is high, and redundant scanning of image regions without a laser stripe decreases their speed, even though the laser stripe center calculation depends only on the laser stripe itself and its surrounding pixels.

Methods This study proposes an internal propulsion algorithm for laser stripe center extraction. First, according to the distribution characteristics of the laser stripe in the image, the algorithm uses an internal propulsion strategy to plan the search path. The search path moves forward or backward along the center of the stripe, reducing the processing of image regions without a laser stripe and thereby improving the computational speed of center extraction. In addition, an 8-connectivity search is used to eliminate noise points when searching for the starting point of internal propulsion. Second, the proposed internal propulsion algorithm is inspired by the mean shift algorithm, an unsupervised iterative clustering algorithm from machine learning, and borrows its ideas of moving, updating, and terminating; the internal propulsion algorithm is itself a clustering algorithm. The core process is as follows: calculate the initial center; move forward or backward by one pixel to obtain a predicted center; calculate a new threshold using the proposed adaptive threshold method; calculate the new center point; update the center point; and repeat until the end of the stripe. The center calculation uses the geometric center method, which extracts the laser stripe center as the geometric centroid of the upper and lower boundaries of the laser stripe cross-section; the stripe boundary is determined by the pixel threshold. The proposed algorithm uses an improved Otsu adaptive threshold method to update the threshold continuously during internal propulsion. The traditional Otsu method is a global threshold method; however, a global threshold handles detailed segmentation poorly and is computationally expensive. In this study, the traditional Otsu method is therefore improved for laser stripe center extraction: the single global threshold is replaced by multiple thresholds, and the maximum between-class variance is computed for each pixel column of the laser stripe within a limited length, so that each column of the laser stripe has an optimal threshold for center calculation.
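A minimal sketch of the column-wise step described above: a standard Otsu threshold applied to one pixel column, followed by the geometric center of the thresholded stripe. This is an illustrative reimplementation, not the authors' code:

```python
def otsu_threshold(column):
    """Otsu's method on one grayscale pixel column (values 0-255):
    pick the threshold maximizing the between-class variance."""
    hist = [0] * 256
    for v in column:
        hist[v] += 1
    total = len(column)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]                # background pixel count
        if w0 == 0:
            continue
        w1 = total - w0              # foreground (stripe) pixel count
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0               # background mean
        m1 = (sum_all - sum0) / w1   # foreground mean
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def geometric_center(column, threshold):
    """Center row of the stripe: midpoint of the first and last rows
    whose pixel value exceeds the threshold, or None if no stripe."""
    rows = [i for i, v in enumerate(column) if v > threshold]
    if not rows:
        return None
    return (rows[0] + rows[-1]) / 2.0
```

In the proposed algorithm this pair of steps would run per column during propulsion, restricted to a limited window of rows around the predicted center rather than the full column shown here.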

Results and Discussions The proposed algorithm uses an internal propulsion strategy to plan the search path, reducing the processing of image regions without a laser stripe and thus improving the computational speed of center extraction (Table 3). The improved Otsu adaptive threshold method overcomes the unsuitability of a global threshold for detailed segmentation and reduces the computational effort, while also making the laser stripe center extraction more robust. Using this method, the proposed algorithm significantly improves the robustness and accuracy of center extraction: it solves the problem of disconnection in stripe areas of low and uneven brightness (Fig. 13) and reduces the center extraction error, especially in those areas (Tables 1 and 2). Finally, the proposed algorithm shows good anti-noise properties when noise is added to the experimental images (Table 4).

Conclusions This study proposes an internal propulsion algorithm for laser stripe center extraction. The experimental results show that the proposed algorithm has low complexity, fast running speed, good robustness, and high accuracy, and it retains an excellent extraction effect after noise points are added. In particular, the internal propulsion algorithm reduces the processing of image regions without a laser stripe and extracts the center well from non-robust laser stripes with areas of low and uneven brightness. Owing to these advantages, the proposed algorithm should prove valuable in many industrial applications, such as online product inspection and weld seam tracking, especially when high speed and good robustness are required under real working conditions where the reflectivity of the object surface is complex and unfavorable.

- Jun. 08, 2021
- Chinese Journal of Lasers
- Vol. 48, Issue 11, 1104002 (2021)
- DOI: 10.3788/CJL202148.1104002

Method for Compensating Thermal Deformation Error in Transfer Station of Variable Curvature Workpiece

Li Zhuyue, Hou Maosheng, Liu Zhichao, and Li Lijuan

Objective Large-size measurements are usually accomplished through multistation measurements. The accuracy of the instrument, environmental vibration, temperature change, and other factors severely affect the accuracy of the transfer station. To improve the accuracy of large-size measurements, many scholars have conducted extensive research on large-size measurement errors and proposed numerous compensation methods. However, most of these methods focus on reducing instrument-related measurement errors, and research on the effect of temperature on the measurement results is relatively rare. In fact, errors caused by temperature change severely affect the measurement accuracy. Additionally, the experimental steps of most temperature compensation methods are complicated, their models are difficult to solve, and their research objects are regular rectangular or wedge-shaped tooling structures; these disadvantages limit the application of such methods. No study has addressed the characteristics of the variable curvature workpiece, so existing compensation methods are strongly limited in improving the accuracy of large-size transfer stations. To ensure the manufacturing and assembly accuracy of large-size variable curvature components on the actual site and to shorten the large-scale measurement cycle, a new method for compensating the thermal deformation errors of large-scale transfer stations is proposed and numerous related experiments are designed. Experimental results show that the proposed method can effectively improve the transfer accuracy of large-scale measurements.

Methods The proposed method for compensating the thermal deformation errors of large-size transfer stations is a universal method. First, several temperature values and reference point coordinates are obtained using fiber Bragg grating temperature sensors and a laser tracker. Then, all measured coordinates are converted into the global coordinate system using the station alignment method, and the changes in temperature at different stations are calculated. Next, linear regression analysis is used to establish a mathematical model relating the thermal deformation of the reference points to the change in temperature from the recorded experimental data, and to calculate the thermal deformation coefficient matrix in the global coordinate system. ANSYS is then used to simulate the thermal deformation; comparison of the simulated thermal deformation offsets of the reference points with the measured offsets confirms that the two trends are consistent. Finally, the coordinates of the reference points in subsequent measurements are compensated using the established thermal deformation offset model: the compensated theoretical coordinates are aligned with the actual measured values for the transfer station, so that the transfer station error caused by thermal deformation is compensated during the measurement of the large-size variable curvature workpiece.

Results and Discussions To verify that the proposed method can improve the accuracy of the transfer station, multiple sets of uniform temperature field transfer station experiments are designed. According to the transfer results (Table 2), after the simulation thermal compensation, the maximum point error of the four transfers is reduced by 50%–65% compared with the uncompensated case, and the overall transfer accuracy is improved by 63%–76%. After the thermal deformation coefficient compensation, the maximum point error of the four transfers is reduced by more than 70% compared with the uncompensated case, and the overall transfer accuracy is increased by more than 80%. Subsequently, a nonuniform temperature field experiment is designed. According to the error table of the nonuniform temperature field transfer station (Table 4), the error formed by direct transfer is extremely large, with a total error of 1409 μm. After simulation compensation, the transfer station error is reduced by 68.87%; after thermal deformation coefficient compensation, it is reduced by 76.26%. These results can be observed more intuitively in the error map of the nonuniform temperature field transfer station (Fig. 9): after compensation, the transfer error of each point is considerably reduced. These experimental results show that the thermal deformation coefficient compensation considerably improves the accuracy of the transfer station and is significantly more efficient than the simulation compensation method.

Conclusions This work proposes a new method for compensating the thermal deformation errors of large-size transfer stations. Based on the experimentally measured temperatures and reference point coordinates, the mathematical relationship between temperature changes and reference point offsets is established; ANSYS is then applied to simulate the physical model of the component, and a mathematical model for thermal deformation error compensation is established. By comparing the experimental measurement data with the finite element simulation data, the consistency between the simulated and experimental reference point deformations is confirmed. Finally, according to the thermal compensation mathematical model, the reference point offset under temperature change is back-calculated, and the deviation of the reference point coordinates under the influence of temperature is compensated. Furthermore, an actual transfer station experiment is designed, and the proposed method is used to compensate the offsets of the reference point coordinates measured in a uniform temperature field, improving the accuracy of the transfer station by 81.28%; this verifies the effectiveness and superiority of the proposed temperature compensation method. Finally, the temperature of the experimental site is changed to simulate a large-size space assembly site, a nonuniform temperature field measurement experiment is conducted, and the thermal deformation offsets of the reference point coordinates at different temperatures are compensated using the proposed method. The accuracy of the transfer station is improved by 76.26%, considerably more than with simulation compensation. These findings confirm that the proposed method improves the accuracy of the transfer station and has important practical engineering significance for transfer station measurements of large-size variable curvature components.
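The regression and compensation steps in Methods can be sketched for a single coordinate axis as follows: a thermal deformation coefficient k is estimated by least squares from recorded (temperature change, reference point offset) pairs, then used to back-calculate and remove the offset at a new temperature. This is a minimal single-axis sketch with illustrative names; the paper's model is a full coefficient matrix over all reference points in the global coordinate system:

```python
def fit_thermal_coefficient(dt, dp):
    """Least-squares slope of reference point offset dp (mm) vs
    temperature change dt (K), for a through-origin model dp = k * dt
    (zero offset at zero temperature change)."""
    return sum(t * p for t, p in zip(dt, dp)) / sum(t * t for t in dt)

def compensate(coord, k, dt_now):
    """Remove the predicted thermal offset from a measured coordinate."""
    return coord - k * dt_now
```

The compensated coordinates would then be aligned with the measured values in the station transfer, as described above.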

Objective Large-size measurements are usually accomplished through multistation measurements. The accuracy of the instrument, environmental vibration, temperature change, and other factors severely affect the accuracy of the transfer station. To improve the accuracy of large-size measurements, many scholars have conducted extensive research on large-size measurement errors and proposed numerous compensation methods to achieve the accuracy of large-size measurements. However, most compensation methods compensate for the measurement errors by reducing the measurement errors. Moreover, research on the effect of temperature on the measurement results is relatively rare. In fact, the error caused by changes in temperature has severe impacts on the measurement accuracy. Additionally, the experimental steps of most temperature compensation methods are complicated, the model is difficult to solve, and the research object is a regular rectangular or wedge-shaped tooling structure; these disadvantages limit the application of such methods. No study has discussed the characteristics of the variable curvature workpiece, owing to which these compensations exhibit strong limitations in improving the accuracy of large-size transfer stations. To ensure the manufacturing and assembly accuracy of large-size variable curvature components on the actual site and decrease the large-scale measurement cycle, a new method for compensating thermal deformation errors of large-scale transfer stations is proposed and numerous related experiments are designed. Experimental results show that the proposed method can effectively improve the transfer accuracy of large-scale measurements.

Methods The new method for compensating thermal deformation errors of large-size transfer stations is a universal method. First, several temperature values and reference point coordinates are obtained using fiber Bragg grating temperature sensors and a laser tracker. Then, all the measured coordinates are converted into the global coordinate system using the station alignment method, and the temperature changes at different stations are calculated. Next, the linear regression analysis method is used to establish a mathematical model between the thermal deformation of the reference points and the change in temperature from these recorded experimental data, and to calculate the thermal deformation coefficient matrix in the global coordinate system. Then, ANSYS is used to simulate the thermal deformation. Comparing the simulated thermal deformation offsets of the reference points with the measured offsets confirms the consistency of the two trends. Finally, in subsequent measurements, the reference point coordinates are compensated using the established thermal deformation offset model: the compensated theoretical coordinates are aligned with the actual measured values for the transfer station, so that the transfer station error caused by thermal deformation is compensated during the transfer station measurement of large-size variable-curvature components.
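As a minimal sketch of the linear-regression step described above (all variable names and data values are illustrative assumptions, not the paper's measurements), the thermal deformation coefficients of one reference point can be fitted by least squares against the recorded temperature changes:

```python
import numpy as np

# Synthetic experiment log (illustrative values only):
# temperature change dT (K) and the measured x/y/z offset (mm)
# of one reference point at each temperature.
dT = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
offsets = np.array([[0.000, 0.000, 0.000],
                    [0.011, 0.006, 0.002],
                    [0.019, 0.012, 0.005],
                    [0.031, 0.017, 0.007],
                    [0.040, 0.024, 0.009],
                    [0.052, 0.029, 0.012]])

# Linear model per axis: offset = k * dT + b.
# lstsq fits all three axes at once; K is this reference point's row
# of the thermal deformation coefficient matrix.
A = np.stack([dT, np.ones_like(dT)], axis=1)   # design matrix [dT, 1]
(K, b), *_ = np.linalg.lstsq(A, offsets, rcond=None)

def compensate(measured_xyz, dT_now):
    """Subtract the predicted thermal offset from a measured coordinate."""
    return measured_xyz - (K * dT_now + b)
```

In the full method, one such coefficient row is fitted for every reference point, and the compensated coordinates are then used for station alignment.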

Results and Discussions To verify that the proposed method for compensating thermal deformation errors of the large-size transfer station can improve the accuracy of the transfer station, multiple sets of uniform temperature field transfer station experiments are designed. According to the transfer results (Table 2), after the simulation thermal compensation, the maximum point error of the four transfers is reduced by 50%--65% compared with the uncompensated case, and the overall transfer accuracy is improved by 63%--76%. After the thermal deformation coefficient compensation, the maximum point error of the four transfers is reduced by more than 70% compared with the uncompensated case, and the overall transfer accuracy is increased by more than 80%. Subsequently, a nonuniform temperature field experiment is designed. According to the error table of the nonuniform temperature field transfer station (Table 4), the error produced by direct station transfer is extremely large, with a total error of 1409 μm. After the simulation compensation, the transfer station error is reduced by 68.87%; after the thermal deformation coefficient compensation, it is reduced by 76.26%. These results can be observed more intuitively in the error map of the nonuniform temperature field transfer station (Fig. 9): after compensation, the transfer error of each point is considerably reduced. The above experimental results show that after the thermal deformation coefficient compensation, the accuracy of the transfer station is considerably improved, and its effectiveness is significantly higher than that of the simulation compensation method.

Conclusions This work proposes a new method for compensating thermal deformation errors of large-size transfer stations. Based on the experimentally measured temperatures and reference point coordinates, the mathematical relationship between temperature changes and reference point offsets is established; ANSYS is then used to simulate the physical model of the component, and a mathematical model for thermal deformation error compensation is established. By comparing the experimental measurement data with the finite element simulation data, the consistency between the simulated and experimental reference point deformations is confirmed. Finally, according to the thermal compensation mathematical model, the reference point offset under a temperature change is derived and the deviation of the reference point coordinates caused by temperature is compensated. Furthermore, an actual transfer station experiment is designed and the proposed method is used to compensate the offset of the reference point coordinates measured in a uniform temperature field, improving the accuracy of the transfer station by 81.28%; this verifies the effectiveness and superiority of the proposed temperature compensation method. Finally, the temperature of the experimental site is changed to simulate a large-size space assembly site, a nonuniform temperature field measurement experiment is conducted, and the thermal deformation offset of the reference point coordinates is compensated at different temperatures using the proposed method. The accuracy of the transfer station is improved by 76.26%, which is considerably higher than that of the simulation compensation. These findings confirm that the proposed method is beneficial for improving the accuracy of the transfer station and has important engineering significance for transfer station measurements of large-size variable-curvature components.

- May. 21, 2021
- Chinese Journal of Lasers
- Vol. 48, Issue 11, 1104003 (2021)
- DOI: 10.3788/CJL202148.1104003

Photo-Elastic Modulation Based on Adaptive Regulation of Driving Voltage

Liang Zhenkun, Li Xiao, Wang Zhibin, Li Kewu, and Zang Xiaoyang

Objective With the advantages of high modulation precision, high efficiency, and a wide spectral range, photo-elastic modulation technology has attracted considerable attention in many fields, such as optical communication, polarization analysis, and spectral analysis. A series of devices with a photo-elastic modulator as the core are widely used in material detection, scientific research, aerospace, and national defense construction. The basic principle of the photo-elastic modulator is to use a piezoelectric crystal to apply an external mechanical force that makes the photo-elastic crystal birefringent. It is a thermal-mechanical-electrical coupled resonant device consisting of a photo-elastic crystal and a piezoelectric crystal. In operation, the driving circuit provides a high-voltage sine-wave signal to the piezoelectric crystal in the vertical direction, making the piezoelectric crystal vibrate horizontally and causing the photo-elastic crystal to deform accordingly, thereby producing periodic birefringence that phase-modulates the incident light. The phase modulation amplitude is positively correlated with the voltage amplitude output by the drive circuit. As the driving voltage and phase modulation amplitudes increase, the piezoelectric quartz and photo-elastic crystal composing the photo-elastic modulator produce thermal losses during mechanical vibration. This shifts the resonance frequency, making it challenging to keep the optical path difference consistent for a long time, which reduces the modulator's stability and modulation efficiency. In addition, changes in the external environment also reduce the modulator's stability. Thus, stabilization technology for the photo-elastic modulator plays a significant role.

Methods In this study, starting from the structure of the photo-elastic modulator and its vibration model, the factors influencing the temperature drift of the photo-elastic modulator are analyzed. A conventional optical system with a photo-elastic modulator as the core is developed. Using the Stokes parameters, the Mueller matrix, and digital phase-locking technology, the relationship between the driving signal and the phase modulation amplitude is obtained, and the corresponding correlation function diagram is drawn. On this basis, a driving-voltage adaptive adjustment method based on a field-programmable gate array (FPGA) is proposed. Direct digital synthesis (DDS) technology is used to control the square-wave signal of the photo-elastic modulator. After being amplified by the LC resonant circuit, a high-voltage sine wave is generated to drive the photo-elastic modulator. The incident light is modulated by the photo-elastic modulator and received by the detector. The optical signal is converted into an electrical signal, which is transmitted to the FPGA through an analog-to-digital sampling module. Using digital phase-locked technology, the acquired signal is multiplied and accumulated with the quadruple-frequency reference signal stored in memory to obtain its quadruple-frequency correlation component, and similarly with the double-frequency reference signal to obtain the double-frequency correlation component. The relationship between the ratio of the quadruple-frequency component to the double-frequency component and the phase modulation amplitude is analyzed. The duty cycle is adjusted in real time according to this ratio in each cycle. If the ratio obtained in the next cycle is smaller than that in the previous cycle, the duty cycle is increased; otherwise, it is decreased.
Finally, the phase modulation amplitude is stabilized within a fixed range. In the system, two key parameters need to be determined in advance: the resonant frequency of the photo-elastic modulator and the unit adjustment step of the driving square-wave duty cycle. Since the external temperature and the thermal effect of the photo-elastic modulator both shift its resonant frequency, the resonant frequency must first be determined in the test environment. The key to choosing the unit duty-cycle step in the feedback control is as follows: to prevent excessive feedback regulation, the step must produce a phase-amplitude change smaller than the setting value of the phase modulation amplitude; however, it should be larger than the fluctuation of the phase amplitude caused by temperature drift, thereby preventing loss of control due to an insufficient adjustment range. At a room temperature of 25 ℃, the photo-elastic modulator was powered for 15 min, and the resonant frequency at the current temperature was determined by a sweep-frequency test. Two further tests were conducted to determine the conversion relationships among the duty cycle, driving voltage, and phase modulation amplitude, so as to determine the unit adjustment value of the duty cycle.
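The lock-in and feedback logic can be sketched as follows. This is an illustrative assumption, not the authors' FPGA implementation: for a modulator between crossed polarizers, the detected intensity contains only even harmonics of the modulation frequency, with amplitudes proportional to the Bessel functions J2(δ0) and J4(δ0) of the phase modulation amplitude δ0, so the 4f/2f ratio tracks δ0. The sketch also simplifies the control rule by comparing the measured ratio against a fixed setpoint rather than against the previous cycle, and stands in a hypothetical linear duty-cycle-to-δ0 map for the real LC-amplified drive chain:

```python
import numpy as np
from scipy.special import jv  # Bessel functions of the first kind

F_MOD = 50e3          # assumed modulator resonant frequency, Hz
FS = 256 * F_MOD      # sampling rate: 256 samples per modulation period
N = 256 * 64          # integrate over 64 full periods

t = np.arange(N) / FS
w = 2 * np.pi * F_MOD

def lockin_ratio(delta0):
    """Multiply-accumulate the detector signal with 2f and 4f references
    and return the 4f/2f component ratio, which equals J4(d0)/J2(d0)."""
    sig = np.cos(delta0 * np.sin(w * t))   # even-harmonic part of the intensity
    c2 = np.mean(sig * np.cos(2 * w * t))  # ~ J2(delta0)
    c4 = np.mean(sig * np.cos(4 * w * t))  # ~ J4(delta0)
    return c4 / c2

# Bang-bang duty-cycle feedback: drive delta0 toward the half-wave setpoint.
target = jv(4, np.pi) / jv(2, np.pi)       # ratio at delta0 = pi (half wave)
duty, step = 0.40, 0.005                   # initial duty cycle and unit step
for _ in range(60):
    r = lockin_ratio(6.0 * duty)           # assumed plant: delta0 = 6 * duty
    duty += step if r < target else -step  # move the ratio toward the target
```

After the loop, the duty cycle oscillates within one unit step of the half-wave operating point, mirroring how the unit step bounds the residual phase-amplitude fluctuation.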

Results and Discussions The stability of the system is tested experimentally. Compared with previous studies (Table 1), the accuracy is significantly improved. When the incident light wavelength is 632.8 nm, the phase modulation amplitude accuracy is 0.82% at the half-wave state and 0.44% at the quarter-wave state [Fig. 11(a) and Fig. 12]. The constant-temperature control method imposes more stringent requirements on the device and environment, and the frequency control method is not conducive to subsequent data processing. The voltage regulation method has none of these shortcomings, and its accuracy is higher than that of the above two methods.

Conclusions Based on the temperature drift model and digital phase-locked technology, stability control of the phase modulation amplitude of the photo-elastic modulator is realized. The experimental results show that the voltage self-regulation method has higher accuracy and a wider application range, and is more convenient for subsequent data processing, than the existing temperature control and frequency regulation methods. It has important theoretical significance for improving the accuracy and reliability evaluation of phase modulation amplitude control systems.

- May. 21, 2021
- Chinese Journal of Lasers
- Vol. 48, Issue 11, 1104001 (2021)
- DOI: 10.3788/CJL202148.1104001

Orientation Method of Distributed Measurement System Based on Hierarchical Geometric Constraints

Lin Jiarui, Yu Jizhu, Yang Linghui, Zhang Rao, and Zhu Jigui

Objective With the advantages of parallel multitasking and spatial expandability, large-scale distributed measurement systems are widely used in mechanical manufacturing. Based on the intersection-positioning mechanism of multisource observations, a large-scale distributed measurement system constructs an integrated measurement network. The positioning of the system determines the performance and applicability of the whole network, and the precision and quantity of orientation constraints are easily restricted by on-site conditions and the environment. Typical distributed measurement systems, represented by angle-intersection measurement systems, include multicamera vision measurement systems, theodolite measurement systems, and the workshop Measurement-Positioning System (wMPS). Taking the relative position and posture relationships among the measurement units as the optimization parameters, the objective function of the orientation process is established through redundant geometric constraints and the intersection relations between measurement units and the measured points, and the final orientation parameters are obtained by an optimization method. In conventional methods, geometric constraints are constructed by auxiliary equipment, which is disadvantageous in complex occlusion environments.

Methods To solve the under-constraint problem of distributed measurement systems, an orientation method based on angle and length constraints is presented, which combines a high-precision dual-axis inclinometer with the measurement units of the distributed measurement system. The proposed orientation method is deduced based on the wMPS. First, the general orientation principle of the distributed measurement system is established, and the proposed orientation method is developed from it. Second, a new orientation model is built from the horizontal constraints provided by the dual-axis inclinometer and the length constraints provided by the scale bar. A relative-posture orientation model is established according to the horizontal constraints of the inclinometer. The proposed method depends on the relative posture between the inclinometer and the wMPS transmitter, which can be obtained by precalibration. Using the internal angle constraints provided by the high-precision angle-measuring instrument, the orientation parameters are solved in multiple hierarchical stages and the number of length constraints required for orientation is decreased, which improves the orientation efficiency. Using the wMPS as the experimental platform, a combined system prototype is developed to investigate the influence of the number and distribution of orientation conditions on the orientation accuracy and the robustness of the orientation method.
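A hedged sketch of the orientation idea (the geometry, station poses, and all numeric values below are illustrative assumptions, not the paper's data): once roll and pitch are fixed by the inclinometer's horizontal constraint, each additional station retains only a yaw angle and a translation, which can be recovered by intersecting angle-only observations and enforcing the calibrated scale-bar length:

```python
import numpy as np
from scipy.optimize import least_squares

def rz(yaw):
    """Rotation about the vertical axis; roll/pitch are fixed to zero by the
    inclinometer constraint, leaving 4 unknowns per extra station."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

L_BAR = 1.0                                         # calibrated bar length, m
yaw_true, t_true = 0.3, np.array([8.0, 1.0, 0.2])   # ground-truth pose of station B

# Three placements of the scale bar (endpoint pairs, global frame, synthetic).
ends = np.array([[4.0, 2.0, 1.0], [5.0, 2.0, 1.0],
                 [5.0, -1.0, 0.5], [5.0, 0.0, 0.5],
                 [3.0, 0.5, 1.5], [3.0, 1.5, 1.5]])

def unit(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Angle-only observations: unit direction vectors to each endpoint,
# expressed in each station's own frame (station A defines the global frame).
d_a = unit(ends)                              # station A at the origin
d_b = unit((ends - t_true) @ rz(yaw_true))    # (p - t) rotated into B's frame

def midpoint(o1, d1, o2, d2):
    """Least-squares intersection of two rays, plus the inter-ray gap vector."""
    A = np.stack([d1, -d2], axis=1)
    s, u = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
    p1, p2 = o1 + s * d1, o2 + u * d2
    return (p1 + p2) / 2, p1 - p2

def residuals(x):
    yaw, t = x[0], x[1:]
    R = rz(yaw)
    pts, res = [], []
    for da, db in zip(d_a, d_b):
        p, gap = midpoint(np.zeros(3), da, t, R @ db)
        pts.append(p)
        res.extend(gap)                       # corresponding rays must intersect
    for i in range(0, len(pts), 2):           # bar length fixes the scale
        res.append(np.linalg.norm(pts[i] - pts[i + 1]) - L_BAR)
    return res

x0 = np.array([0.0, 7.0, 0.0, 0.0])           # rough initial pose guess
sol = least_squares(residuals, x0)
yaw_est, t_est = sol.x[0], sol.x[1:]
```

Without the horizontal constraint, roll and pitch would also have to be estimated, which is one way to see why the conventional method needs more scale-bar placements to stabilize.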

Results and Discussions To analyze the influence of the inclinometer measuring error on the orientation method, simulations are performed. In the simulation, a space of 10 m×2 m×2 m in front of the wMPS transmitter is used as the orientation and measurement space, and the inclinometer measuring error is set to 2″. The resulting point measuring error is less than 0.1 mm (Fig. 6), and the length measuring error is less than 0.2 mm (Fig. 7). A series of experiments is then conducted. The angle resolution of the high-precision dual-axis inclinometer used in the experiment is 0.0001°, and its angle measurement accuracy is better than 2″. The experimental site consists of several wMPS transmitters with built-in inclinometers and a scale bar. The two ends of the scale bar are compatible with a spherically mounted retroreflector and a wMPS receiver of the same size, and the length of the scale bar is calibrated by a laser tracker. To analyze the influence of the horizontal constraints on the orientation result, the number of length constraints used in orientation is gradually increased from 2 to 12, and the orientation methods with and without horizontal constraints are used to implement the orientation process, respectively. The same scale bar is then measured 10 times at different positions in the measurement space to verify the accuracy, with the average length measuring error used as the evaluation index. Using two positions for orientation, the length measuring error of the conventional method (without horizontal constraints) is greater than 9 mm and unstable, whereas that of the proposed method is 0.56 mm, which is consistent with the conventional method using five positions. As the number of length constraints increases, the length measuring errors of the two methods decrease and tend to be stable.
When the number of length constraints exceeds eight, the length measuring error of the conventional method approaches that of the proposed method, both being less than 0.2 mm (Fig. 9). To verify the robustness of the proposed method, the number of length constraints used in the orientation is kept at 2 while the placement of the scale bar is changed, and the orientation experiment based on the proposed method is repeated. After orientation, the same scale bar is measured to test the accuracy of the system. The standard deviation of the length measuring error over 10 positions is better than 0.3 mm (Fig. 10), indicating the robustness of the orientation method.

Conclusions In this study, an orientation method of a distributed measurement system based on hierarchical geometric constraints is investigated. Using the horizontal geometric constraints and length constraints, the number of constraints required by the orientation model is effectively decreased. The effectiveness and adaptability of the proposed method are verified using the wMPS as an experimental platform. In complex environments where orientation conditions are limited, the proposed method can meet the measurement needs of the industrial field and has promising application prospects.

- May. 17, 2021
- Chinese Journal of Lasers
- Vol. 48, Issue 9, 0904001 (2021)
- DOI: 10.3788/CJL202148.0904001

Dual-Hole Point Diffraction Interferometer for Measuring the Wavefront Aberration of an Imaging System

Feng Peng, Tang Feng, Wang Xiangzhao, Lu Yunjun, Xu Jinghao, Guo Fudong, and Zhang Guoxian

Objective Wavefront aberration describes the properties of a small-aberration imaging optical system. In a high-quality microscope objective lens or space telescope, the wavefront error should be within λ/14 RMS (where λ is the operational wavelength and RMS denotes the root mean square value). To meet the required wavefront quality of the optical systems for extreme ultraviolet lithography, the error must be less than 0.45 nm RMS. Therefore, wavefront measurements are in high demand. At present, wavefronts are typically measured by Shack-Hartmann sensors, Fizeau interferometers, Twyman-Green interferometers, shearing interferometry, or point-diffraction interferometry. The Shack-Hartmann sensor covers a large measurement range and can quickly measure the wavefront, but with lower resolution than interferometry. The Fizeau and Twyman-Green interferometers cannot measure to higher accuracy than their standard lenses and cannot be installed in systems with limited space. In the present study, we report a phase-shifting point-diffraction interferometer with several advantages: high optical field uniformity, a high measurable numerical aperture, and a quasi-common optical path. The optical signals are transmitted through single-mode fibers, which improves the flexibility of the interferometer system. Our results are anticipated to assist wavefront-aberration detection in high-precision photolithographic projection lenses.

Methods We developed a dual-hole point diffraction interferometer (DHPDI) based on a dual-fiber optical path. First, we designed the measuring principle of the interferometer. The interferometer uses a diode-pumped solid-state laser with multiple longitudinal modes; the operating wavelength is 532 nm and the coherence length is several centimeters. The two laser beams form a quasi-common-path interferometer structure.
The intensities of the beams are controlled by interference arms connected with adjustable attenuators, one of which is connected to a phase shifter. The two single-mode optical fibers of the object surface output two beams of coherent light. The end faces are imaged by the lens at two pinholes of the object surface mask, and are filtered by the pinholes to form two standard spherical-wavefront illumination imaging systems. One wavefront becomes the measurement wavefront and the other becomes the reference wavefront through the imaging system to be measured. The two beams overlap and produce an interference pattern at a charge-coupled device camera. The wavefront phase map is measured using a phase-shifting method. In the experiments, a DHPDI and a dual-fiber point-diffraction interferometer (DFPDI) were set up to detect the same projection objective lens. The experimental results were analyzed and the measurement results of both interferometers were compared to verify the effectiveness of the DHPDI.Results and Discussions This paper proposes our DHPDI for measuring wavefront aberrations of imaging systems. Its advantages are high optical field uniformity, a high measurable numerical aperture, a quasi-common optical path, and a phase-shift element besides the imaging optical path of the system (Fig. 1). The DHPDI is designed with two measurement modes: point-diffraction measurement mode and system-errors measurement mode (Fig. 2). In point-diffraction mode, the DHPDI measures the geometric optical path error and detector tilt error of the test light and point-diffraction light, which mainly appear as coma aberration and astigmatism, respectively. These geometric optical path differences can be quickly and conveniently calibrated in system-errors mode. Both measurement modes can be used together for high-precision detection of wave aberrations in the imaging system. 
We constructed a DHPDI system that measures the wavefront aberration of a 5× demagnification projection objective lens with a numerical aperture of 0.3, and supplied it with a 532 nm laser (see Methods for laser details). The DHPDI was verified in experiments (Figs. 3 and 4), and its results were compared with those of the DFPDI (Figs. 5--7). The experimental results confirmed the theoretical deviation. When detecting the wavefront aberrations of the same projection objective lens, both measurement methods gave nearly consistent wavefront distributions, with a relative error of 0.07 nm RMS.Conclusions We have demonstrated an advanced DHPDI. With a pinhole diameter of 700 nm, the deviation of the diffracted wavefront from spherical meets the requirements of wave aberration detection in high-precision imaging systems. The optical signals are transmitted through single-mode fibers, enabling a flexible interferometer system. The DHPDI also allows convenient adjustment of the interference contrast and phase-shifting outside the imaging optical path.We then constructed DHPDI and DFPDI systems for measuring the wavefront aberration of a 5× projection objective lens with a numerical aperture of 0.3. In both modes, the contrast in the interferogram exceeded 65%. Moreover, the intensity uniformity of the interferogram in DHPDI was approximately twice that in DFPDI. Such uniform intensity can improve the accuracy of pupil-edge detection. The relative error of the wavefront distribution of the two detection results is less than 0.1 nm RMS , and the theoretical deviation was verified in the experiments.
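The abstract states that the wavefront phase map is measured with a phase-shifting method but does not name the algorithm; a minimal sketch of the standard four-step variant (an assumption here, not necessarily the authors' scheme) recovers the wrapped phase from four interferograms shifted by 90° each:

```python
import numpy as np

def four_step_phase(i0, i90, i180, i270):
    """Recover the wrapped phase from four interferograms recorded at
    phase shifts of 0, 90, 180, and 270 degrees."""
    return np.arctan2(i270 - i90, i0 - i180)

# Synthetic demonstration: a known phase map and unit-contrast fringes.
y, x = np.mgrid[-1:1:128j, -1:1:128j]
phi = 0.5 * (x**2 + y**2)  # a defocus-like phase map (radians)
frames = [1 + np.cos(phi + s) for s in (0, np.pi/2, np.pi, 3*np.pi/2)]
recovered = four_step_phase(*frames)
assert np.allclose(recovered, phi, atol=1e-9)
```

In a real interferometer the recovered map is wrapped to (-π, π] and would still need unwrapping and Zernike fitting before quoting an RMS aberration.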

- May. 06, 2021
- Chinese Journal of Lasers
- Vol. 48, Issue 9, 0904002 (2021)
- DOI: 10.3788/CJL202148.0904002

Ultrashort Pulse Reconstruction from Non-Square FROG Traces in Different Geometric Schemes

Mao Anjun, and Liu Chengpu

Objective Research on the electric field of ultrashort laser pulses has wide application prospects. Ultrashort femtosecond laser pulses are used in many scientific and engineering domains, such as ultrafast spectroscopy, quantum coherent modulation, and ultra-intense laser physics. A typical method of measuring an ultrashort laser pulse is frequency-resolved optical gating (FROG). In FROG, we first split the pulse into two replicas and apply a time delay to one of them. Next, we let them interact in a nonlinear process to generate a signal, whose spectral intensity we measure with a spectrometer. Finally, we scan the time delay to obtain the spectral intensities of a set of nonlinear signals, which constitute a spectrogram named the FROG trace. A phase-retrieval algorithm is always required to reconstruct the original laser pulse because only intensities are recorded in the FROG trace. As an efficient phase-retrieval algorithm, the principal component generalized projections algorithm (PCGPA) is widely used for ultrashort pulse measurement. However, PCGPA brings several practical problems to the FROG measuring process because it requires the trace to be square and its frequency-axis and delay-axis coordinates to be coupled by the fast Fourier transform. Depending on the nonlinear process used, FROG can be realized in different geometric schemes, for example, second-harmonic generation (SHG), polarization gate (PG), and cross-phase modulation (XPM). In this study, we aim to (i) build a non-square FROG trace by taking the features of the measuring devices fully into account to realize successful pulse reconstruction while avoiding the drawbacks of PCGPA and (ii) compare the retrieval results in different geometries to select a more efficient and practical one.

Methods First, we build the non-square trace by applying the following three transformations to the 256×256 square FROG trace (Fig. 1): (i) low-pass filtering along the frequency axis (F_LPF), (ii) up-sampling along the frequency axis (F_US), and (iii) down-sampling along the delay axis (D_DS). To identify the relevant non-square traces, we also use these three transformations. Then, we numerically generate 200 pulses whose envelopes are superpositions of 2--6 Gaussian functions and whose phases are composed of chirps, quadratic chirps, and self-phase modulations. We also obtain few-cycle experimental pulses for comparison. Next, we use the ptychography algorithm, originally developed for coherent diffraction imaging, to retrieve the FROG trace phase and the corresponding pulse field. The ptychography algorithm accommodates non-square traces without further adjustment because it uses only one column of the trace in each iteration. To evaluate the results, we use the intersection angle θ between the original and reconstructed pulses in multi-dimensional space, which is considered acceptable when it is less than 0.1. Finally, we realize pulse reconstruction from non-square FROG traces in the different geometric schemes (i.e., the SHG, PG, and XPM schemes) and compare the results.

Results and Discussions For SHG FROG (Table 1), pulse reconstruction can be realized from the trace after F_LPF, which should be regarded as super-resolution reconstruction because of the absence of high-frequency information, and applying F_US can improve the reconstruction. Although the results are slightly worse after applying D_DS, θ is still less than 0.1 when only 14 delay points are left. For the PG and XPM FROG traces (Tables 2 and 3, respectively), the effects of the three transformations on the retrieval result are similar to those for the SHG FROG trace; however, the minimum number of delays for successful reconstruction is only 12 and 8, respectively. The ptychography algorithm can reconstruct both the numerically generated pulses and the few-cycle experimental pulse from non-square FROG traces (Figs. 2 and 3). Among the different geometries, XPM FROG can be deemed the most practical because it requires the fewest delays. The SHG FROG trace is symmetrical; thus, only half of the delays in a trace are effective, so more delay steps are required to make the algorithm converge. The nonlinear signal in PG FROG is proportional to the product of the original pulse field and the delayed pulse field, so it vanishes and is useless when the time delay is longer than the pulse duration. Under the same conditions, the nonlinear signal in the XPM geometry is the original pulse, because the delayed pulse replica only modulates the phase of the original pulse. In short, the XPM FROG trace contains more effective information than the other two types of trace; therefore, fewer time delays are needed for efficient pulse reconstruction.

Conclusions In this study, we build non-square traces by applying three transformations to the corresponding square trace: (i) F_LPF to relax the requirement of a large phase-matching bandwidth, (ii) F_US to utilize the high resolution of the spectrometer, and (iii) D_DS to reduce the measuring time. The ptychography algorithm can reconstruct simulated and few-cycle experimental pulses from the non-square traces in SHG, PG, and XPM FROG. After applying F_LPF and F_US to the XPM FROG trace, only eight delays are sufficient to retrieve the pulses successfully. Changing the delay is the most time-consuming step in FROG; hence, reducing the number of delays will help FROG realize real-time measurement of ultrashort pulses.
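The abstract uses the intersection angle θ as its error metric but does not write it out; a common way to define it (an assumption here, not necessarily the authors' exact formula) treats the complex pulse fields as vectors and takes the angle between them, with the modulus of the inner product discarding the arbitrary global phase of a retrieved pulse:

```python
import numpy as np

def intersection_angle(e_ref, e_rec):
    """Angle between two complex pulse fields viewed as vectors in
    multi-dimensional space; small values indicate good reconstruction.
    The modulus of the inner product removes the global-phase ambiguity."""
    inner = np.abs(np.vdot(e_ref, e_rec))
    cos_theta = inner / (np.linalg.norm(e_ref) * np.linalg.norm(e_rec))
    return np.arccos(np.clip(cos_theta, 0.0, 1.0))  # clip guards round-off

t = np.linspace(-5, 5, 512)
pulse = np.exp(-t**2) * np.exp(1j * 0.3 * t**2)  # chirped Gaussian pulse
same = pulse * np.exp(1j * 1.0)                  # differs by global phase only
assert intersection_angle(pulse, same) < 1e-3    # indistinguishable pulses
assert intersection_angle(pulse, np.exp(-(t - 1)**2)) > 0.1  # distinct pulses
```

Under this definition, the paper's acceptance criterion θ < 0.1 corresponds to a normalized field overlap above cos(0.1) ≈ 0.995.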

- Apr. 21, 2021
- Chinese Journal of Lasers
- Vol. 48, Issue 7, 0704004 (2021)
- DOI: 10.3788/CJL202148.0704004

Quality Factor Enhancement Technology of Laser Doppler Signal Based on Liquid Lens

Xi Chongbin, Huang Rong, Zhou Jian, and Nie Xiaoming

Objective A laser Doppler velocimeter (LDV) obtains the moving velocity of a carrier by gauging the interference signal when the signal light is mixed with a reference. As a novel speed sensor, the LDV possesses several advantages: non-contact measurement, no interference with the target, and high speed-measurement accuracy. However, when measuring the velocity of a solid surface, an LDV can measure the speed only within a limited range. When the moving surface is beyond the measurable range of the LDV, the intensity of the scattered light decreases, and the quality factor of the Doppler signal is reduced. A Doppler signal is validated by its quality factor Q, which directly determines the working distance and measurable range of the LDV. When the quality factor is below the threshold, the carrier velocity cannot be determined from the Doppler signal. To meet the requirements of the measurable range, the quality factor is traditionally enhanced by two lenses with fixed focal lengths, which change the position of the waist spot of the outgoing Gaussian beam. However, this method increases the distance between the lenses and excessively expands the LDV volume. Meanwhile, the measurement scope remains limited and cannot adapt to actual situations. To change the measurable range of the LDV, one must either change the distance between the lenses or reform the lens combination. Mechanically changing the lens distance increases the volume and system complexity, largely restricting the operating range of the speedometer. In addition, the lens combination cannot be changed at any time in practical engineering applications, and no other reasonable method can expand the measuring range. Herein we present a beam transformation system based on a liquid lens. The waist-spot position of the Gaussian beam is controlled by changing the driving current, enhancing the quality factor above the threshold over a considerable range. Our design greatly improves the working distance and measurable range of the LDV. We hope that our basic strategy and findings will benefit the speed measurement and navigation ability of the carrier.

Methods This paper combines theoretical analysis and simulation with experimental verification. In the theoretical analysis, we first evaluated the feasibility of transforming the LDV's Gaussian beam through a liquid lens. Based on Gaussian optics, the position and size of the waist spot were simulated under different driving currents of an electrically tunable lens (ETL). We then constructed an LDV with the ETL and changed the position of its waist spot by changing the driving current, without adding a displacement mechanism. Throughout the experiment, we determined the relationship between the quality factor of a single point and the driving current, as well as the working distance and measuring range of the LDV for different offset lenses.

Results and Discussions The presented method improved the working distance and measurable range of the LDV. Owing to the fast response time of the liquid lens (on the order of milliseconds) (Table 1), the driving current can be controlled by a feedback signal, achieving real-time adjustment of the liquid lens (Fig. 5). In the new LDV structure, the maximum quality factor of a single measuring point reaches 3482, 22.9 times that of a traditional speedometer (Fig. 9). When the F_offset = -25.4 mm offset lens was selected, the working distance of the LDV was changed to the maximum extent, with a measuring range of 0.7--3.3 m, and the system volume was reduced at the same time (Fig. 10, Table 2).

Conclusions This paper proposes a novel LDV scheme based on a liquid lens. In this design, the waist-spot position of the Gaussian beam can be moved and the working distance of the LDV can be changed simply by controlling the driving current, without adding a displacement mechanism. Therefore, the quality factor of the Doppler signal is greatly improved. The quality factor of a single measuring point is maximized at 3482, 22.9 times that of a traditional speedometer. The new structure improves the measuring range of the LDV to 0.7--3.3 m, 4.3 times that of the traditional structure (1.2--1.8 m), while reducing the volume of the speed-measurement system. These improvements will greatly expand the engineering applications of LDVs.
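The waist-spot motion exploited above follows from standard Gaussian optics. A sketch using the usual thin-lens waist-transformation equations shows how sweeping the ETL focal length (which the driving current controls) moves the waist position; the wavelength, input waist, and focal-length values below are illustrative assumptions, not the paper's parameters:

```python
import math

def waist_after_lens(w0, s, f, wavelength):
    """Transform a Gaussian beam waist through a thin lens.

    w0: input waist radius (m), located a distance s (m) before the lens;
    f: lens focal length (m). Returns (output waist radius, its distance
    after the lens), per the standard Gaussian-optics waist equations."""
    zr = math.pi * w0**2 / wavelength            # input Rayleigh range
    denom = (s - f)**2 + zr**2
    m = f / math.sqrt(denom)                     # waist magnification
    s_out = f + (s - f) * f**2 / denom           # output waist position
    return m * w0, s_out

# Illustrative sweep of hypothetical ETL focal lengths.
for f in (0.05, 0.10, 0.20):
    w, d = waist_after_lens(w0=0.5e-3, s=0.15, f=f, wavelength=650e-9)
    print(f"f = {f*100:4.0f} cm -> waist {w*1e6:6.1f} um at {d*100:5.1f} cm")
```

Because only the focal length changes, the waist (and hence the region of high Doppler signal quality) is repositioned with no moving parts, which is the core of the proposed scheme.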

- Mar. 31, 2021
- Chinese Journal of Lasers
- Vol. 48, Issue 7, 0704003 (2021)
- DOI: 10.3788/CJL202148.0704003

Single Calibration Technique of Autocorrelator Based on a Flat Crystal

Li Xutong, Ouyang Xiaoping, Zhang Xuejie, Li Zhan, Pan Liangze, Xu Yingming, Yang Lin, Zhu Baoqiang, Zhu Jian, and Zhu Jianqiang

Objective Ultrashort laser pulses have become an important tool for studying the interaction between lasers and matter and have important applications in the fields of biomedicine, high-energy physics research, and communications. The pulse width is an important parameter of the temporal characteristics of ultrashort laser pulses. For picosecond and femtosecond laser pulses, the pulse width is often measured by an autocorrelator, whose time resolution must be accurately calibrated before the measurement. Traditional calibration schemes such as the moving optical path retarder give accurate calibration results but cannot be completed in a single measurement. Conversely, the discrimination rate board method can be completed in a single measurement, but the accuracy of its calibration result is poor. This study proposes a new method for calibrating the time resolution of the autocorrelator. A flat crystal that can produce a specific time delay is designed, manufactured, and placed in the optical path during calibration. The time resolution can be obtained through a single calibration, and the measurement results are accurate and reliable.

Methods When the pulse to be measured passes through the designed and manufactured flat crystal, the crystal generates a double pulse with a fixed time interval T (ignoring high-order reflections). When the double pulses meet in the autocorrelation crystal, the generated autocorrelation signal has a three-peak structure, i.e., a weaker secondary peak appears at equal distances on the left and right sides of the primary peak. The time interval between the main peak and the secondary peak is denoted as T. The time resolution of the autocorrelator can be obtained by counting the number of pixels between the primary and secondary peaks. The influence of the thickness, refractive index, and angle of the flat crystal on the calibration result was then analyzed, and the time resolution and the relative expanded uncertainty of the autocorrelator were calculated. Finally, the autocorrelator was calibrated using two other calibration methods. The measurement results of the time resolution and the relative expanded uncertainty are given, and the advantages and disadvantages of the three schemes are compared.

Results and Discussions After the flat crystal is placed in the optical path, its placement angle, thickness, and refractive index affect the calibration accuracy of the time resolution. Figure 4 shows the deviation caused by the placement angle of the flat crystal. The deflection-angle deviation only slightly affects the time resolution: a resolution error of 2% requires a deflection angle of 16.6°. The measurement error primarily comes from the deviations in the thickness h of the flat crystal and in the reading. Figure 5(a) depicts the autocorrelation signal collected on the CCD. Obvious secondary peaks can be found on both sides of the main peak, which is consistent with the theoretical analysis. In this experiment, a flat crystal with thickness h = 1.02 mm and refractive index n = 1.450 was used. The time interval between the primary and secondary peaks is 9.86 ps, the time resolution of the autocorrelator is 217.88 fs/pixel, and the relative expanded uncertainty is 1.50%. Table 3 shows the time resolution and the expanded uncertainty obtained by the three calibration methods. The results of the time-resolution calibration using a flat crystal are accurate and reliable.

Conclusions This study proposes a new method for calibrating the time resolution of an ultrashort pulse measuring device based on a flat crystal. By placing a special flat crystal in front of the ultrashort pulse measuring device, a double pulse with a time interval T is generated after the pulse to be measured passes through. The generated autocorrelation signal exhibits weaker sub-peaks on both sides of the primary peak. As shown in Fig. 2, the time interval of the sub-peaks, 2T, can be obtained from the refractive index and the thickness of the flat crystal. From the pixel distance between the sub-peaks, the time resolution of the picosecond autocorrelator can be ascertained in a single measurement, and the pulse width can be calculated at the same time. Subsequently, experiments were performed on a femtosecond laser with a pulse width of 180 fs. Figure 5 illustrates the autocorrelation signal collected by the CCD in the experiment, which is consistent with the theoretical prediction. Using this method to calibrate the time resolution of the autocorrelator yields 217.88 fs/pixel. Compared with the calibration result of 214.27 fs/pixel obtained by the moving optical path retarder method, the relative error is only 1.68%. The relative expanded uncertainty of the calibration result using this method is 1.50%, far better than the 6.96% of the discrimination rate board method. A single calibration of the autocorrelator is thus realized, and the calibration result is accurate and reliable.
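At normal incidence, the delay between the directly transmitted pulse and the replica that has made one internal round trip in the flat crystal is T = 2nh/c; plugging in the reported h = 1.02 mm and n = 1.450 reproduces the quoted 9.86 ps interval within rounding. A short sketch (the pixel separation below is a hypothetical value for illustration, not a figure from the paper):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def flat_crystal_delay(n, h):
    """Delay between the direct pulse and the once-round-trip replica
    produced by a flat crystal of refractive index n and thickness h (m),
    at normal incidence: T = 2*n*h/c."""
    return 2.0 * n * h / C

def time_resolution(delay_s, pixel_separation):
    """Autocorrelator time resolution (s/pixel) from a known delay and
    the measured pixel distance between the corresponding peaks."""
    return delay_s / pixel_separation

T = flat_crystal_delay(n=1.450, h=1.02e-3)
print(f"T = {T * 1e12:.2f} ps")  # ~9.87 ps, matching 9.86 ps within rounding
# Hypothetical pixel separation, for illustration only:
print(f"{time_resolution(T, 45) * 1e15:.1f} fs/pixel")
```

This is also why the thickness deviation dominates the uncertainty budget: T is linear in h, so any thickness error propagates directly into the calibrated resolution.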

Objective Ultrashort laser pulses have become an important tool for studying the interaction between lasers and matter and have important applications in biomedicine, high-energy physics, and communications. The pulse width is a key parameter of the temporal characteristics of ultrashort laser pulses. For picosecond and femtosecond laser pulses, the pulse width is usually measured with an autocorrelator, whose time resolution must be accurately calibrated before measurement. Traditional calibration schemes such as the moving optical path retarder give accurate results but cannot complete the calibration in a single measurement. Conversely, the discrimination rate board method calibrates in a single measurement, but the accuracy of its result is poor. This study proposes a new method for calibrating the time resolution of the autocorrelator: a flat crystal that produces a specific time delay is designed, manufactured, and placed in the optical path during calibration. The time resolution is obtained through a single calibration, and the measurement results are accurate and reliable.

Methods The designed and manufactured flat crystal with a specific time delay generates a double pulse with a fixed time interval *T* (ignoring high-order reflections) when the pulse to be measured passes through it. When the double pulse meets in the autocorrelation crystal, the generated autocorrelation signal has a three-peak structure: a weaker secondary peak appears at equal distances on the left and right sides of the primary peak. The time interval between the main peak and each secondary peak is *T*, and the time resolution of the autocorrelator is obtained by counting the number of pixels between the primary and secondary peaks. The influence of the thickness, refractive index, and placement angle of the flat crystal on the calibration result is then analyzed, and the time resolution and relative expanded uncertainty of the autocorrelator are calculated. Finally, the autocorrelator is calibrated using two other calibration methods, the measured time resolutions and relative expanded uncertainties are given, and the advantages and disadvantages of the three schemes are compared.
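As a minimal numerical sketch of the relationship just described, the double-pulse interval can be modeled as one internal round trip through the crystal at normal incidence, *T* = 2*nh*/*c* (an assumed simplification; high-order reflections are ignored). With the paper's crystal parameters this reproduces the reported interval to within 0.1%:

```python
# Sketch of the delay / time-resolution calculation (assumed model T = 2nh/c).
C = 299_792_458.0  # speed of light, m/s

def double_pulse_interval(n: float, h: float) -> float:
    """Time interval T of the double pulse from one internal round trip
    through a flat crystal of refractive index n and thickness h (meters)."""
    return 2.0 * n * h / C

def time_resolution(T: float, n_pixels: float) -> float:
    """Time resolution (s/pixel) from the pixel distance between
    the primary peak and a secondary peak on the CCD."""
    return T / n_pixels

# Paper values: n = 1.450, h = 1.02 mm
T = double_pulse_interval(n=1.450, h=1.02e-3)  # ≈ 9.87 ps, vs. 9.86 ps reported
```

Dividing this *T* by the measured pixel separation then gives the calibration in fs/pixel.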

Results and Discussions After the flat crystal is placed in the optical path, its placement angle, thickness, and refractive index affect the calibration accuracy of the time resolution. Figure 4 shows the deviation caused by the placement angle of the flat crystal: the deflection angle only slightly affects the time resolution, and a deflection of 16.6° is required before the resolution error reaches 2%. The measurement error therefore comes primarily from the deviation in the thickness *h* of the flat crystal and from the reading. Figure 5(a) depicts the autocorrelation signal collected on the CCD; obvious secondary peaks appear on both sides of the main peak, consistent with the theoretical analysis. In this experiment, the flat crystal had thickness *h* = 1.02 mm and refractive index *n* = 1.450. The time interval between the primary and secondary peaks is 9.86 ps, the time resolution of the autocorrelator is 217.88 fs/pixel, and the relative expanded uncertainty is 1.50%. Table 3 compares the time resolution and expanded uncertainty obtained by the three calibration methods; the time resolution calibrated using the flat crystal is accurate and reliable.
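The quoted 16.6° tolerance can be cross-checked with a short numerical sketch. The model below is an assumption (not spelled out in the abstract): the delay scales with the cosine of the internal refraction angle, with the refraction angle given by Snell's law:

```python
import math

N_CRYSTAL = 1.450  # refractive index of the flat crystal (paper value)

def delay_error(theta_deg: float, n: float = N_CRYSTAL) -> float:
    """Relative change of the double-pulse interval when the crystal is
    tilted by theta_deg, assuming T(theta) ∝ cos(theta_r) with
    sin(theta_r) = sin(theta)/n (Snell's law)."""
    theta_r = math.asin(math.sin(math.radians(theta_deg)) / n)
    return abs(math.cos(theta_r) - 1.0)

# delay_error(16.6) ≈ 0.02, matching the statement that a 2% resolution
# error requires a 16.6° deflection; small tilts are negligible.
```

This also makes the paper's conclusion plausible that the dominant error source is the thickness deviation rather than alignment.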

Conclusions This study proposes a new method for calibrating the time resolution of an ultrashort pulse measuring device based on a flat crystal. By placing a special flat crystal in front of the device, a double pulse with time interval *T* is generated when the pulse to be measured passes through it, and the resulting autocorrelation signal exhibits weaker sub-peaks on both sides of the primary peak. As shown in Fig. 2, the time interval between the two sub-peaks, 2*T*, can be obtained from the refractive index and thickness of the flat crystal; the number of pixels between the sub-peaks then yields the time resolution of the picosecond autocorrelator in a single measurement, and the pulse width can be calculated at the same time. Experiments are performed on a femtosecond laser with a pulse width of 180 fs. Figure 5 illustrates the autocorrelation signal collected by the CCD, which is consistent with the theoretical prediction. Calibrating the time resolution of the autocorrelator with this method yields 217.88 fs/pixel; compared with the 214.27 fs/pixel obtained by the moving optical path retarder method, the relative error is only 1.68%. The relative expanded uncertainty of this method is 1.50%, far better than the 6.96% of the discrimination rate board method, and a single calibration of the autocorrelator is realized. In conclusion, the calibration result is accurate and reliable.
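The 1.68% figure above is plain relative-error arithmetic on the two quoted calibrations, which can be verified directly:

```python
# Cross-check of the comparison quoted above (both values from the paper).
flat_crystal_res = 217.88  # fs/pixel, flat-crystal method (this work)
retarder_res = 214.27      # fs/pixel, moving optical path retarder method

relative_error = abs(flat_crystal_res - retarder_res) / retarder_res
# relative_error ≈ 0.0168, i.e. the 1.68% stated in the text
```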

- Mar. 29, 2021
- Chinese Journal of Lasers
- Vol. 48, Issue 7, 0704002 (2021)
- DOI: 10.3788/CJL202148.0704002

Mach-Zehnder-Based Spatial-Phase-Shift Double-Imaging System with Large Field of View

Sun Fangyuan, Wu Shuangle, Xie Haotian, Yan Peizheng, Zhao Qihan, and Wang Yonghong

Objective Composite materials have the advantages of high specific strength, high specific modulus, and fatigue resistance and have been widely used in aerospace, ships, vehicles, and other fields. However, the properties of composites are easily degraded by internal defects. Shearography offers full-field measurement, high sensitivity, resistance to environmental interference, and no special requirements on material type. Compared with temporal-phase-shift-based shearography, spatial-phase-shift-based shearography has a fast detection speed and is suitable for real-time detection. Introducing a spatial carrier frequency is the most commonly used spatial-phase-shift method. However, in the Michelson-based spatial-phase-shift system, the shear amount and spatial phase shift are both controlled by a rotating mirror: obtaining a separated spectral diagram requires a large shear amount, which reduces the effective measurement area and leads to excessive sensitivity. In the Mach-Zehnder-based spatial-phase-shift double-imaging system proposed by Gao et al., the shear amount and spatial phase shift can be controlled independently. The imaging lens is placed after the Mach-Zehnder shear part, so the detection area of the system is limited by the size of the first beam-splitter prism in the shear part. When the distance between the shearography system and the detected material is fixed, the field of view of the system is usually small and fixed, which makes the detection efficiency low. To solve this problem, this paper introduces an improved Mach-Zehnder-based spatial-phase-shift double-imaging system that retains the independent adjustment of shear amount and spatial carrier frequency, enlarges the field of view, and improves the detection efficiency.

Methods A solid-state laser with a wavelength of 532 nm is expanded by a beam expander and irradiates the surface of a rough material. The speckle produced by the rough surface is reflected and imaged by an imaging lens on its focal plane. The focal point of lens 1 coincides with that of the imaging lens, so after passing through lens 1 the light reflected by the rough surface becomes parallel. After passing beam-splitter 1, the two light beams are reflected by mirror 1 and mirror 2, respectively, where mirror 2 is used to introduce shear. The spatial carrier frequency is introduced into the two beams as they pass through apertures 1 and 2, which are displaced from each other in the spatial position. The two beams converge via lens 2' and lens 2″, and after passing through beam-splitter 2 they interfere on the CCD target, producing the speckle pattern. In a given system, in which the laser wavelength and the distance between the apertures and the CCD camera are fixed, the spatial carrier frequency is determined only by the spatial positions of the two apertures. After the Fourier transform of the image obtained by the CCD camera, the spectrum carrying the phase information can be separated. The inverse Fourier transform is applied to this spectrum, and the deformation distribution is obtained by subtracting the phase information before and after deformation.
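The transform-filter-subtract pipeline at the end of this paragraph can be sketched in a few lines of NumPy. This is a generic spatial-carrier demodulation sketch, not the authors' implementation; the sideband location, mask radius, and function names are illustrative assumptions:

```python
import numpy as np

def extract_phase(speckle: np.ndarray, center: tuple, radius: int) -> np.ndarray:
    """Isolate the carrier sideband at `center` (row, col) in the shifted
    2-D spectrum, inverse-transform it, and return the wrapped phase map."""
    spec = np.fft.fftshift(np.fft.fft2(speckle))
    rows, cols = np.indices(spec.shape)
    mask = (rows - center[0]) ** 2 + (cols - center[1]) ** 2 <= radius ** 2
    sideband = np.where(mask, spec, 0.0)  # keep only the phase-carrying lobe
    return np.angle(np.fft.ifft2(np.fft.ifftshift(sideband)))

def deformation_phase(before, after, center, radius):
    """Wrapped phase difference between the loaded and unloaded states."""
    d = extract_phase(after, center, radius) - extract_phase(before, center, radius)
    return np.angle(np.exp(1j * d))  # rewrap the difference to (-pi, pi]
```

Applied to two speckle patterns sharing the same carrier, the carrier term cancels in the subtraction and only the deformation-induced phase remains.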

Results and Discussions Two specimens were analyzed in this work: a defect-free aluminum plate and a composite plate with internal defects. The Mach-Zehnder-based spatial-phase-shift double-imaging system with a large field of view uses an imaging lens with a focal length of 35 mm to analyze the two specimens. The first derivative of the out-of-plane deformation distribution is shown in Fig. 6(a), and the internal defect distribution can be seen in Fig. 6(b). The results in Fig. 6 prove that the system is suitable for surface deformation and defect detection. The same specimen was then analyzed with the traditional double-imaging system and with the large-field-of-view system using two imaging lenses with different focal lengths; Fig. 7 shows the comparative experimental results. As seen in Fig. 7, the proposed system has a larger field of view than the traditional double-imaging system, and imaging lenses with different focal lengths change the field of view. With the camera unchanged, enlarging the field of view decreases the image resolution. In actual detection, depending on the field-of-view and image-quality requirements, a camera with higher resolution and a larger target surface can be paired with a matching short-focal-length imaging lens.
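The field-of-view/resolution trade-off described here follows from simple thin-lens geometry. The sketch below assumes a pinhole imaging model; the sensor size, working distance, and pixel count are illustrative (only the 35 mm focal length comes from the paper):

```python
def field_of_view(sensor_mm: float, f_mm: float, distance_mm: float) -> float:
    """Object-side field of view for a thin-lens model:
    FOV ≈ sensor size × working distance / focal length."""
    return sensor_mm * distance_mm / f_mm

def resolution_mm_per_px(fov_mm: float, pixels: int) -> float:
    """Object-space sampling: a larger FOV on the same camera
    means coarser sampling (more mm per pixel)."""
    return fov_mm / pixels

# Halving the focal length doubles the FOV and doubles mm/pixel,
# which is why a higher-resolution camera may be needed.
```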

Conclusions This paper introduces a Mach-Zehnder-based spatial-phase-shift double-imaging system with a large field of view that can be used to detect deformation and internal defects. The spatial carrier frequency can be adjusted by changing the relative position of the two apertures placed in front of the lens, so the advantage of adjusting the spatial carrier frequency and shear amount independently is retained. The experimental results reveal that the field of view in double-imaging shearography can be enlarged by changing the focal length of the imaging lens; the field of view can therefore be matched to the actual detection task, which improves detection efficiency.

- Mar. 19, 2021
- Chinese Journal of Lasers
- Vol. 48, Issue 7, 0704001 (2021)
- DOI: 10.3788/CJL202148.0704001