• Chinese Optics Letters
  • Vol. 17, Issue 10, 101402 (2019)
Yangliang Li1, Chao Shen2,*, Li Shao1, and Yujun Zhang1,**
Author Affiliations
  • 1State Key Laboratory of Pulsed Power Laser Technology, National University of Defense Technology, Hefei 230037, China
  • 2Key Laboratory of Environmental Optics & Technology, Anhui Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Hefei 230031, China
    DOI: 10.3788/COL201917.101402
    Yangliang Li, Chao Shen, Li Shao, Yujun Zhang. Dynamic image acquisition and particle recognition of laser-induced exit surface particle ejection in fused silica[J]. Chinese Optics Letters, 2019, 17(10): 101402

    Abstract

    Particle ejection is an important process during laser-induced exit surface damage in fused silica. The huge quantity of ejected particles, their large ejection velocities, and the long ejection duration make this phenomenon difficult to observe directly. An in situ two-frame shadowgraphy system combined with a digital particle recognition algorithm was employed to capture transient ejection images and obtain the particle parameters. The experimental system is based on the principle of polarization splitting and can capture two images for each damage event. By combining multiple similar damage events at different time delays, the timeline of the ejection evolution can be obtained. Particle recognition is achieved by an adaptively regularized kernel-based fuzzy C-means algorithm based on a grey wolf optimizer. This algorithm overcomes the tendency of the adaptively regularized kernel-based fuzzy C-means algorithm to fall into local optima and can resist strong image noise, including diffraction patterns, laser speckle, and motion artifacts. The system is able to capture particles ejected after 600 ns with a time resolution of 6 ns and a spatial resolution better than 5 μm, at a particle recognition accuracy of 100%.

    Fused silica, as one of the most common optical materials, is widely used in high-power laser facilities. Studying the laser-induced damage of fused silica[1,2] helps to enhance the damage threshold[3,4] and thus the power of laser facilities. In addition, the laser-induced damage process exhibits abundant experimental phenomena and physical mechanisms[5,6], so such studies also deepen the understanding of the basic interaction between high-power laser pulses and fused silica. Particle ejection[7] is one of the important characteristics of exit surface damage in fused silica. However, it is very difficult to observe the particles directly because of their high speed, small size, and large quantity. In recent years, Raman et al. developed a dual-probe time-resolved shadowgraphic microscopy system[8,9] to realize real-time imaging of ejected particles and obtained the kinetic characteristics of particle ejection[10,11]. Because fused silica is brittle, the number of ejected particles is very large. Manual estimation is not only time-consuming and laborious, but also introduces errors through the subjective judgment of particle position and size. Therefore, it is necessary to develop an algorithm for the automatic acquisition of the dynamic characteristics. Particle recognition, that is, identifying the ejected particles in the experimental images, is the key step toward this automatic acquisition.

    In this Letter, based on the idea of pump–probe, the particle ejection phenomenon in the process of laser-induced fused silica exit surface damage is observed by a two-frame shadowgraphy system, and an adaptively regularized kernel-based fuzzy C-means (ARKFCM) particle recognition algorithm based on the grey wolf optimizer (GWO) is proposed. The particle recognition results show that this algorithm outperforms the typical algorithm in accuracy and stability.

    Pump–probe is a method for studying transient phenomena. Its main idea is to observe multiple similar events at different time points and combine them to obtain the timeline of the event evolution. Pump–probe imaging[12,13] uses short pulses of illuminating light to achieve high time resolution, reducing the requirements on the camera and the experimental cost. Its time resolution depends only on the pulse width of the illuminating laser, so it can be adjusted flexibly to meet different observation needs; however, this method cannot achieve continuous shooting.

    The two-frame shadowgraphy system[14] uses two probe beams with mutually orthogonal polarizations for illumination. The imaging end receives the two polarizations on two separate cameras, thereby realizing imaging at two different time points of the same damage event. The optical path of the two-frame shadowgraphy system based on electrical delay is shown in Fig. 1. The pump source is a Nd-doped yttrium aluminum garnet (Nd:YAG) laser (Beamtech, Dawa 300) with a wavelength of 1064 nm and a full width at half-maximum of 7.6 ns. The probe source consists of two Nd:YAG lasers with a wavelength of 532 nm and pulse widths of 6 ns (Beamtech, Dawa 100) and 8 ns (Beamtech, Nimma 600), respectively. The imaging lens is a 2× long-distance microscope objective, and the CCD camera (Daheng Inc., 1628 × 1236 pixels) has a pixel size of 4.4 μm × 4.4 μm. In the pump optical path, the pump light first passes through a half-wave plate and a polarization beam splitting prism to adjust the pump energy, and is then focused by the focal lens to cause breakdown on the exit surface of the fused silica target. In the probe optical path, the two probe beams are generated by two independent Nd:YAG lasers, and their polarization directions are made orthogonal by placing mutually perpendicular polarizers in the optical path. A polarization beam splitting prism combines the two probe beams, which are then expanded by a telescope for illumination. After passing through the objective, the probe light is separated again by the polarization beam splitting prism behind the objective and imaged on the two cameras separately, thus yielding two images per damage event. A digital delay generator (DG535) synchronizes the two probe lasers and introduces an electrical delay of 300 ns between them.
The delay between the probe and the pump is also regulated by the DG535, and the accurate delay is recorded using photodiodes (Thorlabs, DET025) and a high-speed oscilloscope (1.5 GHz, 20 GS/s). The clock signals of the pump lamp and Q-switch are provided by the DG535.


    Figure 1.Two-frame shadowgraphy experimental setup. HWP, half-wave plate; PD, photodiode; BS, beam splitter; EM, energy meter; FL, focal lens; PBS, polarized beam splitter; MO, microscope objective; filter, interference filter; TL, tube lens; P, polaroid; R, reflector; ND, neutral density attenuator.

    The two-frame shadowgraphy system captures two images in one damage event, enabling a more accurate estimation of the instantaneous velocity of the particles and thereby a better exploration of the dynamic characteristics of the particle ejection phenomenon. The instantaneous velocity of the ejected particles can be estimated using $V_{n,\mathrm{est}} = (S_{n,t_0+\Delta t} - S_{n,t_0})/\Delta t$, where $S_{n,t_0}$ and $S_{n,t_0+\Delta t}$ represent the centroid coordinates of particle $n$ before and after the delay $\Delta t$, respectively.
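As an illustration, the velocity estimate above reduces to a centroid difference divided by the frame delay. The sketch below assumes the centroids have already been matched between the two frames (the function name and units are illustrative, not part of the original system):

```python
import numpy as np

def estimate_velocities(centroids_t0, centroids_t1, dt):
    """V_n,est = (S_{n,t0+dt} - S_{n,t0}) / dt for matched centroid pairs.

    centroids_t0, centroids_t1: (N, 2) matched centroid coordinates (metres)
    in the first and second frame; dt: inter-frame delay in seconds.
    Returns an (N, 2) array of velocity vectors in m/s.
    """
    s0 = np.asarray(centroids_t0, dtype=float)
    s1 = np.asarray(centroids_t1, dtype=float)
    return (s1 - s0) / dt

# A particle displaced 30 um along x over a 300 ns frame interval: 100 m/s
v = estimate_velocities([[0.0, 0.0]], [[30e-6, 0.0]], 300e-9)
```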

    The particle ejection target areas cropped from the experimental images are shown in Fig. 2. The particle ejection process can be divided into three stages on the basis of the ejected particle characteristics. The target area images are complex and variable throughout the ejection process; this complexity and variability are caused by background noise and particle motion.


    Figure 2.Images of particle ejection target area.

    Figure 2(a) shows typical characteristics of the first stage of particle ejection. The particle size is very small, and the distribution is very dense. In general, the development of particle ejection is very dramatic: Fig. 2(a) shows that a large number of new particles appear within the time interval between the probe pulses. Figure 2(b) is a typical image pair of the second stage, in which a few large-area particles appear. Large and small particles are distributed across the image, and the particle distribution tends to disperse. Figure 2(c) is a representative image pair of the third stage: the number of large-area particles increases, and the particle distribution is even more dispersed.

    The dynamic parameters of the ejection can be obtained from the position difference of particles between two images separated by 300 ns. The relative positions of the particles do not change significantly over the 300 ns delay, so most particles can be tracked. To achieve automatic acquisition of the ejected particle dynamics, particle recognition and matching must be completed automatically, that is, the ejected particles must be identified in the experimental images and the correspondence between particles in the two images determined. The images are full of noise caused by diffraction patterns and laser speckle, and motion artifacts are another major noise source. Therefore, particle recognition is a challenging and key step toward the automatic acquisition of the dynamic characteristics.

    Image segmentation techniques can enable particle recognition. Image segmentation methods mainly include edge segmentation, threshold segmentation, and clustering segmentation. Typical representatives are the Canny edge detection operator[15], the Otsu method[16], the K-means clustering algorithm[17], and the fuzzy C-means clustering (FCM) algorithm[18].

    FCM achieves unsupervised clustering by iteratively minimizing an objective function that depends on the distance of pixels to the clustering centers in the feature domain. The objective function and constraints of the FCM algorithm are
$$J_{\mathrm{FCM}} = \sum_{i=1}^{c}\sum_{j=1}^{n} u_{ij}^{m} d_{ij}^{2}, \qquad d_{ij} = \|x_j - c_i\|, \qquad \sum_{i=1}^{c} u_{ij} = 1, \qquad u_{ij} \in [0,1],$$
where $J_{\mathrm{FCM}}$ is the objective function, $c$ is the number of clusters, $n$ is the number of data points, $u_{ij}$ is the membership degree of sample $j$ in cluster $i$, $x_j$ is the $j$th data point, $c_i$ is the center of cluster $i$, and $m$ is a scalar weighting exponent that controls the fuzziness. The disadvantage[19] of the FCM algorithm is that it requires parameter initialization and is sensitive to the initial clustering centers and to image noise.
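For concreteness, a minimal sketch of the standard FCM iteration on 1-D grey-level data is given below. The deterministic quantile initialization is an assumption made here so the demo is reproducible; plain FCM typically starts from random centers, which is exactly the initialization sensitivity discussed above:

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=50, eps=1e-10):
    """Minimal fuzzy C-means on 1-D data x (e.g. pixel grey levels).

    Alternates the membership and centre updates that minimise
    J = sum_i sum_j u_ij^m * d_ij^2, with d_ij = |x_j - c_i|.
    Returns (centres, memberships u of shape (c, n)).
    """
    x = np.asarray(x, dtype=float)
    # quantile init for the demo; random init is the usual (fragile) choice
    centres = np.quantile(x, np.linspace(0.0, 1.0, c))
    for _ in range(iters):
        d = np.abs(x[None, :] - centres[:, None]) + eps   # (c, n) distances
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=0, keepdims=True)          # membership update
        w = u ** m
        centres = (w @ x) / w.sum(axis=1)                 # centre update
    return centres, u

# Toy grey levels with two well-separated clusters near 10 and 200
data = np.array([8.0, 10.0, 12.0, 198.0, 200.0, 202.0])
centres, u = fcm(data, c=2)
```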

    Obviously, the objective function of the FCM algorithm does not include any local contextual information, so the algorithm is sensitive to image noise. To improve its noise immunity, researchers have added a term containing the grayscale and spatial information of the neighborhood to the objective function[20–22]. The ARKFCM algorithm[22] uses an adaptive parameter $\varphi_j$ to control the influence of the local neighborhood based on the heterogeneity of the local grayscale distribution. The ultimate weight of each pixel is determined by the average grayscale of its local window:
$$\varphi_j = \begin{cases} 2 + \omega_j, & \bar{x}_j < x_j \\ 2 - \omega_j, & \bar{x}_j > x_j \\ 0, & \bar{x}_j = x_j, \end{cases}$$
where $\omega_j$ are the weights within the local window. Moreover, ARKFCM replaces the standard Euclidean distance with a Gaussian radial basis kernel function. Its objective function is defined as
$$J_{\mathrm{ARKFCM}} = \sum_{i=1}^{c}\sum_{j=1}^{n} u_{ij}^{m}\,[1 - K(x_j, c_i)] + \sum_{i=1}^{c}\sum_{j=1}^{n} \varphi_j u_{ij}^{m}\,[1 - K(\bar{x}_j, c_i)],$$
in which $K$ is the kernel function, and $\bar{x}_j$ is the average, median, or weighted average of the pixels around $x_j$.
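The adaptive weighting and the kernel distance can be sketched as follows. The 3 × 3 local mean used for $\bar{x}_j$ and the externally supplied $\omega_j$ values are illustrative assumptions; Ref. [22] derives $\omega_j$ from the local grayscale heterogeneity, which is not reproduced here:

```python
import numpy as np

def gaussian_kernel(x, c, sigma=1.0):
    """Gaussian radial basis kernel K(x, c); 1 - K plays the distance role."""
    return np.exp(-((x - c) ** 2) / sigma ** 2)

def adaptive_weights(img, omega):
    """Per-pixel adaptive parameter phi_j of ARKFCM.

    img:   2-D grey image (x_j per pixel).
    omega: per-pixel weights omega_j (their construction from local
           heterogeneity, per Ref. [22], is assumed done elsewhere).
    x_bar_j is taken as the 3x3 neighbourhood mean (edge-padded).
    """
    x = img.astype(float)
    p = np.pad(x, 1, mode="edge")
    xbar = sum(p[i:i + x.shape[0], j:j + x.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    phi = np.zeros_like(x)
    phi[xbar < x] = 2.0 + omega[xbar < x]   # neighbourhood darker than pixel
    phi[xbar > x] = 2.0 - omega[xbar > x]   # neighbourhood brighter than pixel
    return phi                              # phi stays 0 where xbar == x

img = np.array([[0.0, 100.0], [100.0, 100.0]])
phi = adaptive_weights(img, omega=np.full((2, 2), 0.5))
```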

    ARKFCM overcomes FCM's sensitivity to image noise, but it remains sensitive to the initial clustering centers and easily falls into local optima. In this work, we combine the GWO[23] with ARKFCM and propose an ARKFCM algorithm based on GWO (GWO_ARKFCM) to solve this problem. The basic idea is to search for the optimal initial clustering centers using the excellent global search ability of the GWO and then exploit the local optimization ability and noise immunity of ARKFCM, so that the final clustering results converge to the global optimum with good noise resistance.

    The GWO is an optimization algorithm inspired by the social hierarchy and hunting behavior of grey wolves in the natural world. The social hierarchy of a grey wolf population consists of the α, β, δ, and ω wolves. The predation behavior of wolves is divided into three steps: tracking, encircling, and attacking the prey.

    The mathematical model of encircling prey is as follows:
$$X(t+1) = X_p(t) - A \cdot |C \cdot X_p(t) - X(t)|, \qquad A = 2a \cdot r_1 - a, \qquad C = 2 r_2,$$
where $t$ indicates the current iteration, $A$ and $C$ are coefficient vectors, $X_p$ is the position vector of the prey, $X(t)$ is the position vector of a grey wolf, $a$ decreases linearly from 2 to 0 over the course of the iterations, and $r_1$ and $r_2$ are random vectors in $[0, 1]$.

    The mathematical model of hunting can be expressed as follows:
$$X_1 = X_\alpha - A_1 \cdot |C_1 \cdot X_\alpha - X|, \qquad X_2 = X_\beta - A_2 \cdot |C_2 \cdot X_\beta - X|, \qquad X_3 = X_\delta - A_3 \cdot |C_3 \cdot X_\delta - X|,$$
$$X(t+1) = \frac{X_1(t) + X_2(t) + X_3(t)}{3},$$
where $X_\alpha$, $X_\beta$, and $X_\delta$, respectively, represent the position vectors of α, β, and δ in the current population, and $X$ represents the position vector of the grey wolf.

    The fitness function is a criterion for evaluating the quality of an individual: the larger the fitness value, the better the individual. To reduce the computational complexity, the fitness function of the GWO is set as
$$\mathrm{fitness} = \frac{1}{1 + J_{\mathrm{FCM}}} = \frac{1}{1 + \sum_{i=1}^{c}\sum_{j=1}^{n} u_{ij}^{m} d_{ij}^{2}}.$$
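Putting the encircling/hunting updates and the fitness together, a minimal GWO sketch that searches for two initial 1-D cluster centers might look as follows. The population size, iteration count, and grey-level search bounds are illustrative assumptions, not values from the original work:

```python
import numpy as np

def gwo(objective, dim, n_wolves=20, iters=100, lb=0.0, ub=255.0, seed=0):
    """Minimal grey wolf optimizer maximising `objective` over [lb, ub]^dim.

    Implements X(t+1) = X_p - A*|C*X_p - X| with A = 2a*r1 - a, C = 2*r2,
    averaging the moves towards the alpha, beta, and delta wolves.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_wolves, dim))
    for t in range(iters):
        fit = np.array([objective(x) for x in X])
        order = np.argsort(fit)[::-1]                   # best first
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 * (1.0 - t / iters)                     # a: 2 -> 0 linearly
        Xnew = np.zeros_like(X)
        for lead in (alpha, beta, delta):
            A = 2.0 * a * rng.random(X.shape) - a       # A = 2a*r1 - a
            C = 2.0 * rng.random(X.shape)               # C = 2*r2
            Xnew += lead - A * np.abs(C * lead - X)     # move towards leader
        X = np.clip(Xnew / 3.0, lb, ub)                 # average of X1, X2, X3
    fit = np.array([objective(x) for x in X])
    return X[np.argmax(fit)]

# Demo: search initial centres for toy 1-D grey-level data via the fitness
grey = np.array([8.0, 10.0, 12.0, 198.0, 200.0, 202.0])

def fitness(centres, m=2.0):
    """fitness = 1 / (1 + J_FCM) for candidate centres."""
    d = np.abs(grey[None, :] - centres[:, None]) + 1e-10
    inv = d ** (-2.0 / (m - 1.0))
    u = inv / inv.sum(axis=0, keepdims=True)
    return 1.0 / (1.0 + ((u ** m) * d ** 2).sum())

best = gwo(fitness, dim=2)
```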

    The specific steps of particle recognition are: (i) denoising the image with the block-matching and three-dimensional (3D) filtering algorithm[24] to improve image quality; (ii) using the GWO to find the optimal initial clustering center $X_\alpha$; (iii) setting $X_\alpha$ as the initial clustering center of ARKFCM and running ARKFCM to perform particle recognition. The particle recognition results are shown in Fig. 3, where the white marks are the identified ejected particles. Figure 3(a) is the original image. Figure 3(b) is the particle recognition result of ARKFCM trapped in a local optimum; the result is very poor. Figure 3(c) is the particle recognition result of GWO_ARKFCM. The results show that the GWO prevents ARKFCM from falling into local optima and greatly improves the particle recognition performance.
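A compact, self-contained sketch of this three-step pipeline is shown below. Note the stand-ins: a 3 × 3 mean filter replaces BM3D, a coarse grid search replaces the GWO, and plain FCM replaces ARKFCM, so this illustrates only the structure of the pipeline, not the authors' implementation:

```python
import numpy as np

def recognise_particles(img, fcm_iters=30):
    """Sketch of the three-step recognition pipeline (all stand-ins noted).

    (i)   denoise: a 3x3 mean filter stands in for BM3D;
    (ii)  init:    a coarse grid search over centre pairs stands in for GWO;
    (iii) segment: two-class FCM labels dark pixels as particles
                   (shadowgraph particles block the probe light).
    Returns a boolean mask of particle pixels.
    """
    # (i) crude denoising stand-in for BM3D
    p = np.pad(img.astype(float), 1, mode="edge")
    den = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(3) for j in range(3)) / 9.0
    x = den.ravel()
    # (ii) coarse global search for initial centres (GWO stand-in)
    grid = np.linspace(x.min(), x.max(), 8)
    best = min(((c1, c2) for c1 in grid for c2 in grid if c1 < c2),
               key=lambda c: (np.minimum(np.abs(x - c[0]),
                                         np.abs(x - c[1])) ** 2).sum())
    centres = np.array(best)
    # (iii) plain FCM refinement (ARKFCM's kernel/regularisation omitted)
    for _ in range(fcm_iters):
        d = np.abs(x[None, :] - centres[:, None]) + 1e-10
        inv = d ** -2.0                       # m = 2
        u = inv / inv.sum(axis=0, keepdims=True)
        w = u ** 2
        centres = (w @ x) / w.sum(axis=1)
    dark = np.argmin(centres)                 # darker cluster = particles
    return u[dark].reshape(img.shape) > 0.5

# Toy image: bright background with one dark "particle"
img = np.full((8, 8), 200.0)
img[3:5, 3:5] = 20.0
mask = recognise_particles(img)
```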


    Figure 3.Particle recognition results of 8300 ns delayed particle ejection target areas.

    Figure 4 is a visual example of the particle recognition results. Figure 4(a) is the original image; Fig. 4(b) is the edge detection result of the Canny operator; Figs. 4(c)–4(f) are the particle recognition results of the Otsu method, K-means clustering, FCM, and GWO_ARKFCM, respectively. As can be seen from Fig. 4, the noise in the original image is strong, and the recognition results of the algorithms differ greatly: (i) the Canny operator is disturbed by image noise, and its edge detection result is poor, as shown in the red box in Fig. 4(b); (ii) when the image noise is severe, the Otsu method identifies the noise as ejected particles, as shown in the blue box in Fig. 4(c); (iii) the recognition results of K-means clustering and FCM are better than those of the Canny operator and the Otsu method, but image noise is still sometimes identified as ejected particles, as shown in the green and purple boxes in Figs. 4(d) and 4(e).


    Figure 4.Visual example of particle recognition results.

    A particle recognition test was performed on 100 experimental images, and the results are shown in Fig. 5. Accuracy is defined as the ratio of correctly identified particles to the total number of identified particles. The results show that the accuracy of GWO_ARKFCM is stable at 100%, indicating excellent accuracy and stability. Figure 6 shows the performance of GWO_ARKFCM at different delays; the method maintains 100% accuracy after a 600 ns delay. However, the method is time-consuming. In addition, it classifies ambiguous particles as background; in other words, its high accuracy comes at the expense of a reduced number of detected particles.


    Figure 5.Comparison of particle recognition effects.


    Figure 6.Algorithm performance evaluation at different delays.

    A two-frame shadowgraphy system with a time interval of 300 ns was built to observe the laser-induced exit surface particle ejection phenomenon in fused silica. To address the strong noise in the experimental images caused by diffraction patterns, laser speckle, and motion artifacts, an ARKFCM algorithm based on the GWO was proposed. The algorithm has good noise immunity and achieves 100% recognition accuracy after a 600 ns delay, laying a solid foundation for the automatic acquisition of the dynamic characteristics.

    References

    [1] Z. Jia, T. Zhang, H. Zhu, Z. Li, Z. Shen, J. Lu, X. Ni. Chin. Opt. Lett., 16, 011404(2018).

    [2] Z. Cao, H. He, G. Hu, Y. Zhao, L. Yang, J. Shao. Chin. Opt. Lett., 17, 051601(2019).

    [3] T. Doualle, L. Gallais, P. Cormont, T. Donval, L. Lamaignere, J. Rullier. J. Appl. Phys., 119, 213106(2016).

    [4] W. Liu, C. Wei, K. Yi, J. Shao. Chin. Opt. Lett., 13, 041407(2015).

    [5] S. G. Demos, R. A. Negres. Proc. SPIE, 7132, 71320Q(2008).

    [6] P. DeMange, R. A. Negres, R. N. Raman, J. D. Colvin, S. G. Demos. Phys. Rev. B, 84, 054118(2011).

    [7] S. G. Demos, R. A. Negres, R. N. Raman, A. M. Rubenchik, M. D. Feit. Laser Photon. Rev., 7, 444(2013).

    [8] R. N. Raman. Opt. Eng., 50, 013602(2011).

    [9] R. N. Raman, R. A. Negres, P. Demange, S. G. Demos. Proc. SPIE, 7581, 75810D(2010).

    [10] R. N. Raman, R. A. Negres, S. G. Demos. Appl. Phys. Lett., 98, 051901(2011).

    [11] S. G. Demos, R. N. Raman, R. A. Negres. Opt. Express, 21, 4875(2013).

    [12] T. E. Matthews, I. R. Piletic, M. A. Selim, M. J. Simpson, W. S. Warren. Sci. Trans. Med., 3, 71ra15(2011).

    [13] D. Young, R. Auyeung, A. Piqué, D. Chrisey, D. D. Dlott. Appl. Phys. Lett., 78, 3169(2001).

    [14] C. Shen, X. A. Cheng, Y. Tian, Z. J. Xu, T. Jiang. Acta Phys. Sin. , 65, 155201(2016).

    [15] J. F. Canny. IEEE Trans. Pattern Anal. Mach. Intell., PAMI-8, 679(1986).

    [16] N. Otsu. IEEE Trans. Syst. Man Cybernet., 9, 62(1979).

    [17] J. MacQueen. Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, 281(1967).

    [18] J. C. Bezdek. Adv. Appl. Pattern Recog., 22, 203(1981).

    [19] Y. Feng, H. Lu, W. Xie, H. Yin, J. Bai. Wireless Personal Commun., 102, 1421(2018).

    [20] S. Chen, D. Zhang. IEEE Trans. Syst. Man Cybernet. Part B, 34, 1907(2004).

    [21] M.-S. Yang, H.-S. Tsai. Pattern Recog. Lett., 29, 1713(2008).

    [22] A. Elazab, C. Wang, F. Jia, J. Wu, G. Li, Q. Hu. Computat. Math. Methods Med., 2015, 485495(2015).

    [23] S. Mirjalili, S. M. Mirjalili, A. Lewis. Adv. Eng. Software, 69, 46(2014).

    [24] K. Dabov, A. Foi, V. Katkovnik, K. O. Egiazarian. IEEE Trans. Image Process., 16, 2080(2007).
