Chinese Optics Letters, Vol. 20, Issue 7, 071101 (2022)

In-situ laser-induced surface damage inspection based on image super-resolution and adaptive segmentation method

Fengdong Chen1,*, Jingyang Sun1, Qian Wang1, Hongbo Zhu1, Fa Zeng2,**, Yueyue Han1, Cheng Lu1, and Guodong Liu1,***
Author Affiliations
  • 1Instrument Science and Technology, Harbin Institute of Technology, Harbin 150001, China
  • 2Research Center of Laser Fusion, China Academy of Engineering Physics, Mianyang 621900, China
    DOI: 10.3788/COL202220.071101

    Abstract

    In-situ laser-induced surface damage inspection plays a key role in protecting the large aperture optics in an inertial confinement fusion (ICF) high-power laser facility. To improve initial damage detection capability, an in-situ inspection method based on image super-resolution and adaptive segmentation is presented. Through transfer learning and the integration of multiple attention mechanisms, super-resolution reconstruction of darkfield images with little texture information is effectively realized. On the basis of the image super-resolution, an adaptive image segmentation method is designed that adapts well to damage detection under uneven illumination and weak-signal conditions. An online experiment was carried out using edge illumination and a telescope optical imaging system, and the experimental results prove the validity of the method.

    1. Introduction

    A promising and controllable route to inertial confinement fusion (ICF) is to focus high-power lasers to compress and heat a fuel capsule (target) positioned at the center of a vacuum target chamber to achieve fusion ignition. In-situ inspection of laser-induced damage (LID) on optics plays an important role in these high-power laser systems[1], because most large aperture optics within the beamline are exposed to the high-energy laser pulse. In particular, the optics in the final integrated optics module (IOM) are at higher risk of damage. The key advantage of in-situ inspection is that it can detect and track damage sites online[2] and raise an alarm when an LID site grows to the point at which the optic should be removed, to avoid destroying it or damaging downstream optics[3].

    We developed an in-situ final optics damage inspection (FODI) system[4], an optical telescope designed to be inserted into the center of the target chamber after a laser shot (Fig. 1). From this position, it can point to each beamline and acquire images of the IOM optics. These images are analyzed to detect LIDs.

    Figure 1.(a) Sketch-map of the FODI. (b) An optic in IOM and an example of LID inside (196 µm). (c) The FODI image of the optics and the LID corresponding to (b).

    The distance between the IOM and the FODI camera is in the range of 3.7–5.1 m. The size of the large aperture optics is 430 mm × 430 mm. The resolution of the camera is approximately 124 µm at a working distance of 3.7 m and 136 µm at a working distance of 5.1 m. The CCD image format is 4872 × 3248 pixels with 12 bits, and the pixel size is 7.4 µm × 7.4 µm.

    The online image is required to support detection of LIDs larger than 150 µm with a mean relative error (MRE) of less than 15%. Given the camera resolution above, directly detecting an LID with a diameter of less than 100 µm on the surface of the optic is difficult.

    A proper illumination of the optics to be imaged is critical for the detection of small LIDs. Edge illumination is employed here for each optic: light is injected at the proper angle into the edge of the optic and subsequently becomes trapped through total internal reflection (TIR) (Fig. 2). An LID on the optic surface disrupts the TIR and causes light to scatter from the optic at the flaw location. A fraction of this scattered light is collected by the FODI imaging system, so the flaw appears bright against a dark background (darkfield imaging). Edge illumination lights only the optic to be imaged and provides excellent local signal-to-noise ratio performance. A 150 µm flaw is readily apparent in the darkfield image.
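    As a quick illustration (not stated in the paper), assuming a fused-silica optic with refractive index $n \approx 1.45$ at the 808 nm illumination wavelength given below, Snell's law gives the critical angle for TIR at the glass–air interface:

    $$\theta_c = \arcsin\!\left(\frac{1}{n}\right) \approx \arcsin\!\left(\frac{1}{1.45}\right) \approx 43.6^\circ,$$

    so injected light that meets the optic faces at angles beyond $\theta_c$ from the surface normal remains trapped until a flaw scatters it out.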

    Figure 2.Sketch-map of the online optics damage inspection using TIR illumination and remote imaging. Edge illumination lights only the optic to be imaged.

    An edge illumination system was developed specifically for FODI. Two semiconductor laser light source modules were developed and inlaid into the mounts of the optics within the IOM (Fig. 3).

    Figure 3.Semiconductor laser light source (wavelength is 808 nm).

    The size of an initiated LID may be less than 50 µm. The edge illumination is uneven, the damage signal in dark regions may be weak, and the imaging resolution is coarser than the LID size. For these tiny LIDs, whether they have grown is the focus of online detection, but such flaws are too small to detect or segment accurately (Fig. 4).

    Figure 4.Edge illumination is uneven. An initiated LID is difficult to detect or accurately segment to judge whether it has grown. (a) Online image. (b) Uneven lighting simulation.

    To address these problems, we propose a method (Fig. 5) that effectively improves initial LID detection through super-resolution (SR) reconstruction and adaptive segmentation.

    Figure 5.Sketch-map of the SR and adaptive segmentation method.

    2. Theoretical Analysis

    2.1. Image super-resolution by multi-attention fusion

    The FODI images are darkfield images with little texture information. Most areas in the image are smooth and black, and there are few features available for learning image SR. Therefore, attention mechanisms need to be introduced to focus the deep neural network on the more informative areas. A single attention mechanism attends to only one image level and cannot meet the SR requirements, so we propose an improved image SR method that integrates five attention modules, covering layer, channel–spatial, flat-area, edge, and corner-point attention[5,6], to achieve SR (Fig. 5).

    The essence of image SR is to learn, from the features of LR–HR image pairs, the regression mapping function from low-resolution (LR) images to high-resolution (HR) images; this can achieve an image resolution exceeding the optical resolution determined by the Rayleigh criterion.

    SR methods have made great strides thanks to deep learning. SRCNN[7], in 2014, was the first algorithm to apply convolutional neural networks (CNNs) to image SR. To address the difficulty of training deep networks, ResNet[8] was proposed in 2016. In the same year, subpixel convolutional layers were introduced to overcome the shortcomings of interpolation and to learn end-to-end upsampling. To avoid overfitting, the deeply recursive convolutional network (DRCN)[9], built from recursive convolutional layers, was proposed. In 2017, to take full advantage of features at all levels of the network, the SRDenseNet model[10] introduced DenseNet to single-frame image super-resolution reconstruction: within a dense block, each layer's features are fed into all subsequent layers and concatenated, instead of being added directly as in ResNet. In 2018, Zhang et al.[11] argued that the large amount of low-frequency information in images hinders the representational ability of CNNs and proposed the deep residual channel attention network (RCAN). In 2019, Zhang et al.[12] argued that constructing training samples by downsampling HR images loses detail and accuracy available in the original data, and instead trained a deep network on real optical zoom images; using a contextual bilateral loss, they achieved state-of-the-art performance for 4× and 8× computational zoom.

    In 2020, progress was made in channel attention, with typical approaches including component divide-and-conquer (CDC)[5] and the holistic attention network (HAN)[6]. The former builds three component-attentive blocks (CABs) associated with the flat areas, edges, and corners of the image, each learning the mapping from LR to HR through an intermediate supervision (IS) strategy. The latter adaptively emphasizes hierarchical features by considering the correlations between layers; it captures more informative features by learning confidence levels for all positions in each channel and generates attention feature maps by jointly capturing channel and spatial features.

    In 2021, Chen et al. proposed the local implicit image function (LIIF)[13] to continuously represent the local information of complex images. It can achieve arbitrary-scale resolution improvement up to 30× magnification with excellent reconstruction results, and it represents the state of the art in continuous upsampling of high-quality magnified images. However, it does not specifically target darkfield images.

    Based on the above analysis, this paper integrates the CDC and HAN methods, making full use of the information in the boundary areas of damage points in the darkfield image for image SR.

    We conduct transfer learning and integrate the SR images obtained by CDC (CDC_SR) and HAN (HAN_SR) with adaptive weights, using the structural similarity (SSIM) for evaluation, to improve the quality of the integrated SR image. The integration method is

    $$Y = \alpha \cdot \mathrm{HAN\_SR} + \beta \cdot \mathrm{CDC\_SR}, \tag{1}$$

    where α and β are the integration factors.

    The final HR image Y can be obtained by minimizing Eq. (2) via the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm[14]:

    $$\hat{Y} = \operatorname*{arg\,min}_{Y}\big(Y \mid \{\mathrm{HAN\_SR},\, \mathrm{CDC\_SR}\},\, \mathrm{SSIM}\big). \tag{2}$$

    SSIM is defined as

    $$\mathrm{SSIM}(I,\hat{I}) = \frac{(2\mu_I \mu_{\hat{I}} + C_1)(2\sigma_{I\hat{I}} + C_2)}{(\mu_I^2 + \mu_{\hat{I}}^2 + C_1)(\sigma_I^2 + \sigma_{\hat{I}}^2 + C_2)}, \tag{3}$$

    where $I$ is the input original image, $\hat{I}$ is the output HR image $Y$, $\mu_I$ is the mean value of image $I$, $\sigma_I^2$ is the variance of image $I$, $\sigma_{I\hat{I}}$ is the covariance of the integrated SR image and the ground truth image, $C_1 = (K_1 L)^2$ and $C_2 = (K_2 L)^2$, $L$ is the dynamic range of the pixel values of image $I$, $K_1 = 0.01$, and $K_2 = 0.03$.
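    To make the integration concrete, the following is a minimal Python sketch of Eqs. (1)–(3), assuming a reference HR image is available (e.g., from the down-sampled training pairs); the names fuse_sr and neg_ssim are our own, and the paper does not spell out the exact optimization setup:

```python
import numpy as np
from scipy.optimize import minimize
from skimage.metrics import structural_similarity


def fuse_sr(han_sr: np.ndarray, cdc_sr: np.ndarray, reference: np.ndarray):
    """Find integration factors (alpha, beta) for Y = alpha*HAN_SR + beta*CDC_SR
    by maximizing SSIM against a reference image via L-BFGS-B (Eqs. (1)-(3))."""

    def neg_ssim(w):
        y = w[0] * han_sr + w[1] * cdc_sr
        # Negate SSIM so that minimizing corresponds to maximizing similarity.
        return -structural_similarity(
            reference, y, data_range=float(reference.max() - reference.min()))

    res = minimize(neg_ssim, x0=[0.5, 0.5], method="L-BFGS-B",
                   bounds=[(0.0, 1.0), (0.0, 1.0)])
    alpha, beta = res.x
    return alpha * han_sr + beta * cdc_sr, (alpha, beta)
```

    Here L-BFGS-B estimates the gradient numerically, which is adequate for a two-parameter search.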

    2.2. Adaptive image segmentation method

    The uneven illumination and the variety of damage types (Fig. 6) make the accurate segmentation of LID a challenging job.

    Figure 6.Examples showing only backlit illumination of the 12 classes in the “Damage Morphology” dataset[15].

    The local area signal-to-noise ratio (LASNR) method[16] is a meaningful approach that has no theoretical limitations in object size, but some limitations can be imposed by the selection of algorithm parameters for size and signal-to-noise cutoff values.

    Other learnable methods can be found in the challenging document image binarization studies. According to how the thresholding values are computed for image binarization, these methods can be divided into histogram-based methods[17], feature-based methods[18], and learning-based methods[19].

    In order to avoid relying heavily on the training dataset, we designed an adaptive image segmentation algorithm based on the image SR.

    For segmentation of the integrated SR image, a local threshold $T(r,c)$ is calculated for the pixel at position $(r,c)$ within a window of size mask_size × mask_size:

    $$T(r,c) = u(r,c)\left\{1 + k\left[\frac{\sigma(r,c)}{R} - 1\right]\right\}, \tag{4}$$

    where $u(r,c)$ is the local mean value within the window, and $\sigma(r,c)$ denotes the corresponding standard deviation. The parameter $R$ is the assumed maximum value of the standard deviation, and $k$ is a parameter that controls how much the threshold value $T(r,c)$ differs from the mean value. If there is high contrast in the neighborhood of a point $(r,c)$, the standard deviation $\sigma(r,c)$ has a value close to $R$, which yields a threshold value $T(r,c)$ close to the local mean. If the contrast is low, the local threshold falls below the local mean value. Every pixel $p(r,c)$ whose gray value is greater than the calculated local threshold $T(r,c)$ is selected.
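    A minimal NumPy/SciPy sketch of this local thresholding is given below; the parameter values are illustrative only, and local_threshold_segment is our own name:

```python
import numpy as np
from scipy.ndimage import uniform_filter


def local_threshold_segment(img: np.ndarray, mask_size: int = 31,
                            k: float = 0.3, R: float = 128.0) -> np.ndarray:
    """Select bright pixels above the local threshold of Eq. (4):
    T(r,c) = u(r,c) * {1 + k * [sigma(r,c)/R - 1]}."""
    img = img.astype(np.float64)
    u = uniform_filter(img, size=mask_size)          # local mean u(r,c)
    u2 = uniform_filter(img * img, size=mask_size)   # local mean of squares
    sigma = np.sqrt(np.maximum(u2 - u * u, 0.0))     # local std sigma(r,c)
    T = u * (1.0 + k * (sigma / R - 1.0))
    return img > T                                   # boolean LID mask
```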

    The parameter mask_size specifies the size of the neighborhood in which the local threshold is calculated. The smaller the window, the thinner the segmented flaw. mask_size must be set larger than the LID size in the original image and must be multiplied by the super-resolution factor n when segmenting the SR images.

    The parameter k controls how much the threshold differs from the local mean value. A smaller k segments structures with lower contrast against their background, while a larger k suppresses clutter. Varying k thus determines the area assigned to an LID.

    As k decreases (k1 > k2 > k3), the segmented seed area increases. When an LID is superimposed on a background feature or noise with elevated intensity, reducing k too far includes all connected pixels above the lowered threshold and falsely labels the background feature as part of the LID, as shown in Fig. 7.

    Figure 7.Cross sections of an isolated peak and of a peak superimposed on a background feature, both shown with cross hatching to indicate the total area included when a fixed k is used to define the extent of an LID. For a peak that is not over a background feature, the area of the LID increases smoothly as k is decreased. However, for a peak that overlays a background feature, the area increases sharply when k drops below the level of the background feature.

    We therefore decrease k stepwise and monitor the segmented area: the k at which the rate of change of the area reaches its maximum is used for the local segmentation.
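    Our reading of this rule as code is sketched below, reusing local_threshold_segment from the previous sketch; the sweep range of k is an assumption:

```python
import numpy as np


def select_k(img: np.ndarray, mask_size: int = 31, R: float = 128.0,
             ks=np.linspace(0.5, 0.05, 10)) -> float:  # k swept large -> small
    """Pick the k just before the sharpest jump in segmented area, i.e. where
    the rate of change of the area with respect to the k sweep peaks."""
    areas = np.array([local_threshold_segment(img, mask_size, k, R).sum()
                      for k in ks], dtype=np.float64)
    jumps = np.diff(areas)         # area growth between consecutive k steps
    return float(ks[int(np.argmax(jumps))])
```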

    3. Experimental Results and Discussion

    3.1. Experiment

    Figure 8 is the diagram of the FODI online experiment configuration, and Fig. 9 is the actual optical system of IOM and the optical imaging lens.

    Figure 8.Diagram of the FODI online experiment device.

    Figure 9.Optical system of the IOM and the optical imaging lens.

    The IOM contains nine large aperture optics, as shown in Fig. 9, from left to right: vacuum closed window, fundamental-frequency continuous phase plate (CPP), frequency-doubling crystal, mixing crystal 1, mixing crystal 2, wedge lens (convex optic), big-aperture sampling grating (BSG), triple-frequency CPP, and vacuum isolation sheet.

    Since the IOM contains a wedge-shaped lens, the optical design of the imaging lens is optimized jointly with the IOM optics. The main objective group is fixed, and its distance from the IOM remains constant. The focusing objective group is a zoom group, which changes the focal length by changing its distance from the main objective. By moving the position of the CCD, the image is kept in focus, and the magnification of the imaging system remains essentially constant.

    The optical resolution (derived from the modulation transfer function) and pixel resolution (pixel equivalent) are shown in Table 1.

    Distance (m)    Optical Element     Optical Resolution (µm)    Pixel Resolution (µm/pixel)
    5.1             Shielding sheet     136.17                     137.04
    3.7             Vacuum window       124.63                     125.42

    Table 1. Optical Resolution and Pixel Resolution of the Optical Lens at Different Locations

    We captured 200 online images to build a dataset and carried out transfer learning for the HAN and CDC SR methods using LR–HR pairs generated by downsampling the online images.

    We randomly selected 12 images from the 100 test images to verify the effectiveness of the proposed method.

    3.2. Discussion

    The HAN, CDC, and integrated SR images are obtained as shown in Fig. 10.

    Figure 10.SR results. (a) Online original FODI image of wedge lens. (b) An example window of original resolution. (c) The 2× resolution of the HAN_SR (SSIM = 0.92). (d) The 4× resolution of the HAN_SR (SSIM = 0.91). (e) The 2× resolution of the CDC_SR (SSIM = 0.92). (f) The 4× resolution of the CDC_SR (SSIM = 0.91). (g) The 2× resolution of the integrated result (SSIM = 0.95). (h) The 4× resolution of the integrated result (SSIM = 0.94).

    In this SR experiment, the SSIM of the integrated SR image improved by an average of 3%, which shows that the proposed SR integration method is effective.

    Figure 11 shows an example of one LID in an integrated SR image whose size is less than 100 µm.

    Figure 11.Example result of SR image of one LID. (a) Online original FODI image of wedge lens. (b) An example of original resolution of an LID inside (94.0 µm). (c) The 2× integrated resolution of the LID. (d) The 4× integrated resolution of the LID.

    Adaptive LID segmentation is performed using the integrated resolution images to detect LIDs. The experimental results are shown in Fig. 12.

    Figure 12.Adaptive LID segmentation results. (a) Online original image of wedge lens segmentation result. (b) The example window of the original resolution segmentation result (46 LID sites found). (c) The 2× integrated resolution image segmentation result (75 LID sites found). (d) The 4× integrated resolution image segmentation result (90 LID sites found).

    In this image segmentation experiment, the number of detected LIDs increased by about 60% and 90% in the 2× and 4× integrated resolution images, respectively (46 → 75 and 46 → 90 sites), which shows that the proposed adaptive segmentation based on integrated image SR improves weak-LID detection capability. Some false damage points may remain; for further identification methods, see Ref. [4].

    The detection rate results on the 12 randomly selected images show that our method detects, on average, 93.4%, 96.5%, and 97.7% of surface LIDs with diameters greater than 50 µm in the original, 2×, and 4× integrated resolution images, respectively (Fig. 13). The corresponding standard deviations are 0.93, 1.09, and 0.98.

    Figure 13.Quartile statistical chart of the detection rate of LIDs (>50 µm) in the 12 images at the original, 2×, and 4× integrated resolutions.

    In addition, SR subdivides the boundary locations, which helps improve the accuracy of boundary positioning.

    4. Conclusion

    An in-situ LID inspection method based on laser TIR darkfield imaging, integrated image SR, and adaptive segmentation was presented. The method can detect LIDs occupying few pixels under uneven lighting conditions and improves initial damage detection capability; its validity was proved by the experimental results.

    References

    [1] D. F. P. Pile. Redlining lasers for nuclear fusion. Nat. Photon., 15, 863(2021).

    [2] A. Conder, J. Chang, L. Kegelmeyer, M. Spaeth, P. Whitman. Final optics damage inspection (FODI) for the National Ignition Facility. Proc. SPIE, 7797, 77970P(2010).

    [3] M. C. Nostrand, C. W. Carr, Z. M. Liao, J. Honig, M. L. Spaeth, K. R. Manes, M. A. Johnson, J. J. Adams, D. A. Cross, R. A. Negres, C. C. Widmayer, W. H. Williams, M. J. Matthews, K. S. Jancaitis, L. M. Kegelmeyer. Tools for Predicting Optical Damage on Inertial Confinement Fusion-Class Laser Systems(2011).

    [4] F. Wei, F. Chen, B. Liu, Z. Peng, J. Tang, Q. Zhu, D. Hu, Y. Xiang, N. Liu, Z. Sun, G. Liu. Automatic classification of true and false laser-induced damage in large aperture optics. Opt. Eng., 57, 053112(2018).

    [5] P. Wei, Z. Xie, H. Lu, Z. Zhan, Q. Ye, W. Zuo, L. Lin. Component divide-and-conquer for real-world image super-resolution(2020).

    [6] B. Niu, W. Wen, W. Ren, X. Zhang, L. Yang, S. Wang, K. Zhang, X. Cao, H. Shen. Single image super-resolution via a holistic attention network(2020).

    [7] C. Dong, C. C. Loy, K. He, X. Tang. Learning a deep convolutional network for image super-resolution. European Conference on Computer Vision, 184(2014).

    [8] K. He, X. Zhang, S. Ren, J. Sun. Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770(2016).

    [9] J. Kim, J. K. Lee, K. M. Lee. Deeply-recursive convolutional network for image super-resolution(2016).

    [10] T. Tong, G. Li, X. Liu, Q. Gao. Image super-resolution using dense skip connections. IEEE International Conference on Computer Vision (ICCV), 4809(2017).

    [11] Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, Y. Fu. Image super-resolution using very deep residual channel attention networks. European Conference on Computer Vision, 294(2018).

    [12] X. Zhang, Q. Chen, R. Ng, V. Koltun. Zoom to learn, learn to zoom. IEEE Conference on Computer Vision and Pattern Recognition, 3762(2019).

    [13] Y. Chen, S. Liu, X. Wang. Learning continuous image representation with local implicit image function. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 8628(2021).

    [14] C. Zhu, R. H. Byrd, P. Lu, J. Nocedal. Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization. ACM Trans. Math. Softw., 23, 550(1997).

    [15] C. Amorin, L. M. Kegelmeyer, W. P. Kegelmeyer. A hybrid deep learning architecture for classification of microscopic damage on National Ignition Facility laser optics. Stat. Anal. Data Min., 12, 505(2019).

    [16] L. Kegelmeyer, P. Fong, S. Glenn, J. Liebman. Local area signal-to-noise ratio (LASNR) algorithm for image segmentation. Proc. SPIE, 6696, 66962H(2007).

    [17] P. Stathis, E. Kavallieratou, N. Papamarkos. An evaluation technique for binarization algorithms. J. Univers. Comput. Sci., 14, 3011(2008).

    [18] I. K. Kim, D.-W. Jung, R.-H. Park. Document image binarization based on topographic analysis using a water flow model. Pattern Recognit., 35, 265(2002).

    [19] Y. Wu, P. Natarajan, S. Rawls, W. AbdAlmageed. Learning document image binarization from data. IEEE International Conference on Image Processing (ICIP), 3763(2016).
