Abstract
1 Introduction
Defects on the surface of optics are among the earliest indications of degradation and are critical to the maintenance of optical systems. Early detection allows preventive measures to be taken before the defects grow to an unrepairable size. Large laser facilities, such as the National Ignition Facility (NIF)[
Various image processing techniques, such as the threshold method, Otsu’s method and Fourier transform[
Machine-learning-based models outperform these image processing techniques in accuracy and robustness, and have been successfully applied to computer vision tasks such as object detection and classification. LLNL extracted various features from each damage site and employed an ensemble of decision trees to separate false damage sites arising from hardware reflections[
In recent years, fully convolutional networks with a U-shaped architecture (U-Net) have been widely adopted for their precise segmentation and efficient use of available samples. Models based on U-Net are commonly used in image analysis tasks in medical diagnosis, biological science and cosmology[
The paper is outlined as follows. First, we introduce the structure of the detection model based on U-Net. Then, we explain in detail the methodology used in building the model, including the overall architecture, the preparation of the training set, and the specifics of its implementation and training procedure. Finally, we show the robustness and adaptability of the model for online detection on the laser facility using novel optical images never seen by the network.
2 U-Net for defect detection
Convolutional neural networks serve as the backbone for image segmentation owing to their high representational power and weight-sharing properties. The U-Net architecture is built upon fully convolutional networks. It consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. The schematic representation of our model used for defect detection is shown in Figure
The network consists of four major operations: convolution, up-convolution, max pooling and feature forwarding, as shown by the arrows in Figure
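These four operations can be illustrated with a minimal, framework-free NumPy sketch (single channel, no learned weights; purely for intuition, not the trained model itself):

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution with a single kernel: the basic U-Net operation."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, s=2):
    """s x s max pooling: downsamples features along the contracting path."""
    H, W = x.shape
    return x[:H - H % s, :W - W % s].reshape(H // s, s, W // s, s).max(axis=(1, 3))

def up_conv(x, s=2):
    """Nearest-neighbour upsampling, a stand-in for the learned up-convolution."""
    return x.repeat(s, axis=0).repeat(s, axis=1)

def forward_features(enc, dec):
    """Feature forwarding (skip connection): stack encoder and decoder maps."""
    return np.stack([enc, dec], axis=0)

x = np.arange(16.0).reshape(4, 4)
p = max_pool(x)                 # contracting path: 4x4 -> 2x2
u = up_conv(p)                  # expanding path: 2x2 -> 4x4
skip = forward_features(x, u)   # concatenated along a new channel axis
```

In the real network each operation carries learned filter weights and many channels; this sketch only shows how the spatial resolutions match up so that skip connections can concatenate encoder and decoder features.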
3 Methodology
3.1 Overall architecture
Figure
3.2 Training set preparation
To train the U-Net to detect defects in online images, we first prepared a dataset of training samples consisting of pairs of regions from online images and corresponding masks created from the offline images. The online images of the optics were acquired between laser shots using the camera system placed at the center of the target chamber. Defects on the optics scatter light into the CCD, yielding bright signals against a dark background. However, a potential defect site in an online image can fall into one of several categories: a real laser-induced defect, a hardware reflection, a light spot, a reflection from the exit surface or damaged CCD pixels. Figure
Most of the optics on site are under daily maintenance, so it would take a long time to accumulate a sufficient number of defects for training the network. We therefore selected two badly damaged optics that, after high exposure to the laser, contained several hundred real laser-induced defects. After the online images were taken, the optics were disassembled from the frame and passed through a cleaning system. The offline images were then collected by scanning the cleaned optic in a non-disturbing light environment. The offline images contained only laser-induced defects, without reflections, light spots or other on-site noise; hence, they could be used as masks of real laser-induced defects for the online images. Figure
To determine the mapping between the online and offline images, a frame of reference was established by applying fiducials consisting of groups of small dots at the four corners of each optic. The circle Hough transform (CHT)[
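Once the fiducial centres have been located (e.g., by the CHT), the offline-to-online mapping can be estimated by a least-squares fit. The sketch below assumes an affine transform model and uses hypothetical fiducial coordinates:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping offline fiducial centres (src)
    onto online fiducial centres (dst). src, dst: (N, 2) arrays, N >= 3."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates (N, 3)
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (3, 2) transform matrix
    return M

def apply_affine(M, pts):
    """Map points through the fitted transform."""
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M

# Hypothetical fiducial centres at the four corners of an optic;
# here the online view is a pure translation of the offline view.
offline = [[0, 0], [100, 0], [0, 100], [100, 100]]
online = [[10, 20], [110, 20], [10, 120], [110, 120]]
M = fit_affine(offline, online)
```

With four corner fiducials the fit is overdetermined, so small localization errors in individual fiducials are averaged out by the least-squares solution.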
The resolution of the online and transformed offline images was around
The LASNR algorithm was applied to mark the position of defects on the offline images and find the full extent of each defect. All the marked sites on the offline image could be considered real defects; hence, a 0–1 mask was generated, with 1 for a real-defect pixel and 0 for background. Figure
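A simplified stand-in for this masking step (LASNR itself is not reproduced here) might build the 0–1 mask from a list of marked sites, each described by a hypothetical centre and radius:

```python
import numpy as np

def make_mask(shape, sites):
    """Build a binary 0-1 mask from defect sites.
    Each site is (row, col, radius); pixels within the radius are set to 1."""
    mask = np.zeros(shape, dtype=np.uint8)
    rr, cc = np.mgrid[0:shape[0], 0:shape[1]]
    for r, c, rad in sites:
        # mark every pixel inside the circular extent of this defect
        mask[(rr - r) ** 2 + (cc - c) ** 2 <= rad ** 2] = 1
    return mask

# Two illustrative sites on an 8x8 region: one small defect and one single pixel
mask = make_mask((8, 8), [(2, 2, 1), (6, 6, 0)])
```

In practice LASNR returns the full (generally non-circular) extent of each site, so the real masks follow the defect shapes rather than circles.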
3.3 Implementation and training
Our implementation was realized in Python 3.6, using the Keras[
The total number of paired images for training the network was 550 (with
For our task, the available training sample was quite small, so data augmentation was essential to teach the network the desired invariances. Morphological transformations such as rotation, shifts in width and height, horizontal and vertical flips, and variations in gray value were applied to the images and their masks simultaneously. The data samples and augmentation parameters were wrapped in a data generator, which produced batches of tensor image data for each training epoch.
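The key requirement is that image and mask receive identical geometric transforms, while intensity jitter is applied to the image only. A minimal NumPy sketch of such a paired generator (parameter ranges are illustrative, not the paper's exact settings):

```python
import numpy as np

def augment_pair(image, mask, rng):
    """Apply the same random flips/rotations to an image and its mask,
    plus a gray-value jitter on the image only."""
    k = rng.integers(0, 4)                      # random multiple of 90 degrees
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:                      # horizontal flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    if rng.random() < 0.5:                      # vertical flip
        image, mask = image[::-1, :], mask[::-1, :]
    image = np.clip(image * rng.uniform(0.9, 1.1), 0.0, 1.0)  # intensity jitter
    return image, mask

def generator(images, masks, batch, rng):
    """Yield augmented batches, mirroring a Keras-style data generator."""
    while True:
        idx = rng.integers(0, len(images), size=batch)
        pairs = [augment_pair(images[i], masks[i], rng) for i in idx]
        yield (np.stack([p[0] for p in pairs]),
               np.stack([p[1] for p in pairs]))

rng = np.random.default_rng(0)
imgs = [np.random.rand(16, 16) for _ in range(4)]
msks = [(im > 0.5).astype(np.uint8) for im in imgs]
xb, yb = next(generator(imgs, msks, batch=2, rng=rng))
```

In Keras the same effect is usually obtained by running two `ImageDataGenerator` streams (one for images, one for masks) with an identical random seed.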
The intensity distribution of the images was highly imbalanced, as shown in Figure
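A common remedy for such foreground/background imbalance in segmentation is the soft Dice loss of Ref. [23], which normalizes the overlap by the total foreground mass instead of counting pixels uniformly. A minimal sketch (our illustration; the paper's exact loss formulation may differ):

```python
import numpy as np

def dice_loss(y_true, y_pred, eps=1e-6):
    """Soft Dice loss: 1 - 2|A∩B| / (|A| + |B|), robust to class imbalance
    because the few foreground pixels dominate both numerator and denominator."""
    inter = np.sum(y_true * y_pred)
    denom = np.sum(y_true) + np.sum(y_pred)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

y = np.array([[0, 1], [1, 0]], dtype=float)
perfect = dice_loss(y, y)        # perfect overlap -> loss near 0
worst = dice_loss(y, 1.0 - y)    # no overlap -> loss near 1
```

Because only overlapping foreground contributes, a network cannot minimize this loss by simply predicting the dominant background class, unlike with an unweighted pixel-wise cross-entropy.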
We trained the network using the Adam optimizer. Adam is an extension of stochastic gradient descent that adapts the update of each network weight using running estimates of the first and second moments of its gradient. The initial learning rate was
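The update rule, shown here as a single-parameter NumPy sketch on a toy quadratic objective (hyperparameter values are Adam's usual defaults, not necessarily the paper's):

```python
import numpy as np

def adam_step(w, grad, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: SGD extended with bias-corrected running estimates
    of the gradient's first moment (m) and second moment (v)."""
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * grad
    state["v"] = b2 * state["v"] + (1 - b2) * grad ** 2
    m_hat = state["m"] / (1 - b1 ** state["t"])   # bias correction
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return w - lr * m_hat / (np.sqrt(v_hat) + eps)

# Minimize f(w) = w^2 from w = 1 (illustrative, not the paper's setting)
w = np.array(1.0)
state = {"t": 0, "m": 0.0, "v": 0.0}
for _ in range(2000):
    w = adam_step(w, 2 * w, state, lr=0.01)
```

The per-weight scaling by `sqrt(v_hat)` is what makes Adam less sensitive to the choice of a global learning rate than plain SGD.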
4 Results
To test the robustness and adaptability of the model for online detection, we took images of optics from different beamlines and prepared the testing set following the same method used to produce the training set. The online images were cropped into small regions of
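The cropping step, and the complementary stitching of per-region predictions back into a full-frame map, can be sketched as follows (the non-overlapping tile layout and the assumption that image dimensions are multiples of the tile size are ours, for simplicity):

```python
import numpy as np

def crop_tiles(image, size):
    """Crop a large online image into non-overlapping size x size regions."""
    H, W = image.shape
    tiles = [image[i:i + size, j:j + size]
             for i in range(0, H, size) for j in range(0, W, size)]
    return tiles, (H, W)

def stitch(tiles, shape, size):
    """Reassemble per-tile predictions into the full-frame map."""
    H, W = shape
    out = np.zeros(shape, dtype=tiles[0].dtype)
    k = 0
    for i in range(0, H, size):
        for j in range(0, W, size):
            out[i:i + size, j:j + size] = tiles[k]
            k += 1
    return out

img = np.arange(64.0).reshape(8, 8)
tiles, shape = crop_tiles(img, 4)       # four 4x4 regions
rebuilt = stitch(tiles, shape, 4)       # round-trips to the original frame
```

In practice overlapping tiles with a border margin are often preferred, so that defects straddling a tile boundary are not split between two predictions.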
To further characterize the performance of the trained U-Net model, we calculated the precision
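Pixel-wise precision and recall over binary masks can be computed as below (a sketch; the paper may evaluate per defect site rather than per pixel):

```python
import numpy as np

def precision_recall(pred, truth):
    """Pixel-wise precision and recall for binary defect masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)      # defect pixels correctly detected
    fp = np.sum(pred & ~truth)     # background flagged as defect
    fn = np.sum(~pred & truth)     # defect pixels missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

truth = np.array([[1, 1], [0, 0]])
pred = np.array([[1, 0], [1, 0]])
p, r = precision_recall(pred, truth)   # tp = 1, fp = 1, fn = 1
```

Precision penalizes false alarms (e.g., reflections misread as defects), while recall penalizes missed defects; both matter for online maintenance decisions.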
5 Conclusion
In this paper, a vision-based approach for detecting optical defects has been proposed based on image segmentation. The proposed deep learning system can accurately locate laser-induced defects on the optics in real time. Unlike typical classification models, whose output for an image is a single label, the U-Net model assigns a class label to each pixel. Moreover, the detection model can be trained end to end on small samples without the need for manual labeling or manual feature extraction. The proposed method is especially strong at detecting defects when each sample may contain multiple adjacent objects. In our case, the model removes fake defects caused by reflections by learning relative spatial and intensity information, a task in which typical classification models achieved only limited success in our previous studies. The proposed approach may find wide application in the online detection and maintenance of large laser facilities where large numbers of labeled samples are not available.
Nevertheless, we encountered some limitations of the current method. First, it is assumed that the inspected object does not have complicated structures; hence, the offline images can serve as masks of real defects. Second, the network's predictive ability relies on the quality of the imaging system; in our study, detailed information about the defects was lost due to exposure. Third, the method did not exploit the successive online images taken each week to discriminate tiny defects from the background. Tracking and predicting the growth of each defect across successive online images will be an important topic for future research.
References
[1] M. L. Spaeth, K. R. Manes, D. H. Kalantar, P. E. Miller, J. E. Heebner, E. S. Bliss, D. R. Speck, T. G. Parham, P. K. Whitman, P. J. Wegner, P. A. Baisden, J. A. Menapace, M. W. Bowers, S. J. Cohen, T. I. Suratwala, J. M. Di Nicola, M. A. Newton, J. J. Adams, J. B. Trenholme, R. G. Finucane, R. E. Bonanno, D. C. Rardin, P. A. Arnold, S. N. Dixit, G. V. Erbert, A. C. Erlandson, J. E. Fair, E. Feigenbaum, W. H. Gourdin, R. A. Hawley, J. Honig, R. K. House, K. S. Jancaitis, K. N. LaFortune, D. W. Larson, B. J. Le Galloudec, J. D. Lindl, B. J. MacGowan, C. D. Marshall, K. P. McCandless, R. W. McCracken, R. C. Montesanti, E. I. Moses, M. C. Nostrand, J. A. Pryatel, V. S. Roberts, S. B. Rodriguez, A. W. Rowe, R. A. Sacks, J. T. Salmon, M. J. Shaw, S. Sommer, C. J. Stolz, G. L. Tietbohl, C. C. Widmayer, R. Zacharias. Fusion Sci. Technol., 69, 25(2016).
[2] A. Casner, T. Caillaud, S. Darbon, A. Duval, I. Thfouin, J. P. Jadaud, J. P. LeBreton, C. Reverdin, B. Rosse, R. Rosch, N. Blanchot, B. Villette, R. Wrobel, J. L. Miquel. High Energy Dens. Phys., 17, 2(2015).
[3] Z. He, L. Sun. Appl. Opt., 54, 9823(2015).
[4] G.-H. Hu, Q.-H. Wang, G.-H. Zhang. Appl. Opt., 54, 2963(2015).
[5] W. Zhu, L. Chen, Y. Liu, Y. Ma, D. Zheng, Z. Han, J. Li. Appl. Opt., 56, 7435(2017).
[6] F. L. Ravizza, M. C. Nostrand, L. M. Kegelmeyer, R. A. Hawley, M. A. Johnson. Proc. SPIE, 7504, 75041B(2009).
[7] L. M. Kegelmeyer, P. W. Fong, S. M. Glenn, J. A. Liebman. Proc. SPIE, 6696, 66962H(2007).
[8] G. M. Abdulla, L. M. Kegelmeyer, Z. M. Liao, W. Carr. Proc. SPIE, 7842, 78421D(2010).
[9] G. Liu, F. Wei, F. Chen, Z. Peng, J. Tang. Chinese Conference on Pattern Recognition and Computer Vision, 237(2018).
[10] F. Wei, F. Chen, B. Liu, Z. Peng, J. Tang, Q. Zhu, D. Hu, Y. Xiang, N. Liu, Z. Sun, G. Liu. Opt. Eng., 57, 053112(2018).
[11] T. N. Mundhenk, L. M. Kegelmeyer, S. K. Trummer. Proc. SPIE, 10338, 103380H(2017).
[12] O. Ronneberger, P. Fischer, T. Brox. International Conference on Medical Image Computing and Computer-assisted Intervention, 234(2015).
[13] T. Falk, D. Mai, R. Bensch, Ö. Çiçek, A. Abdulkadir, Y. Marrakchi, A. Böhm, J. Deubner, Z. Jäckel, K. Seiwald, A. Dovzhenko, O. Tietz, C. Dal Bosco, S. Walsh, D. Saltukoglu, T. L. Tay, M. Prinz, K. Palme, M. Simons, I. Diester, T. Brox, O. Ronneberger. Nat. Meth., 16, 67(2019).
[14] E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, S. Finkbeiner. Cell, 173, 792(2018).
[15] P. Berger, G. Stein. Mon. Not. R. Astron. Soc., 482, 2861(2019).
[16] X. Dong, C. J. Taylor, T. F. Cootes. European Conference on Computer Vision, 398(2018).
[17] P. V. C. Hough. 2nd International Conference on High-Energy Accelerators and Instrumentation, 554(1959).
[18] R. Szeliski. Computer Vision: Algorithms and Applications (2010).
[19] R. Brunelli. Template Matching Techniques in Computer Vision: Theory and Practice (2009).
[20] OpenCV documentation (2019). https://docs.opencv.org/master/de/da9/tutorial_template_matching.html
[21] F. Chollet. Keras (2015). https://github.com/fchollet/keras
[23] F. Milletari, N. Navab, S. A. Ahmadi. Fourth International Conference on 3D Vision, 565(2016).