• Chinese Journal of Lasers
  • Vol. 50, Issue 8, 0802105 (2023)
Bing Chen1, Sheng He1, Jian Liu1,*, Shengfeng Chen1,**, and Enhui Lu2
Author Affiliations
  • 1State Key Laboratory of Advanced Design and Manufacture for Vehicle Body, Hunan University, Changsha 410082, Hunan, China
  • 2School of Mechanical Engineering, Yangzhou University, Yangzhou 225009, Jiangsu, China
    DOI: 10.3788/CJL221398
    Bing Chen, Sheng He, Jian Liu, Shengfeng Chen, Enhui Lu. Weld Structured Light Image Segmentation Based on Lightweight DeepLab v3+ Network[J]. Chinese Journal of Lasers, 2023, 50(8): 0802105

    Abstract

    Objective

    Seam tracking technology based on laser structured light vision sensing transforms weld positioning into the positioning of structured light stripe feature points; it has strong universality and robustness and is regarded as the most promising seam tracking solution for engineering implementation. However, arc light, spatter, and fumes generated during real-time seam tracking can severely contaminate the structured light image, degrading the accuracy and robustness of weld positioning. In addition, the welding site typically provides limited computing power, and the real-time performance of weld positioning directly affects welding efficiency and quality. Accurate and efficient filtering of noise in structured light images can therefore improve the accuracy and efficiency of weld feature positioning, which is valuable for improving welding quality. This study proposes a structured light image segmentation method based on a lightweight DeepLab v3+ semantic segmentation network: by segmenting the laser structured light stripes, it accurately and efficiently filters out arc light, spatter, and fume noise from weld structured light images.

    Methods

    A weld structured light image segmentation method based on a lightweight DeepLab v3+ semantic segmentation network was proposed in this study to filter the noise in weld structured light images. First, the image characteristics of the weld structured light dataset were analyzed. Most positions of the structured light stripes are easily distinguished from the image background, except in regions where the stripes and noise are significantly aliased; a shallow network therefore provides sufficient expressiveness for this task. Accordingly, the ResNet-18 network was adopted to replace the original backbone network, which improved the inference speed of the DeepLab v3+ semantic segmentation network. Second, the ratio of pixels occupied by the structured light stripes to those of the image background in the dataset is severely imbalanced. With the original loss function, the model tends to predict stripe pixels as background, which is unconducive to weld feature point positioning. Therefore, a weighted cross-entropy loss function was designed with the complement of the pixel occupancy as the weight, improving the segmentation accuracy of the structured light stripes.
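As an illustration of the weighting scheme described above, the NumPy sketch below derives per-class weights as the complement of each class's pixel occupancy and applies them in a weighted cross-entropy. The function names, the two-class setup, and the toy mask are illustrative assumptions, not the paper's code:

```python
import numpy as np

def class_weights_from_occupancy(labels, num_classes=2):
    """Per-class weight = complement of that class's pixel occupancy.

    The rare stripe class receives a large weight and the dominant
    background a small one, countering the class imbalance.
    """
    counts = np.bincount(labels.ravel(), minlength=num_classes)
    occupancy = counts / counts.sum()   # fraction of pixels per class
    return 1.0 - occupancy              # complement as the weight

def weighted_cross_entropy(probs, labels, weights):
    """Mean per-pixel weighted cross-entropy.

    probs: (H, W, C) softmax outputs; labels: (H, W) integer classes.
    """
    H, W = labels.shape
    p_true = probs[np.arange(H)[:, None], np.arange(W)[None, :], labels]
    return float(np.mean(-weights[labels] * np.log(p_true + 1e-12)))

# Toy 4x4 mask: a single stripe pixel (class 1) among background (class 0).
labels = np.zeros((4, 4), dtype=int)
labels[1, 2] = 1
weights = class_weights_from_occupancy(labels)   # [1/16, 15/16]

# A prediction that assigns 0.9 probability to the true class everywhere.
probs = np.empty((4, 4, 2))
probs[..., 1] = np.where(labels == 1, 0.9, 0.1)
probs[..., 0] = 1.0 - probs[..., 1]
loss = weighted_cross_entropy(probs, labels, weights)
```

Because the stripe occupies only 1 of 16 pixels here, its weight (15/16) dominates the background weight (1/16), so missed stripe pixels are penalized far more heavily than misclassified background.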

    Results and Discussions

    The segmentation results of the structured light images show that the proposed method can precisely segment the structured light stripes under noise interference from sources such as arc light and spatter (Fig. 7). The backbone network comparison shows that both the segmentation performance and the efficiency of the DeepLab v3+ semantic segmentation network improve when ResNet-18 is used as the backbone (Table 2). The loss function comparison shows that the weighted cross-entropy loss function significantly improves the pixel accuracy AL of structured light stripe segmentation. The model attains the highest average score when the ratio of the structured light stripe loss-gain coefficient α1 to the image background loss-gain coefficient α0 is 1/15, indicating that the model balances false and missed detections of the structured light stripes and exhibits optimal overall performance (Table 3). Finally, the proposed method achieves an average single-image inference time of 15.9 ms, a pixel accuracy of 96.47%, and an average intersection-over-union of 89.04% for the structured light stripes, outperforming the comparison methods in both segmentation performance and efficiency (Table 4). In regions with severe aliasing between structured light stripes and noise, the proposed method segments the stripe boundaries better than the comparison methods, demonstrating the effectiveness and superiority of the proposed lightweight DeepLab v3+ semantic segmentation network for weld structured light image segmentation (Fig. 8).
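For reference, per-class pixel accuracy and intersection-over-union figures such as those quoted above can be computed from confusion counts. The sketch below uses the standard definitions (per-class accuracy as recall of the stripe class); the exact metric definitions in the paper may differ in detail, and the function name and toy masks are illustrative:

```python
import numpy as np

def stripe_metrics(pred, gt, cls=1):
    """Per-class pixel accuracy (recall) and IoU for class `cls`.

    pred, gt: (H, W) integer label maps; cls=1 is the laser stripe.
    """
    tp = int(np.sum((pred == cls) & (gt == cls)))  # stripe pixels hit
    fp = int(np.sum((pred == cls) & (gt != cls)))  # background called stripe
    fn = int(np.sum((pred != cls) & (gt == cls)))  # stripe pixels missed
    pa = tp / max(tp + fn, 1)        # fraction of true stripe recovered
    iou = tp / max(tp + fp + fn, 1)  # intersection over union
    return pa, iou

# Ground truth: a horizontal 3-pixel stripe; prediction misses one pixel.
gt = np.zeros((3, 3), dtype=int)
gt[1, :] = 1
pred = gt.copy()
pred[1, 0] = 0
pa, iou = stripe_metrics(pred, gt)   # pa = 2/3, iou = 2/3
```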

    Conclusions

    This study proposes a weld structured light image segmentation method based on a lightweight DeepLab v3+ semantic segmentation network to filter out noise from sources such as arc light, spatter, and fumes in weld structured light images and thereby improve the accuracy and robustness of seam tracking. The proposed network achieves noise filtering by segmenting the foreground of the weld structured light image. First, the shallow ResNet-18 network replaces the original backbone network to improve the efficiency of the DeepLab v3+ semantic segmentation network. Next, a weighted cross-entropy loss function is designed with the complement of the pixel occupancy as the weight, improving the network's pixel accuracy for structured light stripe segmentation and reducing the missed detection rate of the stripes. The experimental results show that: (1) using the shallow ResNet-18 network instead of the original backbone improves the inference speed of the DeepLab v3+ semantic segmentation network in structured light image segmentation without degrading the segmentation performance; (2) the weighted cross-entropy loss function effectively improves the pixel accuracy and intersection-over-union of the structured light stripe segmentation model; (3) the proposed lightweight DeepLab v3+ semantic segmentation network exhibits better segmentation performance and higher efficiency in weld structured light image segmentation than classical semantic segmentation models, indicating the effectiveness and superiority of the proposed method.
