• Acta Photonica Sinica
  • Vol. 51, Issue 3, 0310001 (2022)
Hong HUANG1,*, Tao WANG1, Yuan LI1, Fanlin ZHOU2, and Yu LI2
Author Affiliations
  • 1Key Laboratory of Optoelectronic Technique System of the Ministry of Education, Chongqing University, Chongqing 400044, China
  • 2Department of Pathology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400030, China
    DOI: 10.3788/gzxb20225103.0310001
    Hong HUANG, Tao WANG, Yuan LI, Fanlin ZHOU, Yu LI. Cancer Pathological Segmentation Network Based on Depth Feature Fusion[J]. Acta Photonica Sinica, 2022, 51(3): 0310001

    Abstract

    Image segmentation is used more and more widely in the auxiliary diagnosis and treatment of cancer, but mature, advanced segmentation methods have mainly been developed for natural images. Compared with natural images, pathological images have far more complex content and vary greatly from one image to another. At the same time, cancerous and normal cells are intermixed in pathological images and strongly resemble each other. Because of these characteristics, many excellent natural-image segmentation algorithms cannot be applied directly to pathological images with good performance, which keeps artificial intelligence algorithms from being adopted quickly in medical auxiliary diagnosis and treatment. More accurate segmentation of tumor pathological images is therefore of great significance for clinical cancer diagnosis.

    To address the varied slice staining, large resolution differences, and complex content of medical pathological images, an improved hierarchical feature fusion segmentation method is proposed. The method comprises four parts: an encoder, a decoder, a channel attention module, and a loss function. U-Net is chosen as the basic network structure, and an EfficientNet-B4 network replaces the original U-Net encoder for feature extraction, with features taken from different stages serving as the outputs of the original encoder. The EfficientNet-B4 weights are transferred from natural images to pathological images, which effectively improves the network's ability to extract useful features. In the decoder, the feature fusion scheme is improved so that each decoding layer fuses the hierarchical features of all deeper layers; even the shallowest layer therefore still contains the deepest global features. From the deepest to the shallowest layer, this gradually strengthens the role of global features in the segmentation prediction and weakens the role of U-Net's detail features, which enhances the network's ability to localize the main lesion area and its adaptability to images of different resolutions.

    In addition, an improved channel attention module, better suited to pathological images than earlier designs, is used in each decoding layer. Global max pooling is added for extracting channel features, so more feature information is retained; this strengthens the learning ability of the attention module and allows the attention mechanism to filter the fused features more effectively, highlighting informative features while suppressing redundant ones. Finally, to make the deep semantic information more discriminative, the fused features at every depth are used to produce prediction outputs, from which a multi-loss function is constructed. During training, the model makes a prediction at each decoding layer and computes the corresponding loss for back-propagation. In this way, more effective deep semantic features are obtained for lesion localization, which enhances the model's ability to distinguish normal from cancerous tissue and improves its ability to obtain and use global semantic features.
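    The following minimal PyTorch sketch illustrates how the improvements described above could fit together. It is an illustrative reconstruction, not the authors' code: the module names (ChannelAttention, FusionDecoderLayer) and all hyperparameters are assumptions, and the EfficientNet-B4 encoder is indicated only in a comment.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Encoder (assumption): a pretrained EfficientNet-B4 used as a multi-scale
    # feature extractor in place of U-Net's original encoder, e.g. via timm:
    #   encoder = timm.create_model('efficientnet_b4', features_only=True, pretrained=True)

    class ChannelAttention(nn.Module):
        """Channel attention using both global average and global max pooling."""
        def __init__(self, channels, reduction=16):
            super().__init__()
            hidden = max(channels // reduction, 1)
            self.mlp = nn.Sequential(
                nn.Linear(channels, hidden),
                nn.ReLU(inplace=True),
                nn.Linear(hidden, channels),
            )

        def forward(self, x):
            b, c, _, _ = x.shape
            avg = self.mlp(F.adaptive_avg_pool2d(x, 1).view(b, c))
            mx = self.mlp(F.adaptive_max_pool2d(x, 1).view(b, c))
            w = torch.sigmoid(avg + mx).view(b, c, 1, 1)
            return x * w  # highlight informative channels, suppress redundant ones

    class FusionDecoderLayer(nn.Module):
        """One decoding layer: fuses the encoder skip with ALL deeper features."""
        def __init__(self, in_channels, out_channels, num_classes):
            super().__init__()
            # in_channels must equal skip channels + sum of deeper feature channels
            self.conv = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, 3, padding=1),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=True),
            )
            self.attn = ChannelAttention(out_channels)
            self.head = nn.Conv2d(out_channels, num_classes, 1)  # deep-supervision head

        def forward(self, skip, deeper_feats):
            # Upsample every deeper feature map to this layer's spatial size, so
            # even the shallowest layer still receives the deepest global features.
            size = skip.shape[-2:]
            ups = [F.interpolate(f, size=size, mode='bilinear', align_corners=False)
                   for f in deeper_feats]
            fused = self.attn(self.conv(torch.cat([skip] + ups, dim=1)))
            return fused, self.head(fused)  # per-layer logits feed the multi-loss

    Under this reading, training would upsample each layer's logits to the label resolution and sum the per-layer losses into the multi-loss, while inference would keep only the shallowest, full-resolution prediction.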
    Experiments are carried out on the BOT dataset and the seed dataset. The Dice coefficients of the method on the two datasets are 77.99% and 82.94%, and the accuracy scores are 88.52% and 87.42%, respectively. Compared with U-Net and DeepLabv3+, the method effectively improves segmentation precision and accuracy on tumor lesion tissue, achieves more accurate tumor localization and segmentation in tumor pathological images, provides more effective auxiliary support for doctors' clinical diagnosis, and improves the efficiency and accuracy of diagnosis and treatment. Ablation experiments on the two datasets also examine the main improvements; the results show that each improvement is effective and that, together, they promote the segmentation performance of HU-Net on pathological images from different aspects.
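    For reference, the Dice score quoted above is the standard region-overlap measure, Dice = 2|X ∩ Y| / (|X| + |Y|); a minimal computation for binary masks might look like the following (illustrative only, not the authors' evaluation code):

    import torch

    def dice_coefficient(pred, target, eps=1e-7):
        # Dice = 2|X ∩ Y| / (|X| + |Y|) over flattened binary masks;
        # eps avoids division by zero when both masks are empty.
        pred = pred.float().flatten()
        target = target.float().flatten()
        inter = (pred * target).sum()
        return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

    # Identical masks give a score of 1.0; disjoint masks give a score near 0.0.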