• Acta Photonica Sinica
  • Vol. 51, Issue 4, 0410006 (2022)
Tao ZHOU1,2, Yali DONG1,*, Shan LIU1, Huiling LU3, Zongjun MA4, Senbao HOU1, and Shi QIU5
Author Affiliations
  • 1School of Computer Science and Technology, North Minzu University, Yinchuan 750021, China
  • 2The Key Laboratory of Images & Graphics Intelligent Processing of State Ethnic Affairs Commission, North Minzu University, Yinchuan 750021, China
  • 3School of Science, Ningxia Medical University, Yinchuan 750004, China
  • 4Department of Orthopedics, Ningxia Medical University General Hospital, Yinchuan 750004, China
  • 5Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, China
    DOI: 10.3788/gzxb20225104.0410006
    Tao ZHOU, Yali DONG, Shan LIU, Huiling LU, Zongjun MA, Senbao HOU, Shi QIU. Cross-modality Multi-encoder Hybrid Attention U-Net for Lung Tumors Images Segmentation[J]. Acta Photonica Sinica, 2022, 51(4): 0410006
    Fig. 1. MEAU-Net network architecture
    Fig. 2. Spatial attention mechanism
    Fig. 3. Channel attention mechanism
    Fig. 4. Multi-scale feature aggregation block
    Fig. 5. CT, PET/CT and PET image
    Fig. 6. CT image three-dimensional gray value
    Fig. 7. Network segmentation results of different encoders
    Fig. 8. Comparison of segmentation results of different encoders
    Fig. 9. CT image three-dimensional gray value
    Fig. 10. Network segmentation results of different attention mechanisms
    Fig. 11. Comparison of segmentation results of different attention mechanisms
    Fig. 12. CT image three-dimensional gray value
    Fig. 13. Segmentation results of different methods
    Fig. 14. Comparison of segmentation results of different methods

    Algorithm 1: Spatial attention mechanism

    Input: the feature maps of the two branches (PET/CT and CT), χ_i^l, i = 1, 2
    Output: SA^l

    1: h_1^l = concat(χ_1^l, χ_2^l)  /* concatenate the PET/CT and CT feature maps */
    2: h_mean = AvgPool(h_1^l)  /* average pooling along the channel dimension */
    3: h_max = MaxPool(h_1^l)  /* max pooling along the channel dimension */
    4: f = concat(h_mean, h_max)  /* concatenate the avg-pooled and max-pooled maps */
    5: β = Conv_{3×3}(f)  /* 3×3 convolution */
    6: z = σ(β)  /* sigmoid; the attention map has shape 1×H×W */
    7: SA^l = z × h_1^l + h_1^l  /* weight the original features by the attention map, then add them back */
    End
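The steps of Algorithm 1 can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation: the pooling in steps 2–3 is taken channel-wise (the usual convention for spatial attention), and the 3×3 convolution weights `w` stand in for a single learned filter.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv3x3(x, w):
    """Naive 3x3 'same' convolution over a (C, H, W) input, producing a
    single-channel (1, H, W) output. w has shape (C, 3, 3)."""
    c, h, wd = x.shape
    padded = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((1, h, wd))
    for i in range(h):
        for j in range(wd):
            out[0, i, j] = np.sum(padded[:, i:i + 3, j:j + 3] * w)
    return out

def spatial_attention(x_petct, x_ct, w):
    """Sketch of Algorithm 1; inputs are (C, H, W) feature maps."""
    h1 = np.concatenate([x_petct, x_ct], axis=0)    # step 1: concat along channels
    h_mean = h1.mean(axis=0, keepdims=True)         # step 2: channel-wise avg pool -> (1, H, W)
    h_max = h1.max(axis=0, keepdims=True)           # step 3: channel-wise max pool -> (1, H, W)
    f = np.concatenate([h_mean, h_max], axis=0)     # step 4: concat -> (2, H, W)
    beta = conv3x3(f, w)                            # step 5: 3x3 convolution -> (1, H, W)
    z = sigmoid(beta)                               # step 6: sigmoid attention map
    return z * h1 + h1                              # step 7: reweight, then residual add
```

Because the attention map `z` broadcasts over channels, the residual term in step 7 guarantees the fused features are never suppressed to zero, only re-weighted upward.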


    Algorithm 2: Channel attention mechanism

    Input: the feature maps of the three branches (PET/CT, CT and PET), χ_i^l, i = 1, 2, 3
    Output: CA^l(F)

    1: χ_hybrid^l = χ_2^l + χ_3^l  /* add the CT and PET feature maps element-wise */
    2: χ^l = concat(χ_hybrid^l, χ_1^l)  /* concatenate χ_hybrid^l with the PET/CT feature map */
    3: h^l, g^l = AvgPool(χ^l), MaxPool(χ^l)  /* global average pooling and max pooling respectively */
    4: α = σ(MLP(h^l) + MLP(g^l))  /* shared MLP on h^l and g^l, then sigmoid */
    5: CA^l(F) = α × χ^l + χ^l  /* weight the features by the channel attention, then add them back */
    End
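Algorithm 2 can likewise be sketched in NumPy. This is an illustrative sketch under assumptions not fixed by the listing: the shared MLP is taken as a two-layer perceptron with a ReLU hidden layer, and `w1`, `w2` are its assumed weight matrices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mlp(v, w1, w2):
    """Shared two-layer perceptron on a per-channel descriptor v of shape (2C,).
    w1: (hidden, 2C), w2: (2C, hidden); ReLU hidden activation."""
    return w2 @ np.maximum(w1 @ v, 0.0)

def channel_attention(x_petct, x_ct, x_pet, w1, w2):
    """Sketch of Algorithm 2; inputs are (C, H, W) feature maps."""
    x_hybrid = x_ct + x_pet                              # step 1: element-wise add CT and PET
    x = np.concatenate([x_hybrid, x_petct], axis=0)      # step 2: concat with PET/CT -> (2C, H, W)
    h = x.mean(axis=(1, 2))                              # step 3: global avg pool -> (2C,)
    g = x.max(axis=(1, 2))                               #         global max pool -> (2C,)
    alpha = sigmoid(mlp(h, w1, w2) + mlp(g, w1, w2))     # step 4: shared MLP, then sigmoid
    return alpha[:, None, None] * x + x                  # step 5: reweight, then residual add
```

The same MLP weights are applied to both pooled descriptors before the sigmoid, mirroring the `MLP(h^l) + MLP(g^l)` term in step 4, and the channel weights `α` broadcast over the spatial dimensions.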

    | Architecture | DSC/% | Recall/% | VOE/% | RVD/% |
    | --- | --- | --- | --- | --- |
    | U-Net[8] | 95.16 | 94.99 | 92.53 | 92.74 |
    | Y-Net[24] | 95.13 | 95.0 | 92.52 | 92.7 |
    | MEU-Net | 95.20 | 95.13 | 92.59 | 92.76 |

    Table 1. Segmentation results of multi-encoders
    | Architecture | DSC/% | Recall/% | VOE/% | RVD/% |
    | --- | --- | --- | --- | --- |
    | MEU-Net | 95.20 | 95.13 | 92.59 | 92.76 |
    | MESAU-Net | 95.43 | 96.26 | 92.76 | 92.7 |
    | MECAU-Net | 95.68 | 96.33 | 92.65 | 92.8 |
    | MEAU-Net# | 96.0 | 96.5 | 92.7 | 92.68 |
    | MEAU-Net | 96.4 | 97.27 | 93.0 | 93.06 |

    Table 2. Segmentation results of different attention mechanisms
    | Architecture | DSC/% | Recall/% | VOE/% | RVD/% |
    | --- | --- | --- | --- | --- |
    | SegNet[27] | 94.82 | 95.11 | 91.81 | 92.04 |
    | Wnet[28] | 94.73 | 95.98 | 92.08 | 92.17 |
    | Attention U-Net[29] | 95.69 | 96.17 | 92.64 | 92.73 |
    | Ours | 96.4 | 97.27 | 93.0 | 93.06 |

    Table 3. Segmentation results of MEAU-Net and other networks