• Acta Optica Sinica
  • Vol. 40, Issue 12, 1210001 (2020)
Wenxuan Xue1, Jianxia Liu1,*, Ran Liu1, and Xiaohui Yuan2
Author Affiliations
  • 1College of Information and Computer, Taiyuan University of Technology, Jinzhong, Shanxi 030600, China
  • 2Computer Science Department, University of North Texas, Denton, Texas 76201, United States
    DOI: 10.3788/AOS202040.1210001
    Wenxuan Xue, Jianxia Liu, Ran Liu, Xiaohui Yuan. An Improved Method for Retinal Vascular Segmentation in U-Net[J]. Acta Optica Sinica, 2020, 40(12): 1210001
    Fig. 1. Schematic of R2CU
    Fig. 2. Attention module schematic
    Fig. 3. AttR2U-Net network structure diagram
    Fig. 4. Sample images from the database. (a) Original image; (b) manual segmentation by the first observer; (c) manual segmentation by the second observer; (d) mask
    Fig. 5. Color image sub-channel maps. (a) RGB original image; (b) red channel; (c) green channel; (d) blue channel
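    For reference, the channel separation illustrated in Fig. 5 can be reproduced with a minimal sketch such as the one below; the file path is a placeholder, and the use of NumPy/Pillow is an assumption rather than part of the paper's pipeline.

```python
import numpy as np
from PIL import Image

# Load an RGB fundus image (the path is a placeholder, not a file from the paper).
rgb = np.asarray(Image.open("fundus_image.png").convert("RGB"))

# Separate the three colour channels shown in Fig. 5(b)-(d).
red, green, blue = rgb[..., 0], rgb[..., 1], rgb[..., 2]
```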
    Fig. 6. Local sample block. (a) Training local sample block; (b) ground truth local sample block
    Fig. 7. DRIVE database segmentation results. (a) Original images; (b) ground truth images; (c) segmentation result images
    Fig. 8. STARE database segmentation results. (a) Original images; (b) ground truth images; (c) segmentation result images
    Fig. 9. Segmentation results. (a) img255 image; (b) ground truth image; (c) U-Net segmentation result; (d) AttR2U-Net segmentation result; (e) img255 local image; (f) local ground truth image; (g) U-Net local segmentation result; (h) AttR2U-Net local segmentation result
    Assessment index | Formula
    AC | $\mathrm{AC} = (p_{\mathrm{TN}} + p_{\mathrm{TP}}) / (p_{\mathrm{TP}} + p_{\mathrm{FP}} + p_{\mathrm{TN}} + p_{\mathrm{FN}})$
    SE | $\mathrm{SE} = p_{\mathrm{TP}} / (p_{\mathrm{TP}} + p_{\mathrm{FN}})$
    SP | $\mathrm{SP} = p_{\mathrm{TN}} / (p_{\mathrm{TN}} + p_{\mathrm{FP}})$
    F1-score | $F_1 = 2 p_{\mathrm{TP}} / (2 p_{\mathrm{TP}} + p_{\mathrm{FN}} + p_{\mathrm{FP}})$
    Table 1. Standard formulas for evaluation parameters
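    For reference, the four indices in Table 1 can be computed from binary prediction and ground-truth masks as in the minimal sketch below; the function name and array interface are illustrative assumptions, not code released with the paper.

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Compute AC, SE, SP, and F1 from binary masks using the
    pixel counts p_TP, p_FP, p_TN, p_FN defined in Table 1."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    p_tp = np.sum(pred & gt)      # vessel pixels correctly labelled as vessel
    p_tn = np.sum(~pred & ~gt)    # background pixels correctly labelled as background
    p_fp = np.sum(pred & ~gt)     # background pixels wrongly labelled as vessel
    p_fn = np.sum(~pred & gt)     # vessel pixels wrongly labelled as background
    total = p_tp + p_fp + p_tn + p_fn
    ac = (p_tn + p_tp) / total
    se = p_tp / (p_tp + p_fn)
    sp = p_tn / (p_tn + p_fp)
    f1 = 2 * p_tp / (2 * p_tp + p_fn + p_fp)
    return ac, se, sp, f1
```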
    Database | Method | AC | SE | SP | AUC
    DRIVE | 2nd human observer | 0.9472 | 0.7760 | 0.9724 | --
    DRIVE | Our method | 0.9689 | 0.8028 | 0.9865 | 0.9841
    STARE | 2nd human observer | 0.9349 | 0.8968 | 0.9384 | --
    STARE | Our method | 0.9796 | 0.8227 | 0.9926 | 0.9865
    Table 2. AttR2U-Net segmentation performance results
    Method | AC | SE | SP | F1 | AUC
    N1 | 0.9533 | 0.7722 | 0.9803 | 0.8200 | 0.9782
    N2 | 0.9563 | 0.7829 | 0.9815 | 0.8197 | 0.9776
    N3 | 0.9587 | 0.7846 | 0.9813 | 0.8240 | 0.9819
    N4 | 0.9689 | 0.8028 | 0.9865 | 0.8317 | 0.9841
    Table 3. Performance comparison of different algorithms based on U-Net networks
    Type | Method | Year | AC | SE | SP | F1-score | AUC
    Unsupervised method | Lam et al.[23] | 2010 | 0.9472 | -- | -- | -- | 0.9614
    Unsupervised method | Fraz et al.[24] | 2011 | 0.9430 | 0.7152 | 0.9759 | -- | --
    Unsupervised method | You et al.[25] | 2011 | 0.9434 | 0.7410 | 0.9751 | -- | --
    Unsupervised method | Azzopardi et al.[26] | 2015 | 0.9442 | 0.7655 | 0.9704 | -- | 0.9614
    Supervised method | Marín et al.[27] | 2011 | 0.9452 | 0.7067 | 0.9801 | -- | 0.9558
    Supervised method | Fraz et al.[11] | 2012 | 0.9480 | 0.7406 | 0.9807 | -- | 0.9747
    Supervised method | Roychowdhury et al.[28] | 2016 | 0.9520 | 0.7250 | 0.9830 | -- | 0.9620
    Supervised method | Liskowski et al.[29] | 2016 | 0.9495 | 0.7763 | 0.9768 | -- | 0.9720
    Supervised method | Li et al.[30] | 2016 | 0.9527 | 0.7569 | 0.9816 | -- | 0.9738
    Supervised method | U-Net[14] | 2018 | 0.9531 | 0.7537 | 0.9820 | 0.8142 | 0.9755
    Supervised method | AttR2U-Net (ours) | 2019 | 0.9689 | 0.8028 | 0.9865 | 0.8317 | 0.9841
    Table 4. Performance indicators of different algorithms in the DRIVE database
    Type | Method | Year | AC | SE | SP | F1-score | AUC
    Unsupervised method | Lam et al.[23] | 2010 | 0.9567 | -- | -- | -- | 0.9739
    Unsupervised method | Fraz et al.[24] | 2011 | 0.9442 | 0.7311 | 0.9680 | -- | --
    Unsupervised method | You et al.[25] | 2011 | 0.9497 | 0.7260 | 0.9756 | -- | --
    Unsupervised method | Azzopardi et al.[26] | 2015 | 0.9563 | 0.7716 | 0.9701 | -- | 0.9497
    Supervised method | Marín et al.[27] | 2011 | 0.9520 | 0.6940 | 0.9770 | -- | 0.9820
    Supervised method | Fraz et al.[11] | 2012 | 0.9534 | 0.7548 | 0.9763 | -- | 0.9768
    Supervised method | Roychowdhury et al.[28] | 2016 | 0.9510 | 0.7720 | 0.9730 | -- | 0.9690
    Supervised method | Liskowski et al.[29] | 2016 | 0.9566 | 0.7867 | 0.9754 | -- | 0.9785
    Supervised method | Li et al.[30] | 2016 | 0.9628 | 0.7726 | 0.9844 | -- | 0.9879
    Supervised method | U-Net[14] | 2018 | 0.9690 | 0.8270 | 0.9842 | 0.8373 | 0.9898
    Supervised method | AttR2U-Net (ours) | 2019 | 0.9796 | 0.8227 | 0.9926 | 0.8604 | 0.9865
    Table 5. Performance indicators of different algorithms in the STARE database