• Laser & Optoelectronics Progress
  • Vol. 59, Issue 18, 1810002 (2022)
Feng Zhao1, Beibei Zhong1,*, and Hanqiang Liu2
Author Affiliations
• 1School of Communication and Information Engineering & School of Artificial Intelligence, Xi’an University of Posts and Telecommunications, Xi’an 710121, Shaanxi, China
• 2School of Computer Science, Shaanxi Normal University, Xi’an 710119, Shaanxi, China
DOI: 10.3788/LOP202259.1810002
Feng Zhao, Beibei Zhong, Hanqiang Liu. Multi-Scale Residual U-Net Fundus Blood Vessel Segmentation Based on Attention Mechanism[J]. Laser & Optoelectronics Progress, 2022, 59(18): 1810002.
Fig. 1. Network structure comparison. (a) U-Net; (b) multi-scale residual U-shaped network based on attention mechanism
Fig. 2. Improved residual block structure
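Fig. 2 defines the paper's improved residual block. As a point of reference only, the Python (PyTorch) sketch below shows a generic two-layer residual block with a 1×1 shortcut projection; the layer composition here is a common assumption for illustration, not the exact structure of Fig. 2.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Generic residual block sketch; the paper's improved variant is in Fig. 2.

    Two 3x3 convolutions with batch normalization, plus a 1x1 projection
    on the shortcut when the channel count changes. These are standard
    choices assumed for illustration only.
    """
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the skip connection matches the output shape
        self.skip = (nn.Identity() if in_ch == out_ch
                     else nn.Conv2d(in_ch, out_ch, 1, bias=False))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))
```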
Fig. 3. Multi-scale convolution module
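As a rough illustration of what a multi-scale convolution module like Fig. 3 computes, the sketch below runs parallel branches with different kernel sizes over the same input and fuses them with a 1×1 convolution. The specific kernel sizes (1, 3, 5) are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MultiScaleConv(nn.Module):
    """Multi-scale convolution sketch (cf. Fig. 3).

    Parallel branches with 1x1, 3x3, and 5x5 kernels can respond to
    vessels of different calibres; kernel sizes and the 1x1 fusion
    are illustrative assumptions.
    """
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # padding = k // 2 keeps the spatial size unchanged in each branch
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (1, 3, 5)
        )
        # Fuse the concatenated branch outputs back to out_ch channels
        self.fuse = nn.Conv2d(out_ch * 3, out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```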
Fig. 4. Parallel dilated convolution module
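Fig. 4's parallel dilated convolution enlarges the receptive field without extra downsampling. Unlike the varying kernel sizes above, the branches here keep a 3×3 kernel and vary the dilation rate; the rates (1, 2, 4) and the 1×1 fusion are illustrative assumptions rather than the paper's exact settings.

```python
import torch
import torch.nn as nn

class ParallelDilatedConv(nn.Module):
    """Parallel dilated convolution sketch (cf. Fig. 4).

    Each branch applies a 3x3 convolution with a different dilation rate,
    so branches see contexts of different sizes at the same resolution.
    """
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        # For a 3x3 kernel, padding = dilation keeps the spatial size fixed
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r)
            for r in rates
        )
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```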
Fig. 5. Multi-scale attention module
Fig. 6. Hybrid attention module
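The hybrid attention module combines channel and spatial attention, and Table 4 below indicates that the channel-then-spatial ordering performs slightly better. The sketch assumes an SE-style channel branch and a CBAM-style 7×7 spatial branch, which may differ from the paper's exact design.

```python
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    """Channel-then-spatial attention sketch (cf. Fig. 6 and Table 4).

    The SE-style channel branch and the 7x7 spatial branch are common
    CBAM-like choices, assumed here for illustration.
    """
    def __init__(self, ch, reduction=16):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1),
            nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, 7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel(x)                      # channel attention first
        s = torch.cat([x.mean(1, keepdim=True),      # average over channels
                       x.amax(1, keepdim=True)], 1)  # max over channels
        return x * self.spatial(s)                   # then spatial attention
```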
Fig. 7. Image preprocessing. (a) Original image of DRIVE dataset; (b) pre-processed image
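The exact preprocessing pipeline behind Fig. 7(b) is described in the paper; the sketch below assumes a combination that is typical for fundus images (grayscale conversion, CLAHE contrast enhancement, gamma correction) and is offered only as a plausible reconstruction.

```python
import cv2
import numpy as np

def preprocess_fundus(bgr_image):
    """Hedged preprocessing sketch for a DRIVE fundus image (cf. Fig. 7).

    Grayscale conversion, CLAHE, and gamma correction are assumed here;
    the paper's actual pipeline may differ.
    """
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # Contrast-limited adaptive histogram equalization boosts local contrast
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)
    # Gamma correction (gamma = 1.2 assumed) brightens dark vessel regions
    gamma = 1.2
    table = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255).astype(np.uint8)
    return cv2.LUT(enhanced, table)
```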
Fig. 8. Retinal vessel segmentation results of different algorithms. (a) Original images; (b) ground truth; (c) proposed algorithm; (d) Residual U-Net[12]; (e) Recurrent U-Net[12]; (f) R2U-Net[12]; (g) algorithm in reference [28]
Fig. 9. Detail comparison of segmentation results. (a) Original image; (b) details of original images; (c) details of ground truth; (d) details of proposed algorithm; (e) details of Residual U-Net[12]; (f) details of Recurrent U-Net[12]; (g) details of R2U-Net[12]; (h) details of algorithm in reference [28]
Fig. 10. Verification of the role of a single module. (a) Original images; (b) ground truth; (c) M1; (d) M2; (e) M3; (f) M4
Index     Formula
SEN       $R_{\mathrm{SEN}} = \dfrac{N_{\mathrm{TP}}}{N_{\mathrm{TP}} + N_{\mathrm{FN}}}$
SPE       $R_{\mathrm{SPE}} = \dfrac{N_{\mathrm{TN}}}{N_{\mathrm{TN}} + N_{\mathrm{FP}}}$
F1-score  $S_{\mathrm{F1}} = \dfrac{2N_{\mathrm{TP}}}{2N_{\mathrm{TP}} + N_{\mathrm{FN}} + N_{\mathrm{FP}}}$
ACC       $R_{\mathrm{ACC}} = \dfrac{N_{\mathrm{TN}} + N_{\mathrm{TP}}}{N_{\mathrm{TP}} + N_{\mathrm{FP}} + N_{\mathrm{TN}} + N_{\mathrm{FN}}}$
Table 1. Formulas of the evaluation indices
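The four indices in Table 1 follow directly from the confusion-matrix counts, as the short Python check below shows (AUC is omitted because it requires the probability map rather than the thresholded mask).

```python
import numpy as np

def vessel_metrics(pred, gt):
    """Compute the Table 1 indices from binary masks of equal shape."""
    pred = pred.astype(bool).ravel()
    gt = gt.astype(bool).ravel()
    tp = np.sum(pred & gt)     # N_TP: vessel pixels correctly detected
    tn = np.sum(~pred & ~gt)   # N_TN: background correctly rejected
    fp = np.sum(pred & ~gt)    # N_FP: background marked as vessel
    fn = np.sum(~pred & gt)    # N_FN: vessel pixels missed
    sen = tp / (tp + fn)                    # sensitivity R_SEN
    spe = tn / (tn + fp)                    # specificity R_SPE
    f1 = 2 * tp / (2 * tp + fn + fp)        # F1-score S_F1
    acc = (tn + tp) / (tp + fp + tn + fn)   # accuracy R_ACC
    return sen, spe, f1, acc
```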
Method  SEN     SPE     F1      ACC     AUC
M1      0.7736  0.9822  0.7901  0.9640  0.9754
M2      0.7638  0.9891  0.8141  0.9694  0.9849
M3      0.7830  0.9870  0.8158  0.9690  0.9845
M4      0.7757  0.9873  0.8114  0.9684  0.9829
Table 2. Verification experiment of the role of a single module
Method           SEN     SPE     F1      ACC     AUC
N1               0.7736  0.9822  0.7901  0.9640  0.9754
N2               0.7638  0.9891  0.8141  0.9694  0.9849
N3               0.8059  0.9861  0.8265  0.9703  0.9865
N4               0.8188  0.9863  0.8285  0.9704  0.9869
N5               0.8017  0.9868  0.8268  0.9706  0.9872
Proposed method  0.8267  0.9851  0.8308  0.9707  0.9876
Table 3. Multi-module cumulative effect verification experiment
Model                      SEN     SPE     F1      ACC     AUC
Spatial-channel attention  0.8245  0.9844  0.8299  0.9704  0.9871
Channel-spatial attention  0.8267  0.9851  0.8308  0.9707  0.9876
Table 4. Validation experiment of hybrid attention module
Type                 Method               Year  SEN     SPE     F1      ACC     AUC
Unsupervised method  Reference [6]        2010  0.7120  0.9724  --      0.9382  --
                     Reference [5]        2014  0.6280  0.9840  --      0.9380  --
                     Reference [7]        2019  0.7030  0.9850  --      0.9510  --
Supervised method    Residual U-Net[12]   2018  0.7726  0.9820  0.8149  0.9553  0.9779
                     Recurrent U-Net[12]  2018  0.7751  0.9816  0.8155  0.9556  0.9782
                     R2U-Net[12]          2018  0.7792  0.9813  0.8171  0.9556  0.9784
                     Reference [28]       2018  0.7730  0.9823  0.8148  0.9676  0.9725
                     Reference [13]       2018  0.7844  0.9819  --      0.9567  0.9807
                     Reference [14]       2019  0.8038  0.9802  --      0.9578  0.9821
                     Reference [15]       2019  0.8100  0.9848  --      0.9692  0.9856
                     Reference [16]       2020  0.8062  0.9769  --      0.9547  0.9739
                     Reference [32]       2020  0.7651  0.9818  --      0.9547  0.9750
                     Proposed method      2021  0.8267  0.9851  0.8308  0.9707  0.9876
Table 5. DRIVE dataset fundus blood vessel segmentation results
Type                 Method               Year  SEN     SPE     F1      ACC     AUC
Unsupervised method  Reference [33]       2015  0.7201  0.9824  --      0.9530  0.9532
                     Reference [34]       2018  0.7555  0.9807  --      0.9521  --
Supervised method    Residual U-Net[12]   2018  0.7726  0.9820  0.7800  0.9553  0.9779
                     Recurrent U-Net[12]  2018  0.7459  0.9836  0.7810  0.9622  0.9803
                     R2U-Net[12]          2018  0.7756  0.9820  0.7928  0.9634  0.9815
                     Reference [28]       2018  0.7820  0.9850  0.8012  0.9680  0.9819
                     Reference [13]       2018  0.7538  0.9847  --      0.9637  0.9825
                     Reference [14]       2019  0.8132  0.9814  --      0.9661  0.9860
                     Reference [15]       2019  0.8186  0.9848  --      0.9743  0.9863
                     Reference [16]       2020  0.8135  0.9762  --      0.9617  0.9782
                     Reference [35]       2020  0.8477  0.9825  0.8652  0.9643  0.9448
                     Proposed method      2021  0.8520  0.9850  0.8201  0.9765  0.9911
Table 6. CHASE_DB1 dataset fundus blood vessel segmentation results