Author Affiliations
College of Big Data and Information Engineering, Guizhou University, Guiyang, Guizhou 550025, China
Fig. 1. Residual block
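The residual block of Fig. 1 computes y = σ(F(x) + x): an identity skip connection added to the block's learned transform F. A minimal NumPy sketch, with plain linear maps standing in for the block's convolution layers (the names `residual_block`, `w1`, `w2` are illustrative, not from the paper):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = relu(F(x) + x): two weighted transforms with ReLU, plus identity skip."""
    f = relu(x @ w1)    # first "conv" + activation (linear-map stand-in)
    f = f @ w2          # second "conv"
    return relu(f + x)  # skip connection, then activation

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w1, w2)
print(y.shape)  # (4, 8)
```

With all weights zero the block reduces to relu(x), which is the identity-mapping property that lets gradients bypass F during training.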
Fig. 2. Original U-Net encoder-decoder module
Fig. 3. U-Net encoder-decoder module with residual blocks
Fig. 4. Schematic of attention module
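Given the AttResU-Net naming, the attention module of Fig. 4 presumably follows the additive attention-gate design of Attention U-Net: the skip-connection features x and a coarser gating signal g are linearly projected, summed, passed through ReLU, and squashed by a sigmoid into coefficients that rescale x. A minimal NumPy sketch under that assumption (all weight names are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, wx, wg, psi):
    """Additive attention gate: rescale skip features x by coefficients
    computed jointly from x and the gating signal g."""
    q = np.maximum(x @ wx + g @ wg, 0.0)  # ReLU of summed projections
    alpha = sigmoid(q @ psi)              # per-position coefficients in (0, 1)
    return x * alpha                      # suppress irrelevant skip features

rng = np.random.default_rng(1)
x = rng.standard_normal((16, 8))   # skip-connection features (16 positions)
g = rng.standard_normal((16, 8))   # gating signal from the decoder path
wx = rng.standard_normal((8, 4))
wg = rng.standard_normal((8, 4))
psi = rng.standard_normal((4, 1))
out = attention_gate(x, g, wx, wg, psi)
print(out.shape)  # (16, 8)
```

Because every coefficient lies in (0, 1), the gate can only attenuate skip features, never amplify them; this is how background responses get suppressed before concatenation.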
Fig. 5. AttResU-Net and Mini-AttResU-Net network structure diagrams
Fig. 6. Flow chart of retinal vessel segmentation
Fig. 7. Fundus image channel comparison maps. (a) RGB original image; (b) red channel; (c) green channel; (d) blue channel
Fig. 8. Image preprocessing. (a) Original color image; (b) original image after extracting the green channel; (c) image after CLAHE operation; (d) after rotating 90°; (e) after rotating 180°; (f) after rotating 270°; (g) after horizontal flip; (h) after vertical flip
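The Fig. 8 pipeline (green-channel extraction, CLAHE, then rotation/flip augmentation) can be sketched in NumPy as below. This is illustrative only: the CLAHE step is replaced by a global histogram-equalization stand-in (a real pipeline would use e.g. OpenCV's `cv2.createCLAHE`), and image shapes are arbitrary.

```python
import numpy as np

def preprocess_and_augment(rgb):
    """Green-channel extraction, contrast enhancement, rotation/flip augmentation.

    CLAHE stand-in: global histogram equalization via the cumulative histogram.
    """
    green = rgb[:, :, 1]                      # (b) green channel: best vessel contrast
    hist = np.bincount(green.ravel(), minlength=256)
    cdf = hist.cumsum() / green.size          # cumulative distribution of intensities
    eq = (cdf[green] * 255).astype(np.uint8)  # (c) equalized image
    return [
        np.rot90(eq, 1),  # (d) rotate 90 deg
        np.rot90(eq, 2),  # (e) rotate 180 deg
        np.rot90(eq, 3),  # (f) rotate 270 deg
        np.fliplr(eq),    # (g) horizontal flip
        np.flipud(eq),    # (h) vertical flip
    ]

img = np.random.default_rng(2).integers(0, 256, (64, 48, 3), dtype=np.uint8)
augmented = preprocess_and_augment(img)
print([a.shape for a in augmented])  # [(48, 64), (64, 48), (48, 64), (64, 48), (64, 48)]
```

Each training image thus yields five extra geometric variants, a common remedy for the small size of the DRIVE and STARE datasets.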
Fig. 9. Segmentation results of the proposed method on the DRIVE database. (a) Original images; (b) ground-truth images; (c) segmented images
Fig. 10. Segmentation results of the proposed method on the STARE database. (a) Original images; (b) ground-truth images; (c) segmented images
Fig. 11. Local segmentation maps. (a) Original fundus images; (b) local color fundus patches; (c) ground-truth local segmentation maps; (d) our local segmentation maps
Fig. 12. Segmentation results of different methods on the DRIVE database
Fig. 13. Segmentation results of different methods on the STARE database
Method | Precision | Recall | F1-Score | Accuracy
---|---|---|---|---
M1 | 0.8475 | 0.8270 | 0.8373 | 0.9690
M2 | 0.8443 | 0.8341 | 0.8392 | 0.9731
M3 | 0.8524 | 0.8469 | 0.8497 | 0.9743
M4 | 0.8563 | 0.8639 | 0.8609 | 0.9787

Table 1. Performance comparison of different U-Net-based segmentation methods
Method | Year | Precision | Recall | F1-Score | Accuracy
---|---|---|---|---|---
U-Net[25] | 2018 | 0.8852 | 0.7537 | 0.8142 | 0.9531
Residual U-Net[25] | 2018 | 0.8614 | 0.7726 | 0.8149 | 0.9553
Recurrent U-Net[25] | 2018 | 0.8603 | 0.7751 | 0.8155 | 0.9556
R2 U-Net[25] | 2018 | 0.8589 | 0.7792 | 0.8171 | 0.9556
Conditional GAN[20] | 2018 | 0.8143 | 0.8274 | 0.8208 | 0.9608
LadderNet[21] | 2018 | 0.8593 | 0.7856 | 0.8208 | 0.9561
DUNet[22] | 2019 | 0.8529 | 0.7963 | 0.8237 | 0.9566
Dynamic Deep Networks[19] | 2019 | 0.8284 | 0.8235 | 0.8259 | 0.9693
Ours | 2020 | 0.8331 | 0.8369 | 0.8351 | 0.9698

Table 2. Performance indicators of different methods on the DRIVE database
Method | Year | Precision | Recall | F1-Score | Accuracy
---|---|---|---|---|---
U-Net[25] | 2018 | 0.8475 | 0.8270 | 0.8373 | 0.9690
Residual U-Net[25] | 2018 | 0.8581 | 0.8203 | 0.8388 | 0.9700
Recurrent U-Net[25] | 2018 | 0.8705 | 0.8108 | 0.8396 | 0.9706
R2 U-Net[25] | 2018 | 0.8659 | 0.8298 | 0.8475 | 0.9712
Conditional GAN[20] | 2018 | 0.8466 | 0.8538 | 0.8502 | 0.9771
DUNet[22] | 2019 | 0.8777 | 0.7595 | 0.8143 | 0.9641
Dynamic Deep Networks[19] | 2019 | 0.8559 | 0.8541 | 0.8549 | 0.9780
Ours | 2020 | 0.8563 | 0.8639 | 0.8609 | 0.9787

Table 3. Performance indicators of different methods on the STARE database
Method | Platform | Inference time on DRIVE /ms | Inference time on STARE /ms
---|---|---|---
U-Net[25] | NVIDIA GTX 1080Ti | 18 | 17
Residual U-Net[25] | NVIDIA GTX 1080Ti | 19 | 17
R2 U-Net[25] | NVIDIA GTX 1080Ti | 17 | 15
Ours | NVIDIA GTX 1080Ti | 16 | 14

Table 4. Comparison of inference times of different methods on the two databases