Author Affiliations
College of Big Data and Information Engineering, Guizhou University, Guiyang, Guizhou 550025, China
Fig. 1. Original healthy retinal fundus image and image after edge detection. (a) Original healthy retinal fundus image; (b) image after edge detection
Fig. 2. Original image, and components of B, G, and R channels. (a) Original image; (b) B channel component; (c) G channel component; (d) R channel component
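The channel separation shown in Fig. 2 can be sketched with NumPy. This is a minimal illustration, not the paper's code: it assumes OpenCV-style B, G, R channel ordering and uses a tiny random array as a stand-in for a fundus image.

```python
import numpy as np

# Hypothetical 4x4 stand-in for a fundus image, stored in OpenCV's
# B, G, R channel order (channel 0 = blue, 1 = green, 2 = red).
img = np.random.default_rng(0).integers(0, 256, size=(4, 4, 3), dtype=np.uint8)

# Split the image into its three single-channel components.
b = img[:, :, 0]  # B channel component
g = img[:, :, 1]  # G channel component
r = img[:, :, 2]  # R channel component
```

In fundus photography the G channel is often examined separately because it usually shows the strongest contrast between vessels and background.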
Fig. 3. Original neural network structure, and network structure with Dropout. (a) Original neural network structure; (b) network structure with Dropout
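The Dropout mechanism illustrated in Fig. 3 can be sketched as follows. This is a generic "inverted dropout" implementation, shown only to illustrate the idea; the paper's actual layer configuration is not specified here.

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Inverted dropout: during training, zero each activation with
    probability p and rescale survivors by 1/(1-p) so the expected
    activation is unchanged; at inference, pass x through untouched."""
    if not training or p == 0.0:
        return x
    rng = rng if rng is not None else np.random.default_rng()
    mask = rng.random(x.shape) >= p  # True = neuron kept
    return x * mask / (1.0 - p)

x = np.ones(1000)
y = dropout(x, p=0.5, rng=np.random.default_rng(0))
```

Randomly silencing neurons this way prevents co-adaptation of features and acts as a regularizer against overfitting, which is why Fig. 3(b) shows a sparser network than Fig. 3(a).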
Fig. 4. Transformed dataset image
Fig. 5. Residual module
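The residual module of Fig. 5 computes output = F(x) + x, adding the input back through an identity shortcut. A minimal shape-preserving sketch (the residual mapping `f` here is an arbitrary placeholder, not the paper's convolution stack):

```python
import numpy as np

def residual_block(x, f):
    """Residual module: return F(x) + x, where f implements the
    residual mapping F and must preserve the shape of x."""
    return f(x) + x

# When F collapses to zero, the block reduces to the identity map;
# this is what lets very deep stacks of residual blocks train without
# degrading, since each block only needs to learn a small correction.
x = np.array([1.0, 2.0, 3.0])
out = residual_block(x, lambda v: np.zeros_like(v))
```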
Fig. 6. Traditional Inception module
Fig. 7. 1×1 bottleneck structure
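The point of the 1×1 bottleneck in Fig. 7 is parameter reduction: a 1×1 convolution first narrows the channel dimension before the expensive 3×3 convolution, then a second 1×1 convolution restores it. The arithmetic below uses illustrative channel widths (256 and 64), not the paper's exact configuration; biases are omitted.

```python
# Plain 3x3 convolution mapping 256 channels to 256 channels.
direct = 3 * 3 * 256 * 256

# Bottleneck: 1x1 reduce (256->64), 3x3 on the narrow tensor (64->64),
# 1x1 expand (64->256).
bottleneck = (1 * 1 * 256 * 64
              + 3 * 3 * 64 * 64
              + 1 * 1 * 64 * 256)

print(direct, bottleneck, direct / bottleneck)  # roughly 8.5x fewer weights
```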
Fig. 8. Optimized Inception module
Fig. 9. Inception module with ResNet
Fig. 10. Sigmoid function and ReLU function. (a) Sigmoid function; (b) ReLU function
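The two activation functions compared in Fig. 10 can be written directly. The sketch below also includes the Sigmoid derivative to show why it is prone to vanishing gradients, while ReLU keeps a constant gradient of 1 for all positive inputs:

```python
import math

def sigmoid(x):
    """Sigmoid: squashes any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    """Derivative of sigmoid; approaches 0 as |x| grows (saturation)."""
    s = sigmoid(x)
    return s * (1.0 - s)

def relu(x):
    """ReLU: identity for positive inputs, zero otherwise."""
    return max(0.0, x)
```

For a large input such as x = 10, `sigmoid_grad` is already below 1e-4, so gradients passing through saturated Sigmoid units all but vanish; this is a standard motivation for using ReLU in deep networks.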
Fig. 11. Loss and average accuracy curves of training with DetectionNet model. (a) Average accuracy; (b) loss
| Grade | Degree of illness | Number of data images | Proportion of dataset /% |
| --- | --- | --- | --- |
| 0 | Healthy | 25810 | 73.48 |
| 1 | Mild | 2443 | 6.95 |
| 2 | Moderate | 5292 | 15.07 |
| 3 | Severe | 873 | 2.49 |
| 4 | Proliferative | 708 | 2.02 |

Table 1. Classification of fundus images of diabetic retinopathy
| Lesion grade | Recognized as 0 | Recognized as 1 | Recognized as 2 | Recognized as 3 | Recognized as 4 | Accuracy /% |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | 54 | 0 | 1 | 0 | 5 | 90.00 |
| 1 | 2 | 57 | 1 | 0 | 0 | 95.00 |
| 2 | 3 | 1 | 53 | 2 | 1 | 88.33 |
| 3 | 0 | 5 | 3 | 52 | 0 | 86.67 |
| 4 | 0 | 0 | 1 | 0 | 59 | 98.33 |

Table 2. Recognition results of retinal fundus images of five lesion grades
| Network model | Space complexity /MB | Accuracy |
| --- | --- | --- |
| LeNet | 0.72 | 0.42 |
| AlexNet | 60.00 | 0.62 |
| CompactNet | 14.16 | 0.69 |
| DetectionNet | 6.60 | 0.91 |

Table 3. Comparison of accuracy of different network models