Laser & Optoelectronics Progress, Vol. 59, Issue 18, 1811004 (2022)
Dense Cell Recognition and Tracking Based on Mask R-CNN and DeepSort
Zhenhong Huang1,3, Xuejuan Hu1,3,*, Lingling Chen2,3, Liang Hu1,3, Lu Xu1,3, and Lijin Lian3
Author Affiliations
  • 1Sino-German College of Intelligent Manufacturing, Shenzhen Technology University, Shenzhen 518118, Guangdong, China
  • 2College of Health Science and Environment Engineering, Shenzhen Technology University, Shenzhen 518118, Guangdong, China
  • 3Key Laboratory of Advanced Optical Precision Manufacturing Technology of Guangdong Provincial Higher Education Institute, Shenzhen 518118, Guangdong, China
DOI: 10.3788/LOP202259.1811004
Zhenhong Huang, Xuejuan Hu, Lingling Chen, Liang Hu, Lu Xu, Lijin Lian. Dense Cell Recognition and Tracking Based on Mask R-CNN and DeepSort[J]. Laser & Optoelectronics Progress, 2022, 59(18): 1811004
    Fig. 1. Angularly multiplexed OPT system
    Fig. 2. Reconstructed cell images. (a) XZ direction; (b) YZ direction
    Fig. 3. Flow chart of zebrafish cell identification and tracking model
    Fig. 4. Structure diagram of zebrafish cell recognition and tracking model
    Fig. 5. Three-dimensional cell tracking framework
    Fig. 6. Three-dimensional tracking algorithm flow chart
    Fig. 7. Mask generated by Labelme. (a) Original map of cells; (b) mask map
Fig. 8. Data augmentation. (a) Original; (b) distortion; (c) flip; (d) added noise
    Fig. 9. Comparison of loss and mAP under different learning rates. (a) Training loss under different learning rates; (b) validation loss under different learning rates; (c) mAP under different learning rates
Fig. 10. mAP curves under different improved loss functions
Fig. 11. Training loss on the 606-image training set with a learning rate of 0.001
    Fig. 12. Local comparison of segmentation results. (a) Original cell image; (b) watershed segmentation based on gradient transformation; (c) morphological segmentation; (d) segmentation based on U-Net; (e) segmentation based on Mask R-CNN; (f) segmentation based on Mask R-CNN++
Fig. 13. Segmentation results on augmented data. (a) Distortion; (b) added noise; (c) flip
Fig. 14. Tracking results with DeepSort. (a) Subset of the cells in the 20th frame; (b) subset of the cells in the 30th frame
Fig. 15. Cell trajectories. (a) Three-dimensional trajectory map of cells; (b) change in cell position along the Z axis relative to the first frame
Parameter | Meaning | Value
STEPS_PER_EPOCH | Number of images per epoch in training | 100
VALIDATION_STEPS | Number of images per epoch in validation | 33
DATASET_TRAIN | Size of the training set | 606
LEARNING_RATE | Learning rate governing convergence of the objective function | 0.001-0.01
WEIGHT_DECAY | Weight decay to reduce model overfitting | 0.0001
IoU_THRESHOLDS | Intersection-over-union thresholds | 0.5/0.75
EPOCH | Number of training epochs | 80
Table 1. Parameter settings of the network
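The parameter names in Table 1 mirror the configuration attributes of the open-source Matterport Mask R-CNN implementation. The sketch below is a minimal, hedged illustration of how such settings could be expressed under that assumption; it is not the authors' released code, and CellConfig, NAME, and NUM_CLASSES are placeholders introduced here for the example.

```python
# Minimal sketch of the Table 1 settings as a Matterport-style Mask R-CNN
# configuration (an assumption based on the matching parameter names,
# not the authors' released code).
from mrcnn.config import Config

class CellConfig(Config):
    NAME = "zebrafish_cell"         # hypothetical experiment name
    NUM_CLASSES = 1 + 1             # background + cell (assumed)
    STEPS_PER_EPOCH = 100           # images per training epoch (Table 1)
    VALIDATION_STEPS = 33           # images per validation epoch (Table 1)
    LEARNING_RATE = 0.001           # Table 1 gives 0.001-0.01; Fig. 11 uses 0.001
    WEIGHT_DECAY = 0.0001           # L2 regularization against overfitting (Table 1)

# The 80 training epochs and the 0.5/0.75 IoU thresholds in Table 1 are not
# Config attributes: epochs are passed to model.train(), and the IoU thresholds
# apply when computing mAP during evaluation.
config = CellConfig()
config.display()
```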
Learning rate | Training time /s | Frequency /(frame·s⁻¹) | mAP11 | mAP21 | mAP60
0.01 | 26295 | 0.304 | 0.8926 | 0.6681 | 0.9586
0.001 | 26134 | 0.306 | 0.9092 | 0.9205 | 0.9532
0.0001 | 26459 | 0.302 | 0.6727 | 0.8138 | 0.8811
Table 2. Time efficiency and selected mAP values of the model under different learning rates
Algorithm | XZ plane Precision /% | XZ plane Recall /% | YZ plane Precision /% | YZ plane Recall /%
Morphology [8] | 71.64 | 56.81 | 84.89 | 71.87
Watershed [26-27] | 85.28 | 82.73 | 85.24 | 73.31
U-Net [15] | 97.76 | 68.29 | 97.61 | 67.20
Mask R-CNN [16] | 96.25 | 95.90 | 94.08 | 96.30
Mask R-CNN++ | 98.99 | 96.74 | 97.86 | 98.07
Table 3. Precision and recall of cell segmentation by different algorithms
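As a point of reference for how figures like those in Table 3 can be obtained, the sketch below computes pixel-wise precision and recall between a predicted and a ground-truth binary mask. Whether the paper scores masks pixel-wise or per detected cell is not stated in this excerpt, so mask_precision_recall is an illustrative assumption rather than the authors' evaluation code.

```python
import numpy as np

def mask_precision_recall(pred, gt):
    """Pixel-wise precision and recall between two binary masks.
    Assumes pred and gt are arrays of the same shape; the paper may
    instead score per detected cell (an assumption made here)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()   # correctly predicted cell pixels
    fp = np.logical_and(pred, ~gt).sum()  # predicted cell pixels with no ground truth
    fn = np.logical_and(~pred, gt).sum()  # missed ground-truth cell pixels
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Example: a perfect prediction yields precision = recall = 1.0
gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1:3] = True
print(mask_precision_recall(gt.copy(), gt))  # (1.0, 1.0)
```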