• Chinese Journal of Lasers
  • Vol. 51, Issue 21, 2107102 (2024)
Tong Wu1,2, Haoji Hu1,2,*, Yang Feng3, Qiong Luo4, Dong Xu5,6, Weizeng Zheng7, Neng Jin4, Chen Yang5,6 and Jincao Yao5,6
Author Affiliations
  • 1University of Illinois Urbana-Champaign Institute, Zhejiang University, Haining 314400, Zhejiang, China
  • 2College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, Zhejiang, China
  • 3Angelalign Research Institute, Angelalign Technology Inc., Shanghai 200433, China
  • 4Department of Obstetrics, Women's Hospital, Zhejiang University School of Medicine, Hangzhou 310006, Zhejiang, China
  • 5Zhejiang Cancer Hospital, Hangzhou 310022, Zhejiang, China
  • 6Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou 310000, Zhejiang, China
  • 7Department of Radiology, Women's Hospital, Zhejiang University School of Medicine, Hangzhou 310006, Zhejiang, China
    DOI: 10.3788/CJL240614
    Tong Wu, Haoji Hu, Yang Feng, Qiong Luo, Dong Xu, Weizeng Zheng, Neng Jin, Chen Yang, Jincao Yao. Application of Segment Anything Model in Medical Image Segmentation[J]. Chinese Journal of Lasers, 2024, 51(21): 2107102
    References

    [1] Litjens G, Kooi T, Bejnordi B E et al. A survey on deep learning in medical image analysis[J]. Medical Image Analysis, 42, 60-88(2017).

    [2] Ma J, Zhang Y, Gu S et al. AbdomenCT-1K: is abdominal organ segmentation a solved problem?[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44, 6695-6714(2022).

    [3] Zhang Y C, Jiao R S. Towards segment anything model (SAM) for medical image segmentation: a survey[EB/OL]. http://arxiv.org/abs/2305.03678v3

    [4] He S, Bao R N, Li J P et al. Computer-vision benchmark segment-anything model (SAM) in medical images: accuracy in 12 datasets[EB/OL]. http://arxiv.org/abs/2304.09324v3

    [5] Wang X, Chen G Y, Qian G W et al. Large-scale multi-modal pre-trained models: a comprehensive survey[J]. Machine Intelligence Research, 20, 447-482(2023).

    [6] Zhou C, Li Q, Li C et al. A comprehensive survey on pretrained foundation models: a history from BERT to ChatGPT[EB/OL]. http://arxiv.org/abs/2302.09419v3

    [7] Brown T B, Mann B, Ryder N et al. Language models are few-shot learners[C], 1877-1901(2020).

    [8] Stefanini M, Cornia M, Baraldi L et al. From show to tell: a survey on deep learning-based image captioning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45, 539-559(2023).

    [9] Ramesh A, Pavlov M, Goh G et al. Zero-shot text-to-image generation[C], 8821-8831(2021).

    [10] Jia C, Yang Y F, Xia Y et al. Scaling up visual and vision-language representation learning with noisy text supervision[EB/OL]. http://arxiv.org/abs/2102.05918v2

    [11] Kirillov A, Mintun E, Ravi N et al. Segment anything[EB/OL]. http://arxiv.org/abs/2304.02643v1

    [12] Dosovitskiy A, Beyer L, Kolesnikov A et al. An image is worth 16×16 words: transformers for image recognition at scale[EB/OL]. https://arxiv.org/abs/2010.11929

    [13] Wang D, Zhang J, Du B et al. SAMRS: scaling-up remote sensing segmentation dataset with segment anything model[C](2023).

    [14] Osco L P, Wu Q S, de Lemos E L et al. The segment anything model (SAM) for remote sensing applications: from zero to one shot[J]. International Journal of Applied Earth Observation and Geoinformation, 124, 103540(2023).

    [15] Ali H, Bulbul M F, Shah Z. Prompt engineering in medical image segmentation: an overview of the paradigm shift[C], 16-17(2023).

    [16] Huang Y H, Yang X, Liu L et al. Segment anything model for medical images?[J]. Medical Image Analysis, 92, 103061(2024).

    [17] Tang L, Xiao H K, Li B. Can SAM segment anything? When SAM meets camouflaged object detection[EB/OL]. http://arxiv.org/abs/2304.04709v2

    [18] Chen F, Chen L Y, Han H J et al. The ability of segmenting anything model (SAM) to segment ultrasound images[J]. Bioscience Trends, 17, 211-218(2023).

    [19] Xiao W X, Li H F, Zhang Y F et al. Medical image fusion based on multi-scale feature learning and edge enhancement[J]. Laser & Optoelectronics Progress, 59, 0617029(2022).

    [20] Ma J, He Y T, Li F F et al. Segment anything in medical images[J]. Nature Communications, 15, 654(2024).

    [21] Hu M Z, Li Y H, Yang X F. SkinSAM: empowering skin cancer segmentation with segment anything model[EB/OL]. http://arxiv.org/abs/2304.13973v1

    [22] Liu Y H, Zhang J M, She Z C et al. SAMM (segment any medical model): a 3D Slicer integration to SAM[EB/OL]. http://arxiv.org/abs/2304.05622v4

    [23] Gao Y F, Xia W, Hu D D et al. DeSAM: decoupling segment anything model for generalizable medical image segmentation[EB/OL]. http://arxiv.org/abs/2306.00499v1

    [24] Zhang L Y, Deng X K, Lu Y. Segment anything model (SAM) for medical image segmentation: a preliminary review[C], 4187-4194(2023).

    [25] Zhang Y C, Shen Z R, Jiao R S. Segment anything model for medical image segmentation: current applications and future directions[EB/OL]. http://arxiv.org/abs/2401.03495v1

    [26] Radford A, Kim J W, Hallacy C et al. Learning transferable visual models from natural language supervision[EB/OL]. http://arxiv.org/abs/2103.00020v1

    [27] Deng R N, Cui C, Liu Q et al. Segment anything model (SAM) for digital pathology: assess zero-shot segmentation on whole slide imaging[EB/OL]. http://arxiv.org/abs/2304.04155v1

    [28] The Cancer Genome Atlas Research Network. Comprehensive genomic characterization defines human glioblastoma genes and core pathways[J]. Nature, 455, 1061-1068(2008).

    [29] Barisoni L, Nast C C, Jennette J C et al. Digital pathology evaluation in the multicenter Nephrotic Syndrome Study Network (NEPTUNE)[J]. Clinical Journal of the American Society of Nephrology: CJASN, 8, 1449-1459(2013).

    [30] Kumar N, Verma R, Sharma S et al. A dataset and a technique for generalized nuclear segmentation for computational pathology[J]. IEEE Transactions on Medical Imaging, 36, 1550-1560(2017).

    [31] Hu C F, Xia T Y, Ju S H et al. When SAM meets medical images: an investigation of segment anything model (SAM) on multi-phase liver tumor segmentation[EB/OL]. http://arxiv.org/abs/2304.08506v6

    [32] Zhang L, Liu Z L, Zhang L et al. Segment anything model (SAM) for radiation oncology[EB/OL]. http://arxiv.org/abs/2306.11730v2

    [33] Roy S, Wald T, Koehler G et al. SAM.MD: zero-shot medical image segmentation capabilities of the segment anything model[EB/OL]. http://arxiv.org/abs/2304.05396v1

    [34] Ji Y F, Bai H T, Yang J et al. AMOS: a large-scale abdominal multi-organ benchmark for versatile medical image segmentation[EB/OL]. https://arxiv.org/abs/2206.08023

    [35] Putz F, Grigo J, Weissmann T et al. The segment anything foundation model achieves favorable brain tumor autosegmentation accuracy on MRI to support radiotherapy treatment planning[EB/OL]. http://arxiv.org/abs/2304.07875v1

    [36] Menze B H, Jakab A, Bauer S et al. The multimodal brain tumor image segmentation benchmark (BRATS)[J]. IEEE Transactions on Medical Imaging, 34, 1993-2024(2015).

    [37] Zhang P, Wang Y P. Segment anything model for brain tumor segmentation[EB/OL]. https://arxiv.org/abs/2309.08434

    [38] Mohapatra S, Gosai A, Schlaug G. SAM vs BET: a comparative study for brain extraction and segmentation of magnetic resonance images using deep learning[EB/OL]. http://arxiv.org/abs/2304.04738v3

    [39] Liew S L, Anglin J M, Banks N W et al. A large, open source dataset of stroke anatomical brain images and manual lesion segmentations[J]. Scientific Data, 5, 180011(2018).

    [40] Kuijf H J, Biesbroek J M, de Bresser J et al. Standardized assessment of automatic segmentation of white matter hyperintensities and results of the WMH segmentation challenge[J]. IEEE Transactions on Medical Imaging, 38, 2556-2568(2019).

    [41] Hu M Z, Li Y H, Yang X F. BreastSAM: a study of segment anything model for breast tumor detection in ultrasound images[EB/OL]. http://arxiv.org/abs/2305.12447v1

    [42] Al-Dhabyani W, Gomaa M, Khaled H et al. Dataset of breast ultrasound images[J]. Data in Brief, 28, 104863(2020).

    [43] Zhou T, Zhang Y Z, Zhou Y et al. Can SAM segment polyps?[EB/OL]. http://arxiv.org/abs/2304.07583v1

    [44] Jha D, Smedsrud P H, Riegler M A et al. Kvasir-SEG: a segmented polyp dataset[M]. Multimedia modeling, 11962, 451-462(2020).

    [45] Bernal J, Sánchez F J, Fernández-Esparrach G et al. WM-DOVA maps for accurate polyp highlighting in colonoscopy: validation vs. saliency maps from physicians[J]. Computerized Medical Imaging and Graphics, 43, 99-111(2015).

    [46] Tajbakhsh N, Gurudu S R, Liang J M. Automated polyp detection in colonoscopy videos using shape and context information[J]. IEEE Transactions on Medical Imaging, 35, 630-644(2016).

    [47] Silva J, Histace A, Romain O et al. Toward embedded detection of polyps in WCE images for early diagnosis of colorectal cancer[J]. International Journal of Computer Assisted Radiology and Surgery, 9, 283-293(2014).

    [48] Vázquez D, Bernal J, Sánchez F J et al. A benchmark for endoluminal scene segmentation of colonoscopy images[J]. Journal of Healthcare Engineering, 2017, 4037190(2017).

    [49] Wang A, Islam M, Xu M Y et al. SAM meets robotic surgery: an empirical study on generalization, robustness and adaptation[M]. Medical image computing and computer assisted intervention-MICCAI 2023 workshops, 14393, 234-244(2023).

    [50] Allan M, Shvets A, Kurmann T et al. 2017 robotic instrument segmentation challenge[EB/OL]. http://arxiv.org/abs/1902.06426v2

    [51] Allan M, Kondo S, Bodenstedt S et al. 2018 robotic scene segmentation challenge[EB/OL]. http://arxiv.org/abs/2001.11190v3

    [52] Choi W, Dahiya N, Nadeem S. CIRDataset: a large-scale dataset for clinically-interpretable lung nodule radiomics and malignancy prediction[M]. Medical image computing and computer-assisted intervention-MICCAI 2022, 13435, 13-22(2022).

    [53] Attiyeh M A, Chakraborty J, Doussot A et al. Survival prediction in pancreatic ductal adenocarcinoma by quantitative computed tomography image analysis[J]. Annals of Surgical Oncology, 25, 1034-1042(2018).

    [54] Antonelli M, Reinke A, Bakas S et al. The medical segmentation decathlon[J]. Nature Communications, 13, 4128(2022).

    [55] Bilic P, Christ P, Li H B et al. The liver tumor segmentation benchmark (LiTS)[J]. Medical Image Analysis, 84, 102680(2023).

    [56] Bernard O, Lalande A, Zotti C et al. Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: is the problem solved?[J]. IEEE Transactions on Medical Imaging, 37, 2514-2525(2018).

    [57] Liu Q D, Dou Q, Yu L Q et al. MS-net: multi-site network for improving prostate segmentation with heterogeneous MRI data[J]. IEEE Transactions on Medical Imaging, 39, 2713-2724(2020).

    [58] Xiong Z H, Xia Q, Hu Z Q et al. A global benchmark of algorithms for segmenting the left atrium from late gadolinium-enhanced cardiac magnetic resonance imaging[J]. Medical Image Analysis, 67, 101832(2021).

    [59] Bakas S, Akbari H, Sotiras A et al. Advancing the cancer genome atlas glioma MRI collections with expert segmentation labels and radiomic features[J]. Scientific Data, 4, 170117(2017).

    [60] Bakas S, Reyes M, Jakab A et al. Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge[EB/OL]. http://arxiv.org/abs/1811.02629v3

    [61] Codella N, Rotemberg V, Tschandl P et al. Skin lesion analysis toward melanoma detection 2018: a challenge hosted by the international skin imaging collaboration (ISIC)[EB/OL]. http://arxiv.org/abs/1902.03368v2

    [62] Tschandl P, Rosendahl C, Kittler H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions[J]. Scientific Data, 5, 180161(2018).

    [63] Jaeger S, Candemir S, Antani S et al. Two public chest X-ray datasets for computer-aided screening of pulmonary diseases[J]. Quantitative Imaging in Medicine and Surgery, 4, 475-477(2014).

    [64] Cheng D J, Qin Z Y, Jiang Z K et al. SAM on medical images: a comprehensive study on three prompt modes[EB/OL]. http://arxiv.org/abs/2305.00035v1

    [65] Boccardi M, Bocchetta M, Morency F C et al. Training labels for hippocampal segmentation based on the EADC-ADNI harmonized hippocampal protocol[J]. Alzheimer’s & Dementia, 11, 175-183(2015).

    [66] Gong H F, Chen J X, Chen G Q et al. Thyroid region prior guided attention for ultrasound segmentation of thyroid nodules[J]. Computers in Biology and Medicine, 155, 106389(2023).

    [67] Wang C B, Mahbod A, Ellinger I et al. FUSeg: the foot ulcer segmentation challenge[EB/OL]. http://arxiv.org/abs/2201.00414v1

    [68] Hu J J, Chen Y Y, Yi Z. Automated segmentation of macular edema in OCT using deep neural networks[J]. Medical Image Analysis, 55, 216-227(2019).

    [69] Tahir A M, Chowdhury M E H, Khandakar A et al. COVID-19 infection localization and severity grading from chest X-ray images[J]. Computers in Biology and Medicine, 139, 105002(2021).

    [70] Zhou Z, Zhang S J, Zhang X Y. Improved U-type neural network method for medical nuclear image segmentation[J]. Journal of Chinese Computer Systems, 44, 110-116(2023).

    [71] Li Y H, Hu M Z, Yang X F. Polyp-SAM: transfer SAM for polyp segmentation[EB/OL]. http://arxiv.org/abs/2305.00293v1

    [72] Wu J D, Ji W, Liu Y P et al. Medical SAM adapter: adapting segment anything model for medical image segmentation[EB/OL]. http://arxiv.org/abs/2304.12620v7

    [73] Chai S R, Jain R K, Teng S Y et al. Ladder fine-tuning approach for SAM integrating complementary network[EB/OL]. http://arxiv.org/abs/2306.12737v1

    [74] Zhang J W, Ma K, Kapse S et al. SAM-Path: a segment anything model for semantic segmentation in digital pathology[M]. Medical image computing and computer-assisted intervention- MICCAI 2023 workshops, 14393, 161-170(2023).

    [75] Amgad M, Elfandy H, Hussein H et al. Structured crowdsourcing enables convolutional segmentation of histology images[J]. Bioinformatics, 35, 3461-3467(2019).

    [76] Graham S, Chen H, Gamper J et al. MILD-Net: minimal information loss dilated network for gland instance segmentation in colon histology images[J]. Medical Image Analysis, 52, 199-211(2019).

    [77] Ronneberger O, Fischer P, Brox T. U-net: convolutional networks for biomedical image segmentation[M]. Medical image computing and computer-assisted intervention-MICCAI 2015, 9351, 234-241(2015).

    [78] Paranjape J N, Nair N G, Sikder S et al. AdaptiveSAM: towards efficient tuning of SAM for surgical scene segmentation[EB/OL]. http://arxiv.org/abs/2308.03726v1

    [79] Feng W J, Zhu L T, Yu L Q. Cheap lunch for medical image segmentation by fine-tuning SAM on few exemplars[EB/OL]. http://arxiv.org/abs/2308.14133v1

    [80] En Q, Guo Y H. Exemplar learning for medical image segmentation[EB/OL]. http://arxiv.org/abs/2204.01713v2

    [81] Hu J E, Shen Y L, Wallis P et al. LoRA: low-rank adaptation of large language models[EB/OL]. https://arxiv.org/abs/2106.09685

    [82] Cheng J L, Ye J, Deng Z Y et al. SAM-Med2D[EB/OL]. http://arxiv.org/abs/2308.16184v1

    [83] Zhang K D, Liu D. Customized segment anything model for medical image segmentation[EB/OL]. http://arxiv.org/abs/2304.13785v2

    [84] Wang Y N, Chen K, Yuan W M et al. SAMIHS: adaptation of segment anything model for intracranial hemorrhage segmentation[EB/OL]. http://arxiv.org/abs/2311.08190v1

    [85] Wei X B, Cao J J, Jin Y Z et al. I-MedSAM: implicit medical image segmentation with segment anything[EB/OL]. http://arxiv.org/abs/2311.17081v1

    [86] Gong S Z, Zhong Y, Ma W A et al. 3DSAM-adapter: holistic adaptation of SAM from 2D to 3D for promptable medical image segmentation[EB/OL]. http://arxiv.org/abs/2306.13465v1

    [87] Isensee F, Jaeger P F, Kohl S A A et al. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation[J]. Nature Methods, 18, 203-211(2021).

    [88] Chen C, Miao J Z, Wu D F et al. MA-SAM: modality-agnostic SAM adaptation for 3D medical image segmentation[EB/OL]. http://arxiv.org/abs/2309.08842v1

    [89] Li C Y, Khanduri P, Qiang Y et al. Auto-prompting SAM for mobile friendly 3D medical image segmentation[EB/OL]. http://arxiv.org/abs/2308.14936v2

    [90] Hatamizadeh A, Nath V, Tang Y C et al. Swin UNETR: swin transformers for semantic segmentation of brain tumors in MRI images[M]. Brainlesion: glioma, multiple sclerosis, stroke and traumatic brain injuries, 12962, 272-284(2022).

    [91] Li H, Liu H, Hu D W et al. Promise: prompt-driven 3D medical image segmentation using pretrained image foundation models[EB/OL]. http://arxiv.org/abs/2310.19721v3

    [92] Bui N T, Hoang D H, Tran M T et al. SAM3D: segment anything model in volumetric medical images[EB/OL]. http://arxiv.org/abs/2309.03493v4

    [93] Quan Q, Tang F H, Xu Z K et al. Slide-SAM: medical SAM meets sliding window[EB/OL]. https://arxiv.org/abs/2311.10121

    [94] Landman B, Xu Z B, Igelsias J et al. Multi-atlas labeling beyond the cranial vault: workshop and challenge[EB/OL]. https://www.synapse.org/Synapse:syn3193805/wiki/

    [95] Kavur A E, Gezer N S, Barış M et al. CHAOS Challenge - combined (CT-MR) healthy abdominal organ segmentation[J]. Medical Image Analysis, 69, 101950(2021).

    [96] Wang H Y, Guo S Z, Ye J et al. SAM-Med3D[EB/OL]. http://arxiv.org/abs/2310.15161v2

    [97] Du Y X, Bai F, Huang T J et al. SegVol: universal and interactive volumetric medical image segmentation[EB/OL]. http://arxiv.org/abs/2311.13385v3

    [98] Lei W H, Wei X, Zhang X F et al. MedLSAM: localize and segment anything model for 3D CT images[EB/OL]. http://arxiv.org/abs/2306.14752v3

    [99] Shaharabany T, Dahan A, Giryes R et al. AutoSAM: adapting SAM to medical images by overloading the prompt encoder[EB/OL]. http://arxiv.org/abs/2306.06370v1

    [100] Na S Y, Guo Y Z, Jiang F et al. Segment any cell: a SAM-based auto-prompting fine-tuning framework for nuclei segmentation[EB/OL]. http://arxiv.org/abs/2401.13220v1

    [101] Pandey S, Chen K F, Dam E B. Comprehensive multimodal segmentation in medical imaging: combining YOLOv8 with SAM and HQ-SAM models[C], 2592-2598(2023).

    [102] Jocher G, Chaurasia A, Qiu J. YOLO by Ultralytics[EB/OL]. https://github.com/ultralytics/ultralytics

    [103] Cui C, Deng R N, Liu Q et al. All-in-SAM: from weak annotation to pixel-wise nuclei segmentation with prompt-based finetuning[EB/OL]. http://arxiv.org/abs/2307.00290v2

    [104] Dai H X, Ma C, Yan Z L et al. SAMAug: point prompt augmentation for segment anything model[EB/OL]. http://arxiv.org/abs/2307.01187v4

    [105] Lin T Y, Maire M, Belongie S et al. Microsoft COCO: common objects in context[EB/OL]. http://arxiv.org/abs/1405.0312v3

    [106] Li H, Liu H, Hu D W et al. Assessing test-time variability for interactive 3D medical image segmentation with diverse point prompts[EB/OL]. http://arxiv.org/abs/2311.07806v1

    [107] Yue X, Zhao Q, Li J Q et al. Morphology-enhanced CAM-guided SAM for weakly supervised breast lesion segmentation[EB/OL]. http://arxiv.org/abs/2311.11176v1

    [108] Yue W X, Zhang J, Hu K et al. SurgicalSAM: efficient class promptable surgical instrument segmentation[EB/OL]. http://arxiv.org/abs/2308.08746v2

    [109] Gal Y, Ghahramani Z. Dropout as a Bayesian approximation: representing model uncertainty in deep learning[C], 1050-1059(2016).

    [110] Zou K, Yuan X D, Shen X J et al. TBraTS: trusted brain tumor segmentation[M]. Medical image computing and computer assisted intervention-MICCAI 2022, 13438, 503-513(2022).

    [111] Li H, Nan Y, Del Ser J et al. Region-based evidential deep learning to quantify uncertainty and improve robustness of brain tumor segmentation[J]. Neural Computing & Applications, 35, 22071-22085(2023).

    [112] Deng G Y, Zou K, Ren K et al. SAM-U: multi-box prompts triggered uncertainty estimation for reliable SAM in medical image[M]. Medical image computing and computer-assisted intervention- MICCAI 2023 workshops, 14394, 368-377(2023).

    [113] Xu Y S, Tang J Q, Men A D et al. EviPrompt: a training-free evidential prompt generation method for segment anything model in medical images[EB/OL]. http://arxiv.org/abs/2311.06400v1

    [114] Zhang Y C, Hu S Y, Jiang C et al. Segment anything model with uncertainty rectification for auto-prompting medical image segmentation[EB/OL]. https://arxiv.org/html/2311.10529v2

    [115] Zhang Y C, Cheng Y, Qi Y. SemiSAM: exploring SAM for enhancing semi-supervised medical image segmentation with extremely limited annotations[EB/OL]. http://arxiv.org/abs/2312.06316v1

    [116] Chen S Y, Lin L, Cheng P J et al. ASLseg: adapting SAM in the loop for semi-supervised liver tumor segmentation[EB/OL]. http://arxiv.org/abs/2312.07969v1

    [117] Wang C L, Li D X, Wang S C et al. SAMMed: a medical image annotation framework based on large vision model[EB/OL]. http://arxiv.org/abs/2307.05617v2

    [118] Zhang Y Z, Wang S, Zhou T et al. SQA-SAM: segmentation quality assessment for medical images utilizing the segment anything model[EB/OL]. http://arxiv.org/abs/2312.09899v1

    [119] Wang H H, Ye H Z, Xia Y et al. Leveraging SAM for single-source domain generalization in medical image segmentation[EB/OL]. http://arxiv.org/abs/2401.02076v1

    [120] Zhu W, Chen Y W, Nie S L et al. SAMMS: multi-modality deep learning with the foundation model for the prediction of cancer patient survival[C], 3662-3668(2023).

    [121] Yin H T, Yue Y Y. Medical image fusion based on semisupervised learning and generative adversarial network[J]. Laser & Optoelectronics Progress, 59, 2215005(2022).

    [122] Jiang H Y, Gao M D, Liu Z R et al. GlanceSeg: real-time microaneurysm lesion segmentation with gaze-map-guided foundation model for early detection of diabetic retinopathy[EB/OL]. http://arxiv.org/abs/2311.08075v1

    [123] Porwal P, Pachade S, Kokare M et al. IDRiD: diabetic retinopathy-segmentation and grading challenge[J]. Medical Image Analysis, 59, 101561(2020).
