
Mask-Enhanced Segment Anything Model for Tumor Lesion Semantic Segmentation

  • Hairong Shi
  • Songhao Han
  • Shaofei Huang*
  • Yue Liao
  • Guanbin Li*
  • Xiangxing Kong
  • Hua Zhu
  • Xiaomu Wang
  • Si Liu

*Corresponding author for this work

Affiliations
  • Beihang University
  • CAS - Institute of Information Engineering
  • Chinese University of Hong Kong
  • Sun Yat-Sen University
  • Peking University
  • Nanjing University

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Tumor lesion segmentation on CT or MRI images plays a critical role in cancer diagnosis and treatment planning. Given the inherent differences in tumor lesion segmentation data across medical imaging modalities and equipment, integrating medical knowledge into the Segment Anything Model (SAM) is a promising direction owing to SAM's versatility and generalization potential. Recent studies have attempted to enhance SAM with medical expertise by pre-training on large-scale medical segmentation datasets. However, challenges remain in 3D tumor lesion segmentation owing to tumor complexity and the imbalance between foreground and background regions. We therefore introduce Mask-Enhanced SAM (M-SAM), an architecture tailored for 3D tumor lesion segmentation. We propose a novel Mask-Enhanced Adapter (MEA) within M-SAM that enriches the semantic information of medical images with positional data from coarse segmentation masks, facilitating the generation of more precise segmentation masks. Furthermore, an iterative refinement scheme is implemented in M-SAM to refine the segmentation masks progressively, leading to improved performance. Extensive experiments on seven tumor lesion segmentation datasets indicate that our M-SAM not only achieves high segmentation accuracy but also exhibits robust generalization. The code is available at https://github.com/nanase1025/M-SAM.
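The abstract's iterative refinement idea can be sketched as follows. This is a minimal, illustrative NumPy sketch, not the authors' implementation: the functions `refine_step` and `iterative_refinement`, the blending weight `alpha`, and the thresholding rule are all hypothetical stand-ins for M-SAM's Mask-Enhanced Adapter and decoder.

```python
import numpy as np

def refine_step(features, coarse_mask, alpha=0.5):
    """One hypothetical refinement step: inject positional information
    from the coarse mask into the semantic features, then re-threshold.
    (Illustrative only; M-SAM uses a learned Mask-Enhanced Adapter.)"""
    enhanced = features * (1.0 + alpha * coarse_mask)  # mask-enhanced features
    return (enhanced > enhanced.mean()).astype(np.float32)  # new binary mask

def iterative_refinement(features, init_mask, steps=3):
    """Progressively refine the segmentation mask, feeding each
    intermediate mask back into the next refinement step."""
    mask = init_mask
    for _ in range(steps):
        mask = refine_step(features, mask)
    return mask

# Toy usage: an 8x8 feature map and a crude initial mask.
feats = np.random.rand(8, 8)
initial = (feats > 0.7).astype(np.float32)
refined = iterative_refinement(feats, initial, steps=2)
```

The key design point this sketch mirrors is the feedback loop: each pass conditions the features on the previous mask, so foreground evidence accumulates over iterations rather than being decided in a single forward pass.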

Original language: English
Title of host publication: Medical Image Computing and Computer Assisted Intervention – MICCAI 2024 - 27th International Conference, Proceedings
Editors: Marius George Linguraru, Qi Dou, Aasa Feragen, Stamatia Giannarou, Ben Glocker, Karim Lekadir, Julia A. Schnabel
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 403-413
Number of pages: 11
ISBN (Print): 9783031721106
DOIs
State: Published - 2024
Event: 27th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2024 - Marrakesh, Morocco
Duration: 6 Oct 2024 - 10 Oct 2024

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 15008 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 27th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2024
Country/Territory: Morocco
City: Marrakesh
Period: 6/10/24 - 10/10/24

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 3 - Good Health and Well-being

Keywords

  • Medical Image Segmentation
  • Segment Anything Model
  • Tumor Lesion Segmentation
