Contrastive Swin Transformer with Masked Autoencoder for Pancreatic Cancer Computed Tomography Image Classification in the Internet of Medical Things

Abstract
Pancreatic cancer is one of the most aggressive malignant solid tumors, and early screening and diagnosis are key to improving patient survival. Although deep learning has made significant progress in medical image analysis, classifying pancreatic cancer computed tomography (CT) images remains highly challenging due to subtle inter-lesion differences and overlapping category distributions. Moreover, existing methods rely heavily on large amounts of high-quality annotations, which can lead to overfitting and hinders the effective exploitation of structural and semantic features within images. To address these challenges, we propose a Contrastive Swin Transformer with Masked Autoencoder (CSTMA) for pancreatic cancer CT image classification in the Internet of Medical Things (IoMT) environment. CSTMA leverages contrastive learning to enhance feature discriminability, while its multi-task self-supervised architecture based on the Masked Autoencoder guides the model to learn both structural and semantic representations. Comprehensive experiments on pancreatic cancer CT image classification show that the proposed CSTMA achieves superior performance across multiple evaluation metrics.
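The abstract describes a multi-task objective that pairs a contrastive term (for feature discriminability) with a masked-reconstruction term (MAE-style structural learning). The paper's actual loss is not given here, so the following is only a minimal numpy sketch of one common way such an objective is combined: an NT-Xent contrastive loss over two embedding views plus an MSE computed only on masked patches. The function names (`cstma_loss`) and the weighting `lam` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (InfoNCE) contrastive loss over two views of a batch.

    z1, z2: (N, d) embeddings of the same N samples under two augmentations.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize rows
    sim = z @ z.T / temperature                        # pairwise cosine sims
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    n = z1.shape[0]
    # The positive for row i < n is row i+n (its other view), and vice versa.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return (-(sim[np.arange(2 * n), pos] - logsumexp)).mean()

def masked_mse(pred, target, mask):
    """MAE-style reconstruction loss, averaged over masked positions only."""
    return ((pred - target) ** 2 * mask).sum() / mask.sum()

def cstma_loss(z1, z2, pred, target, mask, lam=1.0):
    """Hypothetical joint objective: contrastive + weighted masked reconstruction."""
    return nt_xent_loss(z1, z2) + lam * masked_mse(pred, target, mask)
```

In this kind of multi-task setup the contrastive term pulls the two augmented views of each scan together in embedding space, while the reconstruction term is evaluated only where patches were masked, forcing the encoder to retain local structural detail rather than just view-invariant semantics.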
| Original language | English |
|---|---|
| Journal | IEEE Internet of Things Journal |
| DOIs | |
| State | Accepted/In press - 2026 |
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs)
- SDG 3 Good Health and Well-being
Keywords
- Computed Tomography Images
- Contrastive Swin Transformer
- Internet of Medical Things
- Masked Autoencoder
- Pancreatic Cancer Classification