
UTSRMorph: A Unified Transformer and Superresolution Network for Unsupervised Medical Image Registration

  • Runshi Zhang, Hao Mo, Junchen Wang*, Bimeng Jie, Yang He, Nenghao Jin, Liang Zhu

*Corresponding author for this work

Affiliations: Beihang University; Peking University; General Hospital of People's Liberation Army

Research output: Contribution to journal › Article › peer-review

Abstract

Complicated image registration is a key issue in medical image analysis, and deep learning-based methods, which include ConvNet-based and Transformer-based approaches, have achieved better results than traditional methods. Although ConvNets can effectively exploit local information with low redundancy via small-neighborhood convolution, their limited receptive field prevents them from capturing global dependencies. Transformers can establish long-distance dependencies via a self-attention mechanism; however, computing the relationships among all tokens introduces high redundancy. We propose a novel unsupervised image registration method, the unified Transformer and superresolution (UTSRMorph) network, which enhances feature representation learning in the encoder and generates detailed displacement fields in the decoder to overcome these problems. We first propose a fusion attention block that integrates the advantages of ConvNets and Transformers by inserting a ConvNet-based channel attention module into a multihead self-attention module. The overlapping attention block, a novel cross-attention method, uses overlapping windows to obtain abundant correlations carrying matching information between a pair of images. These blocks are then flexibly stacked into a new, powerful encoder. In the decoder, generating a high-resolution deformation displacement field from low-resolution features is treated as a superresolution process. Specifically, a superresolution module replaces interpolation-based upsampling, mitigating feature degradation. UTSRMorph was compared to state-of-the-art registration methods on 3D brain MR (OASIS, IXI) and MR-CT datasets (abdomen, craniomaxillofacial). The qualitative and quantitative results indicate that UTSRMorph achieves better performance than the compared methods. The code and datasets are publicly available at https://github.com/Runshi-Zhang/UTSRMorph.
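The fusion attention block described above combines Transformer-style token mixing with ConvNet-style channel reweighting. The following is a minimal numpy sketch of that general idea, assuming a single-head scaled dot-product self-attention followed by a squeeze-and-excitation-style channel gate; the function names, the single-head simplification, and the exact placement of the gate are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, wq, wk, wv):
    # tokens: (N, C); single-head scaled dot-product attention
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def channel_attention(tokens, w1, w2):
    # squeeze-and-excitation style: global average pool over tokens,
    # two-layer bottleneck, sigmoid gate applied per channel
    pooled = tokens.mean(axis=0)                                  # (C,)
    gate = 1.0 / (1.0 + np.exp(-(np.maximum(pooled @ w1, 0) @ w2)))  # (C,)
    return tokens * gate                                          # broadcast over tokens

def fusion_attention(tokens, wq, wk, wv, w1, w2):
    # hypothetical fusion: the channel gate reweights the
    # self-attention output, mixing global token interactions
    # with ConvNet-like channel attention
    return channel_attention(self_attention(tokens, wq, wk, wv), w1, w2)

rng = np.random.default_rng(0)
N, C, r = 16, 32, 8          # tokens, channels, bottleneck ratio
tokens = rng.standard_normal((N, C))
wq, wk, wv = (rng.standard_normal((C, C)) * 0.1 for _ in range(3))
w1 = rng.standard_normal((C, C // r)) * 0.1
w2 = rng.standard_normal((C // r, C)) * 0.1
out = fusion_attention(tokens, wq, wk, wv, w1, w2)
print(out.shape)  # (16, 32)
```

The output keeps the token grid's shape, so such a block can be stacked like any Transformer layer in the encoder.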
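The abstract's superresolution decoder replaces interpolation-based upsampling with a learned upscaling step. A standard building block for this is sub-pixel (pixel-shuffle) rearrangement, shown here as a 2D numpy sketch; the 2D case and the function name are simplifying assumptions (the paper operates on 3D volumes).

```python
import numpy as np

def pixel_shuffle(x, r):
    # Sub-pixel rearrangement: (C*r*r, H, W) -> (C, H*r, W*r).
    # Each group of r*r channels is scattered into an r x r
    # spatial neighborhood, so upscaling detail comes from
    # learned channels rather than interpolation.
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# toy low-resolution feature map: 2 output channels, scale factor 2
feat = np.arange(2 * 4 * 3 * 3, dtype=float).reshape(2 * 4, 3, 3)
up = pixel_shuffle(feat, 2)
print(up.shape)  # (2, 6, 6)
```

In a decoder, a convolution first expands the channel count by r*r, and this rearrangement then trades those channels for spatial resolution, avoiding the feature degradation of plain interpolation upsampling.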

Original language: English
Pages (from-to): 891-902
Number of pages: 12
Journal: IEEE Transactions on Medical Imaging
Volume: 44
Issue number: 2
DOIs
State: Published - 2025

Keywords

  • ConvNets
  • Deformable image registration
  • Transformer
  • cross-attention
  • superresolution

