CrossMoDA 2023

CrossMoDA 2023 (Cross-Modality Domain Adaptation) is currently the largest cross-modality domain adaptation dataset in medical imaging. It targets segmentation of the key structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the tumour and the cochleas. Since its first edition in 2021 the challenge has been held three times; this page describes the latest release, CrossMoDA 2023. Unlike the first two editions (CrossMoDA 2021 and CrossMoDA 2022), which posed a two-class segmentation task (tumour and cochlea), the 2023 challenge further subdivides the tumour into intra-meatal and extra-meatal regions, yielding a three-class segmentation task. A defining feature of the challenge is unsupervised cross-modality segmentation: the source and target training sets consist, respectively, of unpaired annotated contrast-enhanced T1 (ceT1) scans and unannotated high-resolution T2 (hrT2) scans, acquired both pre- and post-operatively. The training set provides 227 annotated ceT1 cases and 295 unannotated hrT2 cases from 3 different institutions, the validation set contains 96 unannotated hrT2 cases, and the test set consists of 365 unannotated hrT2 cases.

Visualization images
CrossMoDA_2023_0.png
CrossMoDA_2023_1.webp
CrossMoDA_2023_2.webp
CrossMoDA_2023_3.webp
Dataset metadata
Dimension: 3D
Modality: MRI
Task type: segmentation
Anatomical structures: Vestibular Schwannoma, cochlea
Anatomical region: Head
Number of classes: 3
Number of cases: 983
File format: .nii.gz
File structure
crossMoDA
│
├── crossmoda23_training
│   ├── TrainingSource
│   │   ├── crossmoda2023_etz_1_ceT1.nii.gz
│   │   ├── crossmoda2023_etz_1_Label.nii.gz
│   │   └── ...
│   │
│   └── TrainingTarget
│       ├── crossmoda2023_etz_106_T2.nii.gz
│       ├── crossmoda2023_etz_107_T2.nii.gz
│       └── ...
│
└── crossmoda23_validation
    ├── crossmoda2023_etz_211_T2.nii.gz
    ├── crossmoda2023_etz_212_T2.nii.gz
    └── ...
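
The naming convention above makes it straightforward to pair each source ceT1 image with its label and to enumerate the unannotated target scans. Below is a minimal Python sketch, assuming the directory layout shown above and a hypothetical local root path; file names beyond the excerpt are simply discovered by globbing.

from pathlib import Path

# Hypothetical local path to the extracted archive; adjust to your setup.
root = Path("crossMoDA/crossmoda23_training")

# Source domain: annotated contrast-enhanced T1 scans.
# Each "*_ceT1.nii.gz" image has a matching "*_Label.nii.gz" mask.
source_pairs = []
for img in sorted((root / "TrainingSource").glob("*_ceT1.nii.gz")):
    label = img.with_name(img.name.replace("_ceT1.nii.gz", "_Label.nii.gz"))
    if label.exists():
        source_pairs.append((img, label))

# Target domain: unannotated high-resolution T2 scans.
target_scans = sorted((root / "TrainingTarget").glob("*_T2.nii.gz"))

print(f"{len(source_pairs)} annotated ceT1 cases, "
      f"{len(target_scans)} unannotated hrT2 cases")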
Image size statistics
Statistic   Spacing (mm)          Size (voxels)
Minimum     (0.35, 0.35, 0.78)    (232, 30, 11)
Median      (0.63, 0.63, 1.5)     (288, 288, 50)
Maximum     (0.86, 0.86, 3.5)     (512, 512, 256)
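
Per-axis statistics like those above can be recomputed directly from the NIfTI headers. The following is a sketch using nibabel, assuming the target-domain training scans sit under the hypothetical path shown; it reports the min, median, and max of voxel spacing and matrix size independently per axis.

import numpy as np
import nibabel as nib
from pathlib import Path

# Hypothetical path to the target-domain training scans; adjust as needed.
scans = sorted(Path("crossMoDA/crossmoda23_training/TrainingTarget").glob("*_T2.nii.gz"))

spacings, shapes = [], []
for path in scans:
    img = nib.load(str(path))
    spacings.append(img.header.get_zooms()[:3])  # voxel spacing in mm
    shapes.append(img.shape[:3])                 # matrix size in voxels

spacings = np.array(spacings, dtype=float)
shapes = np.array(shapes, dtype=float)

for name, arr in [("spacing (mm)", spacings), ("size (voxels)", shapes)]:
    print(name,
          "min", arr.min(axis=0),
          "median", np.median(arr, axis=0),
          "max", arr.max(axis=0))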
Citation
@article{DORENT2023102628,
title = {CrossMoDA 2021 challenge: Benchmark of cross-modality domain adaptation techniques for vestibular schwannoma and cochlea segmentation},
journal = {Medical Image Analysis},
volume = {83},
pages = {102628},
year = {2023},
issn = {1361-8415},
doi = {10.1016/j.media.2022.102628},
url = {https://www.sciencedirect.com/science/article/pii/S1361841522002560},
author = {Reuben Dorent and Aaron Kujawa and Marina Ivory and Spyridon Bakas and Nicola Rieke and Samuel Joutard and Ben Glocker and Jorge Cardoso and Marc Modat and Kayhan Batmanghelich and Arseniy Belkov and Maria Baldeon Calisto and Jae Won Choi and Benoit M. Dawant and Hexin Dong and Sergio Escalera and Yubo Fan and Lasse Hansen and Mattias P. Heinrich and Smriti Joshi and Victoriya Kashtanova and Hyeon Gyu Kim and Satoshi Kondo and Christian N. Kruse and Susana K. Lai-Yuen and Hao Li and Han Liu and Buntheng Ly and Ipek Oguz and Hyungseob Shin and Boris Shirokikh and Zixian Su and Guotai Wang and Jianghao Wu and Yanwu Xu and Kai Yao and Li Zhang and Sébastien Ourselin and Jonathan Shapey and Tom Vercauteren},
keywords = {Domain adaptation, Segmentation, Vestibular schwannoma},
abstract = {Domain Adaptation (DA) has recently been of strong interest in the medical imaging community. While a large variety of DA techniques have been proposed for image segmentation, most of these techniques have been validated either on private datasets or on small publicly available datasets. Moreover, these datasets mostly addressed single-class problems. To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large and multi-class benchmark for unsupervised cross-modality Domain Adaptation. The goal of the challenge is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS and the cochleas. Currently, the diagnosis and surveillance in patients with VS are commonly performed using contrast-enhanced T1 (ceT1) MR imaging. However, there is growing interest in using non-contrast imaging sequences such as high-resolution T2 (hrT2) imaging. For this reason, we established an unsupervised cross-modality segmentation benchmark. The training dataset provides annotated ceT1 scans (N=105) and unpaired non-annotated hrT2 scans (N=105). The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on hrT2 scans as provided in the testing set (N=137). This problem is particularly challenging given the large intensity distribution gap across the modalities and the small volume of the structures. A total of 55 teams from 16 countries submitted predictions to the validation leaderboard. Among them, 16 teams from 9 different countries submitted their algorithm for the evaluation phase. The level of performance reached by the top-performing teams is strikingly high (best median Dice score — VS: 88.4%; Cochleas: 85.7%) and close to full supervision (median Dice score — VS: 92.5%; Cochleas: 87.7%). All top-performing methods made use of an image-to-image translation approach to transform the source-domain images into pseudo-target-domain images. A segmentation network was then trained using these generated images and the manual annotations provided for the source image.}
}
Source information

Official website:
Visit the official website

Download link:

Download after login
Requires login and 知识星球 (Zhishi Xingqiu) membership

Related paper:
View the paper

Release date: April, 2023

Statistics

Created: 2025-09-10 08:41

Updated: 2025-09-10 09:49