Evaluation of AI methods for MRI segmentation on IDC data
Key Investigators
- Cosmin Ciausu (Brigham and Women's Hospital, US)
- Deepa Krishnaswamy (Brigham and Women's Hospital, US)
- Megha Kalia (Brigham and Women's Hospital, US)
- Andrey Fedorov (Brigham and Women's Hospital, US)
Presenter location: In-person
Project Description
We previously studied the application of a contrast-agnostic approach to MRI/CT abdominal organ segmentation, based on the generation of synthetic data. This synthetic data was then used as the training set for a fully supervised U-Net.
Since that study was performed, other methods for segmenting abdominal organs in MR have been published. Our goal is to evaluate these new methods on abdominal MR data from IDC and see how they compare to our method.
Objective
- Evaluate the performance of MR abdominal organ segmentation methods on IDC data.
- Get feedback on our own method.
Approach and Plan
- Select a subset of abdominal MR data from IDC.
- Create evaluation notebooks that run the newly published methods on this subset (see the sketch after this list).
- Compare to our method.
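As a rough illustration of the selection and inference steps, the sketch below uses the idc-index package to pick a few MR series from an IDC collection, downloads them, converts them to NIfTI with dcm2niix, and runs the TotalSegmentator MR task. The collection name, column names, subset size, and output paths are placeholders/assumptions, and this is only one possible way to run a published method, not our actual notebook code.

```python
import subprocess
from pathlib import Path

from idc_index import index
from totalsegmentator.python_api import totalsegmentator

# Series-level IDC metadata as a pandas DataFrame (column names assumed).
client = index.IDCClient()
series_index = client.index

# Hypothetical filter: MR series from the TCGA-LIHC collection.
selection = series_index[
    (series_index["Modality"] == "MR")
    & (series_index["collection_id"] == "tcga_lihc")
]
series_uids = selection["SeriesInstanceUID"].tolist()[:5]  # small evaluation subset

dicom_dir = Path("idc_downloads")
nifti_dir = Path("nifti")
dicom_dir.mkdir(exist_ok=True)
nifti_dir.mkdir(exist_ok=True)
client.download_from_selection(
    seriesInstanceUID=series_uids, downloadDir=str(dicom_dir)
)

# Convert the downloaded DICOM series to NIfTI (dcm2niix searches subfolders),
# then run the TotalSegmentator MR task on each converted volume.
subprocess.run(
    ["dcm2niix", "-z", "y", "-o", str(nifti_dir), str(dicom_dir)], check=True
)
for nifti in nifti_dir.glob("*.nii.gz"):
    out_dir = nifti_dir / (nifti.name[: -len(".nii.gz")] + "_totalseg")
    totalsegmentator(str(nifti), str(out_dir), task="total_mr")
```

Other published methods (e.g. MRSegmentator) would be wrapped the same way in their own notebooks, so that all methods see the identical downloaded subset.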
Progress and Next Steps
- Created a GitHub repo with Colab notebooks for evaluating MR segmentation methods.
- Look into label fusion methods such as STAPLE for building a consensus segmentation - WIP (see the sketch after this list)
- Perform a comparison of the methods to ground truth - WIP
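A minimal sketch, not our actual evaluation code, of how a per-organ STAPLE consensus and the comparison against expert ground truth could be wired up with SimpleITK. File names, the label map, the 0.5 probability threshold, and the list-of-images Execute call are assumptions; it also presumes all segmentations have been resampled to a common grid.

```python
import SimpleITK as sitk

# Multi-label segmentations from each method plus the expert ground truth
# (file names and label indices are hypothetical).
segs = {
    "ours": sitk.ReadImage("ours.nii.gz", sitk.sitkUInt8),
    "totalsegmentator": sitk.ReadImage("totalsegmentator.nii.gz", sitk.sitkUInt8),
    "mrsegmentator": sitk.ReadImage("mrsegmentator.nii.gz", sitk.sitkUInt8),
}
ground_truth = sitk.ReadImage("expert.nii.gz", sitk.sitkUInt8)
labels = {1: "liver", 2: "spleen", 3: "right kidney", 4: "left kidney"}

# Per-label STAPLE consensus: run binary STAPLE on each organ and keep voxels
# where the estimated probability exceeds 0.5. Overlaps between organs are
# resolved crudely by taking the larger label value.
staple = sitk.STAPLEImageFilter()
staple.SetForegroundValue(1)
consensus = sitk.Image(ground_truth.GetSize(), sitk.sitkUInt8)
consensus.CopyInformation(ground_truth)
for label in labels:
    masks = [seg == label for seg in segs.values()]
    prob = staple.Execute(masks)
    consensus = sitk.Maximum(consensus, sitk.Cast(prob > 0.5, sitk.sitkUInt8) * label)
sitk.WriteImage(consensus, "staple_consensus.nii.gz")

# Per-organ Dice of each method against the expert ground truth.
overlap = sitk.LabelOverlapMeasuresImageFilter()
for name, seg in segs.items():
    for label, organ in labels.items():
        overlap.Execute(ground_truth == label, seg == label)
        print(f"{name:>16} {organ:<12} Dice = {overlap.GetDiceCoefficient():.3f}")
```

Multi-label STAPLE or simple label voting would be alternative fusion strategies; which one we adopt is still open.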
Illustrations
Comparison of MR segmentation methods on a subject from the AMOS dataset:
- Top left = ground truth expert segmentations
- Top right = our approach
- Bottom left = TotalSegmentator
- Bottom middle = MRSegmentator
- Bottom right = our approach
Comparison of MR segmentation methods on a subject from the IDC TCGA-LIHC collection:
- 3D = our approach
- Left = our approach
- Middle = TotalSegmentator
- Right = MRSegmentator
Comparison of MR segmentations on a subject from the TotalSegmentator dataset:
(ground truth in bold)
- Top row = our approach
- Middle row = TotalSegmentator
- Bottom row = MRSegmentator
Dice distributions between AI segmentations and expert annotations on the AMOS22 MR training split.
Background and References
- Our method
- Newly published methods