How do I use NIfTI images for deep learning?

Currently, I am involved in a project that focuses on ViTs (Vision Transformers) and TransUNets (U-Net + ViT) for medical image segmentation. I have read a lot of research papers on medical image segmentation, but none of them explain how the .nii files are actually used.

I did come across a Python script someone wrote to convert .nii files to PNG files. However, it converts every slice of the input image (a .nii file contains the various "slices" of the organ captured during the scan, e.g. if a scan lasts 3 seconds and the MRI machine acquires a slice every 100 ms, we get 30 images). Link to the GitHub repo.
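For context, here is a minimal sketch of what such a conversion does, as I understand it (assuming nibabel and Pillow, a 3D single-modality volume, and an illustrative file name):

```python
import nibabel as nib
import numpy as np
from PIL import Image

# Load the NIfTI volume into a NumPy array (file name is illustrative)
volume = nib.load("BRATS_001.nii.gz").get_fdata()

# Rescale intensities to 0-255 so the slices can be saved as 8-bit PNGs
volume = (255 * (volume - volume.min()) / (np.ptp(volume) + 1e-8)).astype(np.uint8)

# Save each axial slice (third axis) as its own PNG
# (assumes a 3D volume; a 4D multi-modality volume would need an extra index)
for i in range(volume.shape[2]):
    Image.fromarray(volume[:, :, i]).save(f"slice_{i:03d}.png")
```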

Are there any resources (videos/papers/tutorials) that explain the process? Cheers and thanks in advance.

P.S.: I am using the Medical Segmentation Decathlon's brain tumor dataset.

https://nipy.org/nibabel/

This package provides read +/- write access to some common medical and neuroimaging file formats, including: ANALYZE (plain, SPM99, SPM2 and later), GIFTI, NIfTI1, NIfTI2, CIFTI-2, MINC1, MINC2, AFNI BRIK/HEAD, MGH and ECAT as well as Philips PAR/REC.
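In practice you usually don't need the PNG detour at all: nibabel loads the .nii file straight into a NumPy array, and you feed slices (for 2D models) or the whole volume (for 3D models) to the network. A minimal sketch, assuming PyTorch and illustrative file paths from the Decathlon brain tumor task:

```python
import nibabel as nib
import torch

# Load an image and its segmentation mask (paths are illustrative)
img = nib.load("imagesTr/BRATS_001.nii.gz")
seg = nib.load("labelsTr/BRATS_001.nii.gz")

data = img.get_fdata()    # e.g. (240, 240, 155, 4): 4 MRI modalities stacked on the last axis
labels = seg.get_fdata()  # (240, 240, 155): integer class labels per voxel

# Take one axial slice and move modalities first -> (C, H, W) tensor for a 2D model
z = data.shape[2] // 2
x = torch.from_numpy(data[:, :, z, :]).permute(2, 0, 1).float()
y = torch.from_numpy(labels[:, :, z]).long()
print(x.shape, y.shape)  # torch.Size([4, 240, 240]) torch.Size([240, 240])
```

A Dataset/DataLoader wrapper around this kind of loading is the usual next step; a 3D network would instead consume the full `(C, D, H, W)` volume without the slicing.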

One of the AI for Medicine programming exercises works with MRI data


So, I came across this notebook on Kaggle that explains what the dimensions are about. Found it really helpful!
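For anyone else landing here, you can also inspect those dimensions yourself with nibabel (file name is illustrative):

```python
import nibabel as nib

img = nib.load("BRATS_001.nii.gz")
print(img.shape)               # spatial dims, plus a modality/time axis if the image is 4D
print(img.header.get_zooms())  # voxel size in mm (and the step for the 4th axis, if any)
print(img.affine)              # voxel-to-world coordinate transform
```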