DICOM in DEEP LEARNING

Hello. Hope you are doing fine. I am an undergrad student working on a project. Can you guide me on a query? I have a CT scan dataset in DICOM format, meaning for a single CT scan I have 100 layers (slices). Should I feed the whole scan into my deep learning model, or should I feed only the single malignant layer into it? Thank you.

What task is your model trying to perform?

Detecting tumors from a CT scan so the patient doesn't have to take an MRI, to summarize it.

I did a similar project where I dealt with DICOM images. To train a model, you will have to convert them to JPG files. But as you said, if you have a method of extracting a layer from a DICOM image, why not extract multiple layers, or rather the layers containing prominent features, to train your model? From my viewpoint, it can make your model better at predicting.
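Just to illustrate the conversion step, here is a minimal sketch, assuming pydicom and OpenCV are installed; the folder paths are made up for the example:

import glob
import os

import cv2
import pydicom

def dicom_folder_to_jpg(dicom_dir, out_dir):
    # Convert every DICOM slice in dicom_dir to an 8-bit JPG in out_dir
    os.makedirs(out_dir, exist_ok=True)
    for path in sorted(glob.glob(os.path.join(dicom_dir, '*.dcm'))):
        ds = pydicom.dcmread(path)            # read one slice
        img = ds.pixel_array                  # raw CT intensities
        # Scale the slice to 0-255 so it can be stored as a standard image
        img8 = cv2.normalize(img, None, alpha=0, beta=255,
                             norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
        name = os.path.splitext(os.path.basename(path))[0] + '.jpg'
        cv2.imwrite(os.path.join(out_dir, name), img8)

# e.g. dicom_folder_to_jpg('scans/patient_001', 'jpg/patient_001')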

You will need a set of training examples that are balanced between normal and malignant cases.

OK. But I have DICOM format images. For a single case (CT scan), 111 DICOM images in the form of layers add up to make it. I wanted guidance on how to deal with this and what the procedure should be.

OK. But for a single CT scan I have 110 DICOM images. They are in the form of layers. Did you have that too, or do you have any ideas? Converting is easy, but with 110 images per CT scan, what should my procedure be?

I have no idea what a DICOM image is. Sorry.

No problem, thanks for your time. Is there any way I can reach out to you or connect with you? That would be helpful.

If you’re asking me: I’m only available via the forum.

OK. Thanks.

Have you looked at the AI for Healthcare course? IIRC there is a programming exercise that uses DICOM brain scans.

Hi @Smoooth

For this query, I would add the whole data, as selecting only the single malignant layer would introduce information bias into your model; a model trained to detect a malignant layer that way would not give similar results when you try it on CT scans you did not use while building the model.

Kindly provide information about what kind of cancer this is and what part of the body these CT scans cover. Are they specific to the brain only, or do they cover the whole body?

What kind of tumor detection are you doing? That is, is it body-part specific or organ specific? Tumor classification depends on the information you provide.

Cancer stages are based on T (tumor: local presence and its magnitude), N (node: lymphatic involvement) and M (metastasis: spread of the tumor), and each of these three is further divided into multiple stages based on tumor involvement, so choosing a single malignant layer will not do justice to your model. For further correlative guidance, kindly provide which tumor classification and what kind of CT scan datasets you have.

Regards
DP

Nope. Can you provide me with a link? Thanks.

I have CT scans of the brain, and I want to use them to detect any unhealthy symptoms based on these CT scans.

Hi @Smoooth

Go through the below link

The CT scan assignment from the AI for Healthcare course that the other mentor mentioned cannot be shared here, as those assignments are part of the graded work, and only learners who have purchased or completed the AI for Medicine specialisation can access them.

I had done a spinal degenerative classification model which uses CT scan images, where we formatted the data based on the specificity or sensitivity of the features that classify the degenerative disease in the spine. Here we used the images from the DICOM format files and then fed the model disease-specific images to improve its detection.

My reason for explaining this to you is that you should make sure all your images are in the same format, i.e. the pixel dimensions match, and then use the images for your model in whatever way you have planned.
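For example, a quick consistency check could look like the sketch below (assuming pydicom and OpenCV; the file location and target size are just placeholders):

import glob

import cv2
import pydicom

paths = sorted(glob.glob('scans/patient_001/*.dcm'))   # placeholder location
target_shape = (512, 512)                               # pick one common size

slices = []
for path in paths:
    # Read each slice as float32 so it can be resized safely
    img = pydicom.dcmread(path).pixel_array.astype('float32')
    if img.shape != target_shape:
        # Resize any slice whose pixel dimensions do not match
        img = cv2.resize(img, target_shape, interpolation=cv2.INTER_LINEAR)
    slices.append(img)

print(len(slices), 'slices, all of shape', slices[0].shape)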

regards
DP

Thank you. Your knowledge was very helpful.

I am sharing a part of the code showing how these images were used in the classification.

Basically, here we used the same DICOM images in array form, normalised them, and then converted the images (note that x and y are coordinate values related to the SeriesID and Series Description given in the DICOM metadata files).

import cv2
import matplotlib.pyplot as plt

def display_coordinates_on_image(c, i, title):
    # c: dict holding the 'x' and 'y' coordinates from the DICOM metadata
    # i: dict holding the loaded DICOM dataset under the 'dicom' key
    center_coordinates = (int(c['x']), int(c['y']))
    radius = 10
    color = (255, 0, 0)  # Red color in BGR
    thickness = 2

    IMG = i['dicom'].pixel_array
    IMG_normalized = cv2.normalize(IMG, None, alpha=0, beta=255,
                                   norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)

    # Convert the grayscale slice to 3 channels so the circle can be drawn in red
    IMG_bgr = cv2.cvtColor(IMG_normalized, cv2.COLOR_GRAY2BGR)
    IMG_with_circle = cv2.circle(IMG_bgr, center_coordinates,
                                 radius, color, thickness)

    # Convert the image from BGR to RGB for correct color display in matplotlib
    IMG_with_circle = cv2.cvtColor(IMG_with_circle, cv2.COLOR_BGR2RGB)

    # Display the image
    plt.imshow(IMG_with_circle)
    plt.axis('off')  # Turn off axis numbers and ticks
    plt.title(title)
    plt.show()

Further, the classification was done based on the foraminal narrowing values and other findings, which designate what value is assigned to degenerative stage 1 and beyond.

P.S. This is only for your review and practice; as I have not seen your metadata, I cannot confirm that you could follow the same steps mentioned above.

Regards
DP

I understood the basic classification part. For one person, all of his DICOM image data is classified and converted. And because I am new here, I don't have any metadata yet, but I'll try to understand your code. Thanks.

Hello,

I am not familiar with the DICOM format, but based on your explanation of its structure, if you can read the entire DICOM sample into a multidimensional array, you can use 3D convolutions to process your data. This approach avoids hand-engineering the data and preserves the correlations within it. If I understand correctly, a single DICOM sample should produce n images with dimensions (height, width, channels), which is suitable for 3D convolutions. This method allows your model to extract spatial features both within and across slices, without manual feature selection that might disrupt the data's interconnections. You can then incorporate transformers for better local and global representations. There are some suitable architectures that you can also use.
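To make that concrete, here is a minimal sketch of the idea, using PyTorch purely as an example; the volume size, downsampling and two-class output are assumptions, not part of the original suggestion:

import torch
import torch.nn as nn

# Assume the ~100 slices of one scan have been stacked into a single volume,
# downsampled here to (depth, height, width) = (64, 128, 128) to keep it light.
volume = torch.randn(1, 1, 64, 128, 128)  # (batch, channels, depth, H, W)

model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),   # learns features across slices too
    nn.ReLU(),
    nn.MaxPool3d(2),
    nn.Conv3d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),                     # collapse the volume to one feature vector
    nn.Flatten(),
    nn.Linear(16, 2),                            # e.g. tumor vs. no tumor
)

logits = model(volume)
print(logits.shape)  # torch.Size([1, 2])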