Hello,
I have some questions and concerns about the image preprocessing approach used in Week 1’s lab.
Regarding the normalization method:
The lab uses ImageDataGenerator with samplewise normalization (centering and std normalization per image). I noticed the normalized images look very harshly contrasted, and I’m uncertain whether this is the intended result.
My concern is that this approach is very sensitive to outliers (e.g., background and medical devices), and I’m skeptical that it is appropriate for medical imaging applications.
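To make the concern concrete, here is a minimal NumPy sketch of per-image standardization (my approximation of what ImageDataGenerator does with samplewise_center and samplewise_std_normalization; the synthetic image and values are illustrative, not from the lab):

```python
import numpy as np

def samplewise_standardize(img):
    """Per-image z-score, roughly what ImageDataGenerator does with
    samplewise_center=True and samplewise_std_normalization=True."""
    img = img.astype(np.float32)
    return (img - img.mean()) / (img.std() + 1e-7)

# Synthetic image: uniform "anatomy" at intensity 100, plus a small
# bright artifact patch (a stand-in for a medical device).
img = np.full((64, 64), 100.0)
img[:4, :4] = 4000.0

z = samplewise_standardize(img)
# The artifact dominates the mean and std: the anatomy pixels end up
# squeezed into a narrow band near zero, while the artifact stays huge.
print(z[32, 32])  # anatomy pixel, close to 0
print(z[0, 0])    # artifact pixel, large positive value
```

A handful of extreme pixels inflates the standard deviation, so the tissue of interest loses dynamic range after normalization; this is exactly the outlier sensitivity I am worried about.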
From some reading, I see more common approaches would be:
- Percentile normalization (clipping extreme intensities before rescaling, which addresses the outlier sensitivity of the method used in the lab)
- Windowing (especially for CT/MRI, which we use later in the course)
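For reference, a rough sketch of the two alternatives I have in mind (the percentile cutoffs and the window center/width below are illustrative values I chose, not ones from the course):

```python
import numpy as np

def percentile_normalize(img, lo=1.0, hi=99.0):
    """Clip to the [lo, hi] intensity percentiles, then rescale to [0, 1].
    Bright outliers beyond the 99th percentile no longer distort the range."""
    p_lo, p_hi = np.percentile(img, [lo, hi])
    img = np.clip(img, p_lo, p_hi)
    return (img - p_lo) / (p_hi - p_lo + 1e-7)

def window(img, center, width):
    """CT-style intensity windowing: keep [center - width/2, center + width/2]
    (in Hounsfield units for CT), rescaled to [0, 1]."""
    lo, hi = center - width / 2, center + width / 2
    return (np.clip(img, lo, hi) - lo) / (hi - lo)

# Fake HU-like values for demonstration only.
hu = np.random.default_rng(0).normal(40, 200, (64, 64))
soft_tissue = window(hu, center=40, width=400)  # a typical soft-tissue window
pct = percentile_normalize(hu)
```

Both keep the output in a fixed [0, 1] range regardless of a few extreme pixels, which seems more robust than per-image standardization for this kind of data.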
Is the aggressive contrast in the normalized images the expected output, or is there a visualization issue? Also, is this sample-wise standardization approach actually commonly used in medical imaging AI, or are the methods I mentioned above more standard?
If the sample-wise standardization is not typical, the course should be updated to reflect industry-standard practices.
General course feedback:
As far as I can remember, this is the only time preprocessing/normalization is mentioned, and for those coming in without a Computer Vision background, that coverage is far too thin. To be honest, the course content as a whole feels thin: I have the constant impression that topics are merely mentioned, but that we don’t really get to learn from the videos. The course feels miles apart from others, such as the Deep Learning Specialization.
On the topic of image preprocessing, one suggestion is to add a video explaining common image normalization approaches and when each is useful (similar to what is done for data augmentation).
