Why would validation loss keep increasing when fine-tuning the pretrained ResNet50 model?

I’m fine-tuning a pre-trained ResNet50 model on the ADHD200 structural MRI dataset. I observed that the validation loss starts to increase after the first 10-15 epochs and keeps increasing from there. I know this looks like a case of overfitting, but the steady rise in validation loss makes me wonder whether the model is learning at all, or whether something is wrong.

Background:

  1. I’m working with the ADHD200 dataset. I have balanced the dataset to have 456 train, 114 validation, and 154 test samples.
  2. Since ResNet50 is designed for 2D images and I have 3D brain MRI scans, I have extracted 2D slices from each MRI and apply the model to the slices.
  3. I have frozen all layers except the fully connected layer, which is being fine-tuned for a binary classification task (ADHD vs. healthy).

Check whether the dataset you are working with has more healthy samples than ADHD samples, or vice versa; an imbalanced dataset is another possible cause of this behaviour.
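A quick way to run this check is to count the labels per split, for example with `collections.Counter` (the labels here are hypothetical, 0 = healthy and 1 = ADHD):

```python
from collections import Counter

# Hypothetical label list for one split; use your real labels here
train_labels = [0, 0, 1, 0, 1, 1, 0, 0]

counts = Counter(train_labels)
print(counts)  # shows how many samples each class has
```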

Also, the validation loss staying high can indicate underfitting, so check whether your validation set is large enough to be representative.

Also, looking at the train:val:test split of 456:114:154, you clearly have very little data.

Could you explore data augmentation?

The original dataset does have an imbalance; that’s why I balanced it to have an equal number of healthy and ADHD samples.

I just found it very strange that the validation loss keeps increasing and is at its lowest only at the first epoch; it made me wonder whether the model is even learning.

As for underfitting, I’m thinking I can try smaller variants like ResNet18 or ResNet34, since they are less complex.

This is for a college project, and my supervisor has instructed me not to perform data augmentation since I’m working with medical data.