Why am I getting the same accuracy on validation data? - Python - Computer Vision - Deep Learning

I’m getting the same accuracy on the validation data every epoch, and the accuracy on the training data barely varies.
The training data consists of 19670 images (14445: class 0, 5225: class 1). The validation data consists of 4918 images (3612: class 0, 1306: class 1).
Because of the class imbalance, I computed class weights so that misclassifying the minority class is penalized more heavily.
However, the validation accuracy stays the same and the loss barely changes from epoch to epoch.
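For reference, "balanced" class weights can be computed from the class counts given above with the usual formula w_c = n_samples / (n_classes * n_c) (this is the same formula scikit-learn's `compute_class_weight("balanced")` uses); a minimal sketch:

```python
# Balanced class weights: w_c = n_samples / (n_classes * n_c)
counts = {0: 14445, 1: 5225}  # training-set counts from the post
n_samples = sum(counts.values())
n_classes = len(counts)
class_weight = {c: n_samples / (n_classes * n) for c, n in counts.items()}
print(class_weight)  # ≈ {0: 0.68, 1: 1.88} — minority class weighted ~2.8x more
```

The resulting dict can be passed directly as the `class_weight` argument of Keras's `model.fit`.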

I applied data augmentation to all the training data. I am also using VGG16: I unfroze the last 5 layers and added some dense layers on top of the network.

I have tried different learning rate values, but I don’t get any significant improvement and the results stay the same. The training and validation accuracy follow the same pattern every run: no improvement, and the same values repeat.

The neural network consists of the convolutional base of VGG16, followed by a GlobalAveragePooling layer, one Dropout layer (0.3), and two dense layers (100 and 1 neurons). The optimizer is Adam.
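A minimal sketch of that architecture in Keras, assuming 224×224 RGB inputs and a sigmoid output for the binary task (`weights=None` is used here only to avoid downloading the ImageNet weights; the original setup presumably used `weights="imagenet"`):

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Convolutional base of VGG16 (input shape is an assumption)
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the base before fine-tuning

# Head as described: GAP -> Dropout(0.3) -> Dense(100) -> Dense(1)
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(100, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # binary classification
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
print(model.output_shape)  # (None, 1)
```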

I want to know why this is happening. I have tried changing the hyperparameters, such as the learning rate, the number of neurons, and the number of layers. I also applied class_weight to penalize errors on the minority class, but I don’t get any significant improvement.

Please share your notebook and a link to the dataset.

Thanks for the reply! I managed to solve it. It was the number of unfrozen layers: I decreased it, and the problem disappeared.
I think it’s because the model already worked well without fine-tuning, so I unfroze only the last convolutional layer and the problem disappeared. Or is there another reason why the problem disappeared?
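For anyone reading later, unfreezing only the last convolutional layer of VGG16 (named `block5_conv3` in Keras) can be sketched like this (`weights=None` only to avoid the ImageNet download in this illustration):

```python
from tensorflow.keras.applications import VGG16

base = VGG16(weights=None, include_top=False)
base.trainable = True
# Unfreeze only the last convolutional layer; keep everything else frozen
for layer in base.layers:
    layer.trainable = (layer.name == "block5_conv3")

trainable = [l.name for l in base.layers if l.trainable]
print(trainable)  # ['block5_conv3']
```

Fewer trainable layers means fewer pretrained weights get disturbed by large early gradients, which is one plausible reason the training stabilized.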

I don’t know.
