I built a deep learning model, and after 100 epochs the validation-accuracy curve oscillates heavily from epoch to epoch. How can we smooth out the validation accuracy? The validation loss, however, is satisfactory.
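One common way to reduce that epoch-to-epoch noise when plotting is a simple moving average over the accuracy history. A minimal sketch (the `val_acc` values below are hypothetical):

```python
def moving_average(values, window=5):
    """Smooth a noisy metric curve with a simple moving average."""
    smoothed = []
    for i in range(len(values)):
        start = max(0, i - window + 1)  # shrink the window at the left edge
        chunk = values[start:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# Hypothetical noisy validation-accuracy history
val_acc = [0.70, 0.82, 0.74, 0.85, 0.76, 0.88, 0.80]
print(moving_average(val_acc, window=3))
```

Note this only smooths the plot; if the underlying training is unstable, a lower learning rate or larger validation set is the real fix.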


This is based on image contrast; my data-augmentation suggestion was aimed at the class imbalance you mentioned, where some classes have only around 900 images.

So how did you address this issue?

I'm sorry, I will need more information to understand how you labelled your data; it is still unclear.

The data is labelled according to its containing folder.
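Folder-based labelling like that can be sketched in a few lines; the directory layout below (`dataset/<class>/<image>`) is an assumed convention, not the poster's confirmed structure:

```python
from pathlib import Path

def label_from_path(path):
    """Derive the class label from the image's parent folder name,
    e.g. dataset/benign/img001.png -> 'benign'."""
    return Path(path).parent.name

print(label_from_path("dataset/benign/img001.png"))     # benign
print(label_from_path("dataset/malignant/img042.png"))  # malignant
```

Most framework loaders (e.g. Keras's `image_dataset_from_directory`) apply exactly this convention automatically.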

Actually, the original dataset contains 16 classes, but I dropped the classes whose share of samples is under 10%. I have not generated or curated any images using flips or rotations. The majority class has 2000 samples while the minority class has 956; do I need to perform augmentation and generate new data?

Can I know how many of the 8 selected classes in your multi-class cancer classification fall under the minority class?

You removed almost 8 classes because their distribution was less than 10%?
Usually, if your dataset is in the range of 100,000 samples, removing classes below 10% would be reasonable.

But you have around 10,000 samples, and dropping classes below a 10% share at that scale means losing important features your model could learn from. You could instead have removed only the classes in the 1-2% range and kept the others.
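That filtering rule can be sketched as follows, using a hypothetical dict of per-class image counts:

```python
def classes_to_keep(counts, min_share=0.02):
    """Keep only classes whose share of the total dataset is at
    least `min_share` (e.g. 0.02 = 2%)."""
    total = sum(counts.values())
    return {c: n for c, n in counts.items() if n / total >= min_share}

# Hypothetical class distribution
counts = {"A": 2000, "B": 956, "C": 150, "D": 40}
print(classes_to_keep(counts))  # D falls below 2% and is dropped
```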

Also, I noticed in your last comment you mentioned 2000 images for the majority class and 956 for the minority class, but you stated you have 10,000 images in total. What about the remaining 8,000?

When labelling, we usually select a random mix of images that contain the feature or class of interest and images that do not.

Then resize the original images and their labels to the same dimensions.

Then, since you have fewer minority-class images, you could use an image data generator to increase the overall data for both the majority and minority classes. I am not telling you to flip or rotate the images; I leave that to you, since you have access to your data. You could shift them slightly along the width, use nearest fill, or just flip. But remember: when you add these transformations, do not introduce too much variation, e.g. keep the width shift to at most 0.1; flip and rotation are simply on/off choices. The class mode also plays a role here, since you can generate categorical labels from the available data, increasing your overall dataset as well as the minority-class images.
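A minimal NumPy sketch of that kind of mild augmentation, assuming a small width shift with nearest fill plus an optional horizontal flip (in Keras this would map to `ImageDataGenerator(width_shift_range=0.1, fill_mode='nearest', horizontal_flip=True)`):

```python
import numpy as np

def augment(img, width_shift=0.1, flip=True):
    """Mild augmentation: shift the image right by at most
    `width_shift` of its width (nearest fill at the border) and
    optionally flip it left-right."""
    h, w = img.shape[:2]
    shift = int(w * width_shift)
    out = np.roll(img, shift, axis=1)
    # replace the wrapped-around columns with the nearest real column
    out[:, :shift] = out[:, shift:shift + 1]
    if flip:
        out = out[:, ::-1]
    return out

img = np.arange(100, dtype=float).reshape(10, 10)
aug = augment(img)
print(aug.shape)  # augmented image keeps the original size
```

Keeping the shift small (0.1) preserves the lesion features while still giving the minority class extra variety.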

I dropped those classes because they do not have enough data: together they hold only 200-300 samples, and they are all from the benign class, so I decided to drop them. The remaining 8,000 images belong to the other classes.

Thanks for your fruitful comments.
Should I remove only the classes that account for 1% or 2% of the data, and include the rest?