Hello, can anyone help please? I am training a YOLOv8 model to detect dental lesions in dental X-ray images. Here are some samples of my data:
After training the model, I noticed a problem with sensitivity: the miss rate is still high and the model drops a lot of detections, as you can see in these results.
I had a total of 221 images containing exactly 1020 target objects, so I did a bit of image augmentation (pixel-level transforms only) such as random histogram equalization, blurring, and contrast changes. From each original image I generated 6 augmented images, so the final count is 221 + 221*6 = 1547 images with their label text files, divided as follows:
train: 60%, val: 20%, test: 20%
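To make the augmentation step concrete, here is roughly the kind of pixel-level pipeline I described (a minimal sketch written with Albumentations for illustration; the exact transform parameters and file paths are placeholders, not my actual values):

```python
import cv2
import albumentations as A

# Pixel-level transforms only, so the existing label files stay valid
# for every augmented copy (no geometric changes are applied).
transform = A.Compose([
    A.Equalize(p=0.5),                  # histogram equalization
    A.Blur(blur_limit=3, p=0.5),        # light blurring
    A.RandomBrightnessContrast(p=0.5),  # brightness/contrast change
])

img = cv2.imread("sample_xray.png")     # placeholder path
for i in range(6):                      # 6 augmented copies per original
    aug = transform(image=img)["image"]
    cv2.imwrite(f"sample_xray_aug{i}.png", aug)
    # the original label .txt is simply copied for each augmented image
```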
The training hyperparameters are below:
Please share any suggestions you have for better results.
Thank you in advance.
Did you label your data for this model?
Yes, a dental pathologist helped me with the data annotation; the sample images above are actually from the labeled images I have.
Can I ask: when you split the dataset, did you use random selection?
The part where you generated 6 augmented images from each of the 221 originals and then added them back to the same pool (221 + 221*6) is what is creating the issue.
Have a look at this link. Your data augmentation idea was quite right, but you should not have created the 6 augmented copies before splitting the dataset.
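In other words, split the 221 original images first and only then augment the training split. A rough sketch of what I mean (the folder paths and the augment_image() helper here are just placeholders):

```python
import random
from pathlib import Path

random.seed(0)

# Collect the 221 ORIGINAL images only; augmented copies are created later.
originals = sorted(Path("dataset/originals/images").glob("*.png"))
random.shuffle(originals)

# 60/20/20 split on the originals, so augmented copies of a val/test
# image can never leak into the training set.
n = len(originals)
train = originals[: int(0.6 * n)]
val = originals[int(0.6 * n): int(0.8 * n)]
test = originals[int(0.8 * n):]

# Augment ONLY the training originals; val and test stay untouched.
for img_path in train:
    for i in range(6):
        augment_image(img_path, out_dir="dataset/train/images", index=i)  # placeholder helper
```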
Hope it helps!!!
Regards
DP
Thank you, I will check it and reply to you.
Okay, I still cannot figure out where the problem is.
I understand that the more images we get from augmentation, the better it will be for training. That is why I created 6 different augmented images from each original (6 was chosen arbitrarily, for no particular reason), each with different blurring and contrast levels.
The reason I chose pixel-level transforms rather than spatial-level transforms is that panoramic X-rays follow standards, so why would I train my model on rotated images if I am sure it will never be asked to detect on rotated images later?
Please note that I am trying to segment the lesions, not just classify them, which is why I think the model will need more images to learn.
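For reference, this is roughly how I start training (a minimal sketch using the Ultralytics Python API; the dataset YAML path and the values shown are placeholders, not my real hyperparameters):

```python
from ultralytics import YOLO

# Pretrained YOLOv8 segmentation checkpoint (model size is illustrative).
model = YOLO("yolov8n-seg.pt")

# The dataset YAML and the values below are placeholders, not my actual settings.
model.train(
    data="dental_lesions.yaml",  # paths to train/val images plus class names
    epochs=100,
    imgsz=640,
)
```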
Hello Zidan,
Based on your post, I think you need to keep iterating on the data augmentation part until you get better accuracy.
The next issue I noticed is with the splitting of the data.
You ended up with 1547 images, and splitting those into 60-20-20 is another issue. Please read the link I sent carefully and you will understand. Note down the points where your pipeline differs from the one in the link, then work on them. If you still have doubts, let me know.
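For example, splitting the 221 originals 60-20-20 first gives roughly 133 train / 44 val / 44 test images; augmenting only the 133 training originals with 6 extra copies each then gives 133 × 7 = 931 training images, while val and test remain unaugmented.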
Regards
DP