YOLOv8: no detections

I have trained a YOLOv8 model on a custom dataset successfully.
When I run prediction on a new dataset, it shows no detections. I then ran prediction on my training dataset, and it still shows no detections.
What is the issue, please?


I’m not an expert in YOLO, but it seems like the first obvious question to answer is what you mean by “successful training” if the accuracy of your model on your own training set is so low. At least I assume that some of your training samples have objects that should be detected, right? So your resulting trained model is successful only on samples that are completely negative (no objects present). Presumably that’s a pretty small subset of your training set, resulting in pretty low overall training accuracy …

What am I missing here?


Sorry for the confusion, I use my training set as my validation set. But it is not detecting anything at all. Secondly, it’s only showing a single point in my loss graph.
Below are photos to make it clearer.


I have never tried to train YOLO myself and only know what was covered in the YOLO assignment in DLS C4 W3. I think you’re using a more advanced version of YOLO. But I think what this says is that there is something fundamentally wrong with how you have set things up. The training is basically not working at all. I assume you’re starting with some sort of pre-trained model and then doing transfer learning with your own specific dataset. E.g. maybe you didn’t get the pre-existing model loaded or you’re using an incompatible version of TF or … But these are just general suggestions based on no actual personal experience with YOLOv8. I don’t really know how to advise you other than to do web searches for tutorials about how to do transfer learning using YOLOv8.

Sorry, but maybe you’ll get lucky and there are other folks listening here who have been through a similar exercise.

Thank you very much, sir.
I think the problem might be the small dataset I used for the training; I used only 20 images because of the high processing power it needs.
I will try with maybe 100.

Are you training from scratch or are you taking a pretrained model and then doing additional training with your data? I think training YOLO from scratch is serious business and if you are starting from a blank slate, I would be very surprised if you could get any meaningful results with a training dataset that small. My guess is that 100 images is not a “large” training set in this kind of space.

Did you look at the YOLOv8 website? Surely they must discuss this type of issue there.

Yeah, I am building from scratch, and I also think the same: maybe the dataset is too small. I will check the website immediately and give you feedback.
Thanks for the concern.

Tagging someone who could help you with this, @ai_curious; he has some great insight into the YOLO algorithm.

This is the culprit. How can model training show you it detects anything meaningful when your training and validation sets are the same?

Hello Kevin @ai_curious,

Can you give your review on this?


I have never studied the Ultralytics object detection platform that they call YOLO (the people who did work on the original trunk of YOLO versions aren’t thrilled about the appropriation of the brand/name; from my brief review a couple of years ago, my takeaway was that the Ultralytics code has a fundamentally different architecture and approach, and little of what you learn from the lectures and exercises in this course - based on YOLO v2 from the original author - directly applies). As suggested by @paulinpaloalto above, if Ultralytics’ platform is what you want to use, you’ll have to look to their docs or elsewhere on the interweb for support.

I can say that training any of the first 3 versions of YOLO from scratch is not something one can just do trivially. IIRC, the original YOLO pipeline was split into two, with classification and localization trained separately - localization was transfer-learned from a smaller network trained on classification. It made extensive use of augmentation, since objects need to be trained on in many of the grid cell locations and anchor box / aspect ratio combinations, not just the default one object per training image that many datasets provide. It also used some 3 orders of magnitude more input images than is discussed in the thread above. Training runs took a week on high-end, state-of-the-art GPUs.

About 5 years ago I spent about 6 months trying to train a YOLO v2 from scratch. Learned a lot, but never got a working model. Some of my struggles are documented on this platform. @esssyjr if you succeed, you should definitely share your experience with the community.


I agree that this is not a correct approach, but that doesn’t mean that the training can’t learn anything at least on general principles. It just means you won’t be able to tell if your model is overfitting.
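To make the overfitting point concrete, here is a minimal sketch (plain Python, hypothetical file names) of holding out a validation split that the model never sees during training; validating on the training data itself cannot reveal overfitting:

```python
import random

def train_val_split(image_paths, val_fraction=0.2, seed=42):
    """Shuffle and split a list of image paths into train/val subsets.

    A held-out validation set the model never trains on is what lets
    you detect overfitting; reusing the training set for validation
    cannot.
    """
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)  # deterministic shuffle for reproducibility
    n_val = max(1, int(len(paths) * val_fraction))
    return paths[n_val:], paths[:n_val]

# Hypothetical dataset of 20 annotated images, as in the thread
images = [f"img_{i:03d}.jpg" for i in range(20)]
train, val = train_val_split(images)
print(len(train), len(val))  # 16 4
```

The matching label files would be split the same way, keyed by file name.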

But I think we are all agreeing that this is not the top-level problem here. The top-level problem is, at a minimum, insufficient training data, and of course there may also be problems in how the training logic was constructed.

Have you considered the Transfer Learning approach? If you are limited in your processing power and dataset size, it’s at least worth considering. Are there pretrained YOLOv8 models out there that you can use as the starting point? There are various points in the DeepLearning.AI courses that cover the ideas of Transfer Learning. The one I’ve been through is in DLS C4 W2. Definitely worth a look if you are not already familiar with Transfer Learning.


Hello Paul,

I stated the same thing, Paul, as I mentioned “something” and not “anything.”

For a trained model to detect something, the validation dataset is the checkpoint that tells you whether there is anything of significance in the model created. Hence, no detections.

The reason I could point this out is his graph image across the various datasets: since the model is validating on the same data it trained on, it could not detect anything of significance beyond similar pictures, similar inputs, and the same parameters.

@esssyjr I don’t know if you need any help, but one of the other AI learners here has used an Ultralytics dataset to create his own model. I am tagging him: @Honza_Zbirovsky. Perhaps you could say something about how you generated data from the given dataset you used.


Thank you for your concern.
YOLOv8 works like that; you can use the same training data to validate. It gives some insight into the model.

Hello sir, thank you for your concern.
Training in YOLOv8 has two options: you can train with their pretrained model, or build your own model from scratch. When building from scratch, you use your annotated images and their labels. In my case, I used only 20 pictures for training due to the low processing power of my computer. Maybe the model is not good enough to detect anything; that’s why I am getting no detections.
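For reference, both training modes in Ultralytics YOLOv8 read the dataset from a small YAML file pointing at the annotated images and labels. A minimal sketch with hypothetical paths and a hypothetical class name (check the Ultralytics docs for the exact fields your version expects):

```yaml
# Hypothetical data.yaml for an Ultralytics YOLOv8 custom dataset
path: datasets/my_custom_set   # dataset root directory
train: images/train            # training images (matching labels/ dir is found by convention)
val: images/val                # validation images, ideally held out from training
names:
  0: my_object                 # class index -> class name
```

The same file is passed to training whether you start from a pretrained checkpoint or from a bare model config.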
Secondly, what is your stance on me using YOLOv8 versus another YOLO version? I want to build my career in computer vision, so I know it’s essential to make this choice as early as possible.
Thank you once again.

Hi @Deepti_Prasad.

Thanks for tagging me. I used YOLOv8 just for some trials, but never trained it completely from scratch.

Anyway, the Ultralytics documentation is pretty straightforward, with many code examples.

@esssyjr have you looked here? This could definitely help you:

Here GitHub: GitHub - ultralytics/ultralytics: NEW - YOLOv8 🚀 in PyTorch > ONNX > OpenVINO > CoreML > TFLite
Here Documentation: Python - Ultralytics YOLOv8 Docs


Surely, sir,
I am looking into that.

Thank you, Deepti_prasad. I will wait for his insights.

If I got you right, the second one is the better approach, right?

@Honza_Zbirovsky sir, please, what is your advice on my second statement?
Which one will work best for me?

Actually, any tool you use today will be obsolete in six months, when a newer version or a better tool becomes available.

You need to be able to move fast and adapt. So just pick one that is easy to use while you are learning.

There are no permanent decisions in Machine Learning.