I uploaded my personal photos and my friends' photos to see whether the model detects the person in the image as a human or a horse.
Pictures from Pixabay and stock images seem to work, but when I take a clear photo (no background) of my friend, the model classifies him as a horse.
I copied the code line by line and got an accuracy of 98%, so I'm not sure what's happening.
Based on the training data, the model has learned certain features that differentiate a horse from a human. There are two cases where the model can make an incorrect prediction:
- The test image comes from a different distribution than the training dataset (e.g., different backgrounds, lighting, or image style). Please see this link.
- The photo contains features that look like a horse to the model.
If incorrect predictions are a concern, retrain your model on images that resemble your target pictures. You can also use image augmentation (covered in the specialization) so the model sees variations of the training images and learns to generalize better, as sketched below.
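Here is a minimal augmentation sketch using Keras' `ImageDataGenerator`, which is what the specialization uses. The directory path, parameter values, and 300x300 input size are placeholders, not from your setup, so adjust them to match your own training code:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical training directory; substitute your own dataset path.
TRAIN_DIR = "horse-or-human/train"

# Rescale pixel values and apply random transformations so the model
# sees many variations of each training image.
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=40,       # random rotations up to 40 degrees
    width_shift_range=0.2,   # random horizontal shifts
    height_shift_range=0.2,  # random vertical shifts
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode="nearest",     # fill pixels exposed by the transforms
)

train_generator = train_datagen.flow_from_directory(
    TRAIN_DIR,
    target_size=(300, 300),  # match whatever input size your model expects
    batch_size=32,
    class_mode="binary",     # two classes: horse vs. human
)

# model.fit(train_generator, epochs=15)  # then train as usual on the augmented stream
```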
There's always a small possibility that you are misinterpreting the predicted class or have a bug in your code, so check that as well.
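A common version of this bug is mixing up which class corresponds to which sigmoid output, or forgetting to rescale the test image the way the training images were rescaled. A quick check might look like the sketch below; the model file name, photo file name, and 300x300 size are assumptions, so swap in your own:

```python
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image

# Hypothetical paths; substitute your own saved model and test photo.
model = load_model("horse_or_human.h5")
img = image.load_img("my_friend.jpg", target_size=(300, 300))

x = image.img_to_array(img) / 255.0  # apply the SAME rescaling used during training
x = np.expand_dims(x, axis=0)        # the model expects a batch dimension

pred = model.predict(x)[0][0]        # single sigmoid output in [0, 1]

# flow_from_directory assigns class indices alphabetically, so with folders
# named "horses" and "humans" you typically get 0 -> horse, 1 -> human.
# Confirm the mapping with train_generator.class_indices before trusting it.
print("human" if pred > 0.5 else "horse", f"(score={pred:.3f})")
```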