Detected a horse instead of a human

Below is an example of a wrong detection. This is a picture of a human. The key feature is the collarbone. Horses don’t have a collarbone!
https://pixabay.com/photos/government-bunker-ahrweiler-759509/

The machine learning algorithm derives its own rules, and those rules are not necessarily the ones that matter for correct detection; it may erroneously neglect a key feature. We should have an option to point the model at key features instead of relying on the training procedure to derive them entirely on its own.

Epoch 15/15
8/8 [==============================] - 6s 696ms/step - loss: 0.0153 - accuracy: 0.9933
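A log line like this comes from a standard Keras `fit` call on the horses-or-humans dataset. Below is a minimal sketch of such a setup; the directory path, layer sizes, and optimizer are illustrative assumptions, not the tutorial's exact configuration:

```python
# Illustrative training sketch for a binary horses-vs-humans classifier.
# The dataset path and hyperparameters are assumptions for this example.
import tensorflow as tf

train_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "horse-or-human/",        # assumed path to the training images
    target_size=(300, 300),
    batch_size=128,
    class_mode="binary",      # horses vs. humans -> single sigmoid unit
)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(300, 300, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of "human"
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_gen, epochs=15)  # prints per-epoch loss/accuracy as above
```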

[0.01196291]
government-bunker-gfc13bc7d9_640.jpg is a horse
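For reference, a prediction like the one above is produced by the model's single sigmoid output, which the tutorial script thresholds at 0.5 (values near 0 map to "horse", values near 1 to "human"). Here is a minimal sketch of that step, assuming a trained `model` and the tutorial's 300x300 input size:

```python
# Sketch of the prediction/thresholding step; `model` is assumed to be
# the trained horses-vs-humans classifier from the sketch above.
import numpy as np
import tensorflow as tf

def classify(model, path, target_size=(300, 300)):
    img = tf.keras.utils.load_img(path, target_size=target_size)
    x = tf.keras.utils.img_to_array(img) / 255.0   # same rescaling as training
    x = np.expand_dims(x, axis=0)                  # batch of one image
    prob = model.predict(x)[0]
    print(prob)                                    # e.g. [0.01196291]
    label = "human" if prob[0] > 0.5 else "horse"
    print(f"{path} is a {label}")
    return prob

# classify(model, "government-bunker-gfc13bc7d9_640.jpg")
```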

The ML model outputs probabilities, and they are not always right. The essence of an ML model is that it learns from data, so it is not easy to steer its learning in the middle of training; doing so defeats the purpose of letting it learn from data, and it may not be advisable anyway, because, as with many systems, if you fasten it on one side, it loosens on the other.

In the TensorFlow: Advanced Techniques specialization there are a few techniques (such as class activation maps) that show which parts of an image the model uses for its prediction, so you can feed it images that help it learn the right features. A minimal Grad-CAM-style sketch of this idea follows below.
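This sketch assumes a convolutional Keras `model` and a hypothetical layer name for its last convolutional layer; it is not the specialization's exact notebook code, just the general technique of weighting feature maps by their gradients to highlight influential image regions:

```python
# Minimal Grad-CAM sketch: which regions of the image drive the prediction?
# `last_conv_layer_name` is a placeholder; use the actual layer name of
# your model's final convolutional layer.
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name):
    """Return a heatmap (values in [0, 1]) over the last conv layer's grid.

    image: a (1, H, W, 3) tensor already preprocessed for `model`.
    """
    # Model mapping the input to the last conv activations and the prediction.
    grad_model = tf.keras.models.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )

    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image)
        score = preds[:, 0]  # single sigmoid unit (human vs. horse)

    # Gradient of the score with respect to the conv feature maps.
    grads = tape.gradient(score, conv_out)
    # Average gradients spatially to get one importance weight per channel.
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum of feature maps, then ReLU and normalize to [0, 1].
    heatmap = tf.reduce_sum(conv_out[0] * weights, axis=-1)
    heatmap = tf.maximum(heatmap, 0) / (tf.reduce_max(heatmap) + 1e-8)
    return heatmap.numpy()
```

Overlaying this heatmap on the input image shows whether the model is looking at the collarbone area or at something irrelevant like the background, which is exactly the kind of check that would have flagged the misdetection above.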
