Human vs horse or human vs not-human?

The ungraded lab notebook isn’t really testing for horse or human; it’s testing for human vs not-human:

if classes[0] > 0.5:
    print(i + " is a human")
else:
    print(i + " is a horse")

Will training with horse images be any different from training with any other non-human images?

Is this really a 3-class problem (human, horse, or neither)? If so, wouldn't we really want three output neurons, a softmax activation function, and a sparse_categorical_crossentropy loss function, as in the fashion classifier?
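
For concreteness, here is a minimal sketch of that multi-class head in Keras; the input size and hidden layer are assumptions for illustration, not the lab's actual architecture:

import tensorflow as tf

# Hypothetical 3-class variant: 0 = horse, 1 = human, 2 = neither
three_class_model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(300, 300, 3)),  # assumed input size
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax')        # one neuron per class
])
three_class_model.compile(optimizer='adam',
                          loss='sparse_categorical_crossentropy',  # integer labels 0, 1, 2
                          metrics=['accuracy'])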

The notebook shows a binary classification problem, so a single output neuron is sufficient. Feeding the model an image that is neither human nor horse is not a good use of the model.
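
By contrast, here is a sketch of the binary head the notebook uses: a single sigmoid neuron trained with binary cross-entropy. The hidden layer and input size are again placeholders, not the lab's exact architecture:

import tensorflow as tf

# Binary head: output near 1 -> human, near 0 -> horse
binary_model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(300, 300, 3)),  # assumed input size
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
binary_model.compile(optimizer='adam',
                     loss='binary_crossentropy',
                     metrics=['accuracy'])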

@Patrick_Hennessy
This model is definitely a horse-or-human classifier. If you used it to classify an image of something else, like a truck, it would still give you an answer: horse or human. When we use this model to predict the label for a picture of a truck, we are asking

Is this group of pixels closer to the group of pixels that generally represent a horse, or closer to the group of pixels that generally represent a human?

The truck pixels probably aren’t close to either horse pixels or human pixels, but the model will still output a value that is closer to 0 or 1. Similarly, I could ask

Is the number 1155 closer to -2.4 or 1.8?

1155 isn’t close to either of them, but it’s certainly closer to 1.8.
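
To make that concrete, here is a hedged sketch of running the trained lab model on an unrelated image. The model path, the file name truck.jpg, the 300x300 input size, and the 1/255 rescaling are all assumptions about the notebook's setup:

import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image

# Assumed path to the trained horse-or-human model from the lab
model = tf.keras.models.load_model('horse_or_human.h5')

# 'truck.jpg' is a hypothetical out-of-distribution image
img = image.load_img('truck.jpg', target_size=(300, 300))
x = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)

classes = model.predict(x)
# The sigmoid output is still pushed toward 0 (horse) or 1 (human),
# even though the image is neither
print(classes[0])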

Neural networks always answer our questions, even when the answers aren't very good. This can cause problems. Several years ago, a police department (I think Portland's, but I may be misremembering) purchased a facial recognition classifier from Amazon. The classifier read surveillance video and returned mugshots (or maybe driver's license photos) of the closest matches to the person in the video. It also returned a confidence score for each prediction, and the engineers instructed the police to disregard any predictions below a certain confidence. The police ignored that instruction and treated the predicted faces as perfect matches to criminals in surveillance videos. They arrested and charged many innocent people and (I believe) ultimately lost a large civil rights lawsuit over the practice.

If you wanted to train a human vs. not-human classifier, you would need a group of human images (human = 1) and a group of varied non-human images: trucks, cats, tangerines, etc. (not-human = 0).
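
A minimal sketch of that data setup with Keras' ImageDataGenerator; the directory names and the 300x300 target size are assumptions:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Assumed layout:
#   data/human/      -> label 1
#   data/not_human/  -> label 0 (trucks, cats, tangerines, ...)
train_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    'data',
    classes=['not_human', 'human'],  # not_human -> 0, human -> 1
    target_size=(300, 300),
    class_mode='binary',
    batch_size=32)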