I am wondering how to use the trained MobileNet model to predict on my own images.
I tried it several ways, but all gave poor results, so I'm posting my question here.
#1 Use keras.utils load_img and img_to_array to load a custom image.
Yet the prediction score is greater than 0.5, even though the image should not be recognized as an alpaca.
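A minimal sketch of this first approach (the file path and the 160×160 target size are assumptions based on the typical setup for this model):

```python
# Hypothetical sketch: load a custom image with keras.utils, matching the
# input shape the model was trained on (target size is an assumption).
import numpy as np
import tensorflow as tf

def load_custom_image(path, target_size=(160, 160)):
    # load_img opens and resizes the file; img_to_array gives a float32 (H, W, 3) array
    img = tf.keras.utils.load_img(path, target_size=target_size)
    arr = tf.keras.utils.img_to_array(img)
    # add a batch dimension so the shape becomes (1, H, W, 3)
    return np.expand_dims(arr, axis=0)
```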
#2 Use numpy.array and Image.open, and test on a non-alpaca image.
The score is also greater than 0.5.
Ironically, if I use the polished validation dataset, the predictions are fine.
Since the last layer is a one-neuron dense layer with the default linear activation, how does the model measure accuracy while it is fitting? By how close the final output is to 1?
It would be much easier to comprehend if the final layer were a softmax layer.
Please look at the loss. It has a from_logits argument set to True. This helps convert the linear output to a probability.
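The logit-to-probability step can be sketched with plain numpy (the example values are illustrative only): with from_logits=True the loss applies the sigmoid internally, so to read the model's raw output as a probability you apply the sigmoid yourself.

```python
# Sketch of what from_logits=True implies: the 1-neuron layer emits a raw
# linear score (a logit); sigmoid maps it into (0, 1) as a probability.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

logits = np.array([-2.0, 0.0, 3.0])   # illustrative raw linear outputs
probs = sigmoid(logits)               # probabilities in (0, 1)
preds = (probs > 0.5).astype(int)     # threshold at 0.5 -> class labels
```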
You are almost correct about the last layer. Since there is only 1 neuron in the output layer and this is a binary classification problem, an output activation of sigmoid is sufficient.
Doesn't image_dataset_from_directory in TF by default assign label 0 to the first class and label 1 to the second class (etc.) based on the alphanumeric order of the subdirectory names? If so, then the not_alpaca ground-truth label is 1, and so is your custom image prediction. But it seems like you think it isn't predicting well. Am I missing something?
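The labeling convention described above can be sketched without touching the filesystem (the folder names are assumed to match the dataset's subdirectories): class indices follow the alphanumerically sorted directory names.

```python
# Sketch of how image_dataset_from_directory assigns integer labels:
# class names are the subdirectory names in alphanumeric order.
subdirs = ["not_alpaca", "alpaca"]        # assumed dataset folder names
class_names = sorted(subdirs)             # alphanumeric order
labels = {name: i for i, name in enumerate(class_names)}
# "alpaca" -> 0, "not_alpaca" -> 1, so a sigmoid score above 0.5
# is read as "not_alpaca", not as "alpaca"
```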
The score of an alpaca image is almost the same.
Thanks for pointing out the usage of from_logits. Now I understand the fitting process. What remains is how to mimic the validation dataset with my custom image.
The doubt is now resolved. When using numpy.array and Image.open, preprocess_input should not be applied, and the load_img function works correctly. Thanks to @ai_curious for the answer about the perception of classes, and thanks to @balaji.ambresh for the clarification about softmax.
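The corrected pipeline can be sketched as follows. This is a hypothetical illustration, not the notebook's exact code: it assumes the saved model already applies MobileNetV2's preprocess_input internally (which is why applying it again to the custom image was the bug), so raw pixel values are fed in and only a sigmoid is applied to the output logit.

```python
# Hedged sketch of the resolved prediction pipeline (function name, path,
# and target size are assumptions): feed raw pixels, do NOT call
# preprocess_input again, and apply sigmoid to the single output logit.
import numpy as np
import tensorflow as tf
from PIL import Image

def predict_alpaca(model, path, target_size=(160, 160)):
    img = Image.open(path).convert("RGB").resize(target_size)
    batch = np.expand_dims(np.asarray(img, dtype=np.float32), axis=0)
    logit = model.predict(batch)            # raw linear output, shape (1, 1)
    prob = tf.math.sigmoid(logit).numpy()   # probability of class 1 ("not_alpaca")
    return float(prob[0, 0])
```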