Error in C2W2 Lab 1

I was going through the lab that classifies cats and dogs. When I got to the bottom, I saw that I could upload images to test, and found that all of the images would be classified as a cat, even images of dogs. However, I found that if I commented out this line: image = rescale_layer(image), images of dogs would be labeled correctly as dogs and images of cats as cats. I wonder what is going on. Could anyone try it out and see? I tried to recreate the lab using my own dataset of human and robot images and ran into the same problem: if I normalize the images before predicting, they are all classified as human even when I give the model robot images, but if I don't, they are classified correctly.
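
Here is roughly the prediction code I was running (a sketch from memory; the image path is a placeholder, and rescale_layer is the notebook's tf.keras.layers.Rescaling(1./255)):

import tensorflow as tf

rescale_layer = tf.keras.layers.Rescaling(1./255)

image = tf.keras.utils.load_img('some_test_image.jpg', target_size=(150, 150))
image = tf.keras.utils.img_to_array(image)
image = rescale_layer(image)  # commenting out this line fixes the predictions
image = tf.expand_dims(image, 0)
prediction = model.predict(image)  # model is the trained model from the lab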

Just ran inference on C2_W1_Lab_1_cats_vs_dogs.ipynb with a training image of a dog (dog.1.jpg) and observed this output: dog.1.jpg is a dog.

Also, there is no expression like the one you quoted, image = rescale_layer(image), in the notebook.

There is, however, a tf.keras.layers.Rescaling(1./255) layer within the model. Please follow these steps to refresh your workspace and try again. If the classifications are still wrong, click my name and message me your notebook and a few sample images as attachments.
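
The relevant part of the model looks something like this (a sketch; the exact layer stack in the notebook may differ, but the Rescaling layer sits at the front):

import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.Input(shape=(150, 150, 3)),
    tf.keras.layers.Rescaling(1./255),  # normalization happens inside the model
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation='sigmoid'),  # binary output
])

So any rescaling you apply before calling model.predict happens in addition to this layer.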

Try this:

import numpy as np
import tensorflow as tf

# Load a validation image at the size the model was trained on.
img = tf.keras.utils.load_img(
    'cats_and_dogs_filtered/validation/cats/cat.2002.jpg', target_size=(150, 150)
)

# train_dataset and model are defined earlier in the notebook.
class_names = train_dataset.class_names
print(class_names[0])
print(class_names[1])

# Convert to an array and add a batch dimension. No manual rescaling here:
# the Rescaling layer inside the model handles normalization.
img_array = tf.keras.utils.img_to_array(img)
img_array = tf.expand_dims(img_array, 0)  # create a batch of size 1

prediction = model.predict(img_array)
score = tf.squeeze(prediction)  # sigmoid output: probability of class 1

# Report the probability of the predicted class, not just of class 1.
if np.round(score):
    probability = score
else:
    probability = 1 - score
print(
    "This image most likely belongs to {} with probability = {:.2f}"
    .format(class_names[int(np.round(score))], probability)
)
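
Note that the image goes into model.predict as raw 0 to 255 pixel values: the Rescaling layer inside the model takes care of normalization. Rounding the sigmoid output gives the predicted class index, which is why class_names[int(np.round(score))] picks the right label.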

Thanks, it worked. So when I send in an image for the model to predict on, do I need to normalize it by dividing all of the pixel values by 255, or does the model already do that, since I have a rescaling layer in my model architecture?

Right: if rescaling is done inside the model, there's no need to do it again.

Your problem was with the interpretation of the model output. Since the output layer's activation function is sigmoid, its output can already be interpreted as a probability. There's no need to apply sigmoid to it again.
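
A quick numeric sketch of why that matters: sigmoid of anything in [0, 1] lands in roughly [0.5, 0.73], so if you apply sigmoid to a probability and then round, every image comes out as class 1:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

p = 0.05                     # model output: confidently class 0
print(np.round(p))           # 0.0 -> correct class
print(sigmoid(p))            # ~0.51, squashed toward 0.5
print(np.round(sigmoid(p)))  # 1.0 -> wrong class, and this happens for every input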

The point is that you just need to be consistent in the format of the data you feed to the model. When you run in inference mode, the images need to be in the same format as they were during training, meaning they are all either scaled or all unscaled. If you train on scaled images (/255) and then try to run inference on raw unscaled images, it probably won't work very well, right?
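
In code, either of these setups is consistent (a sketch; the layer stack is just an illustration):

import tensorflow as tf

# Option A: rescaling inside the model -> feed raw 0-255 pixels at inference.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(150, 150, 3)),
    tf.keras.layers.Rescaling(1./255),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

raw_batch = tf.random.uniform((1, 150, 150, 3), 0, 255)  # stand-in image batch
model.predict(raw_batch)            # OK: the model rescales internally
# model.predict(raw_batch / 255.0)  # scales twice -> inputs near 0, poor output

# Option B: rescale in the input pipeline instead, e.g.
#   train_ds = train_ds.map(lambda x, y: (x / 255.0, y))
# and then also divide by 255 at inference: model.predict(raw_batch / 255.0)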

It is very common to scale images by dividing by 255 before training, because you typically get much better convergence that way. But it also means that it's only that type of scaled image your model “understands”, right? The good news is that most imaging libraries will render scaled images correctly anyway.
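
For example, matplotlib displays both representations the same way (a small sketch):

import numpy as np
import matplotlib.pyplot as plt

raw = np.random.randint(0, 256, size=(150, 150, 3), dtype=np.uint8)
scaled = raw / 255.0   # floats in [0, 1]

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(raw)        # uint8 interpreted on the 0-255 range
ax2.imshow(scaled)     # floats interpreted on the 0-1 range; same picture
plt.show()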