C2W4_Assignment Grading error

Hi,

Before submitting the C2W4 Assignment, I ran my model without any error messages. However, after I submitted the assignment, I got the following grader output:

Failed test case: your model could not be used for inference. Details shown in 'got' value below:.
Expected:
no exceptions,
but got:
in user code:

    File "/opt/conda/lib/python3.7/site-packages/keras/engine/training.py", line 1366, in test_function *
        return step_function(self, iterator)
    File "/opt/conda/lib/python3.7/site-packages/keras/engine/training.py", line 1356, in step_function **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/opt/conda/lib/python3.7/site-packages/keras/engine/training.py", line 1349, in run_step **
        outputs = model.test_step(data)
    File "/opt/conda/lib/python3.7/site-packages/keras/engine/training.py", line 1306, in test_step
        y, y_pred, sample_weight, regularization_losses=self.losses)
    File "/opt/conda/lib/python3.7/site-packages/keras/engine/compile_utils.py", line 201, in __call__
        loss_value = loss_obj(y_t, y_p, sample_weight=sw)
    File "/opt/conda/lib/python3.7/site-packages/keras/losses.py", line 141, in __call__
        losses = call_fn(y_true, y_pred)
    File "/opt/conda/lib/python3.7/site-packages/keras/losses.py", line 245, in call **
        return ag_fn(y_true, y_pred, **self._fn_kwargs)
    File "/opt/conda/lib/python3.7/site-packages/keras/losses.py", line 1665, in categorical_crossentropy
        y_true, y_pred, from_logits=from_logits, axis=axis)
    File "/opt/conda/lib/python3.7/site-packages/keras/backend.py", line 4994, in categorical_crossentropy
        target.shape.assert_is_compatible_with(output.shape)

    ValueError: Shapes (None, 1) and (None, 26) are incompatible

It looked like a coding error, but I ran my model again and again and no error messages popped up. I lost 11% because of this. Could anyone explain what happened?

Thanks.

From the error output I understand that there is a shape mismatch between the predictions and the labels, so when the loss is computed it throws this error.
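
For instance, here is a minimal sketch that reproduces the same error (the toy model and random data are made up purely for illustration, they are not the assignment's solution):

import numpy as np
import tensorflow as tf

# Toy model with a 26-way softmax head, like the assignment's
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(26, activation='softmax'),
])
# Non-sparse loss: expects one-hot labels of shape (None, 26)
model.compile(optimizer='adam', loss='categorical_crossentropy')

x = np.random.rand(4, 28, 28, 1).astype('float32')
y = np.array([[0], [1], [2], [3]])  # integer labels of shape (4, 1), not one-hot

# Raises: ValueError: Shapes (None, 1) and (None, 26) are incompatible
model.evaluate(x, y)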

Thanks for your quick response.

If we don't get any error messages, how can we find out about this before submitting the assignment?

I ran the model separately, added model.evaluate(), and then I also received error messages related to shapes. Could you suggest how we could fix this?

Thanks.

You could check the shape of the predictions, i.e. print their shape. If it is not the same as the shape of the labels, then there is a problem and you should trace back to where that problem occurs, probably in an earlier stage of the code. The print function is always very useful for debugging.
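
For example, a quick sanity check before submitting (train_generator here is a placeholder for whatever generator or arrays your notebook actually uses):

images, labels = next(train_generator)
preds = model.predict(images)
print('labels shape:', labels.shape)  # e.g. (32,) or (32, 1) for integer labels
print('preds shape:', preds.shape)    # e.g. (32, 26) for a 26-way softmax
# Integer labels pair with 'sparse_categorical_crossentropy';
# one-hot labels of shape (32, 26) pair with 'categorical_crossentropy'.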

Similar issue, but model.predict works in Colab.
The model trains and reaches an accuracy of 97.8% and a val_accuracy of 99.4%.

FWIW, pointing out that model.predict does make inferences in Colab.
I added the upload code from other course labs to upload images to Colab.
The screenshot shows the prediction vectors from the uploaded images.

Actually, these predictions are not very good: S and T are predicted the same, and U and V also get the same vector.

Clearly, uploading a JPEG, converting it to grayscale, and turning it into an array can be used for model prediction. Not sure about the Grader report.

The grader reported:

Failed test case: your model could not be used for inference. Details shown in 'got' value below:.
Expected:
no exceptions,
but got:
in user code:
...
    File "/opt/conda/lib/python3.7/site-packages/keras/backend.py", line 4994, in categorical_crossentropy
        target.shape.assert_is_compatible_with(output.shape)

    ValueError: Shapes (None, 1) and (None, 24) are incompatible

Here’s feedback based on your notebook:

As far as the assignment is concerned, don't one-hot encode the labels. Use the sparse version of the loss function to match the expected outputs.

The maximum label value is 24, so there are 25 classes (endpoints inclusive). I recommend leaving the number of classes at 26. Don't break your head over a single wasted label.

np.unique(training_labels)
array([ 0.,  1.,  2.,  3.,  4.,  5.,  6.,  7.,  8., 10., 11., 12., 13.,
       14., 15., 16., 17., 18., 19., 20., 21., 22., 23., 24.])
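
Putting the two pieces of advice together, a minimal sketch of the compile step (the architecture here is just an illustration, not the official solution):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(26, activation='softmax'),  # 26 units; index 9 (J) simply never occurs
])
model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',  # labels stay as plain integers 0-24
    metrics=['accuracy'],
)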

Here’s a snippet you can use to test images:

import numpy as np
from google.colab import files
from keras.preprocessing import image
import string

uploaded = files.upload()

for fn in uploaded.keys():
    # Load the image at the model's input size and convert it to grayscale
    img = image.load_img(fn, target_size=(28, 28))
    img = img.convert('L')

    # Normalize to [0, 1] and add a batch dimension: shape (1, 28, 28, 1)
    x = image.img_to_array(img) / 255.0
    x = np.expand_dims(x, axis=0)

    # Predict and map the argmax index to a letter
    classes = model.predict(x)
    print(f'Predicted class is {string.ascii_lowercase[np.argmax(classes)]}')
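
Note that img.convert('L') works because load_img returns a PIL image; in recent Keras versions, passing color_mode='grayscale' to image.load_img achieves the same in one step.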

Use the sparse loss. For reasons unexplained in the course, the non-sparse (categorical) version of the loss doesn't work. I was losing my head over this as well.

I have the same issue with grading for C2W4_Assignment.
I prepared my programming assignment in Colab, and both submissions work there. The exception reported by the grader is generated in Colab only if validation_datagen.flow is given a plain array as the y parameter (y=validation_labels). As stated in tf.keras.preprocessing.image.ImageDataGenerator | TensorFlow v2.11.0, the y parameter must be passed as y=tf.keras.utils.to_categorical(validation_labels, 26). Please confirm!

I think there are two ways to handle multi-class classification:

  1. One-hot encode the labels, e.g. y=tf.keras.utils.to_categorical(validation_labels, 26), and use loss='categorical_crossentropy' when compiling the model, or
  2. Keep the labels as they are and use loss='sparse_categorical_crossentropy' when compiling the model.

In Colab both will work, but I assume the unit tests in the grader are written in a way that only accepts the 2nd option; see the sketch below.
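
A minimal sketch contrasting the two options (training_images and training_labels are placeholders for the arrays loaded in the notebook, assumed to have shapes (N, 28, 28, 1) and (N,)):

import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1.0 / 255)

# Option 1: one-hot labels, paired with loss='categorical_crossentropy'
one_hot_flow = datagen.flow(
    training_images,
    y=tf.keras.utils.to_categorical(training_labels, 26),
    batch_size=32,
)

# Option 2: integer labels, paired with loss='sparse_categorical_crossentropy'
# (the combination the grader's tests appear to expect)
sparse_flow = datagen.flow(training_images, y=training_labels, batch_size=32)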