Course 2, week 3 compute_total_loss

When I run the test, I get labels of shape (4, 2) (and they are one-hot) and logits of shape (6, 2). These are the two tensors:
[[0. 1.]
[0. 0.]
[0. 0.]
[0. 0.]], shape=(4, 2), dtype=float32)

[[ 2.4048107 5.0334096 ]
[-0.7921977 -4.1523376 ]
[ 0.9447198 -0.46802214]
[ 1.158121 3.9810789 ]
[ 4.768706 2.3220146 ]
[ 6.1481323 3.909829 ]], shape=(6, 2), dtype=float32)

The shapes do not match, so my tf.keras.losses.CategoricalCrossentropy call fails. How can they mismatch like this? Any hints? Thank you so much!
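For what it's worth, here is a minimal repro of the failure. I've replaced the values with zeros since only the shapes matter, and I'm assuming from_logits=True as in the assignment:

```python
import tensorflow as tf

# Shapes copied from the printed tensors above; values replaced with zeros.
labels = tf.zeros((4, 2), dtype=tf.float32)   # should be (6, 2) to match
logits = tf.zeros((6, 2), dtype=tf.float32)

loss_fn = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
try:
    loss_fn(labels, logits)
except Exception as e:
    # First dimensions 4 vs 6 cannot broadcast, so the loss raises an error
    print(type(e).__name__)
```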

Most likely that means you are “hard-coding” the dimension to 4 somewhere, either in that function or in the one_hot_matrix function. The latter is more likely, since compute_total_loss doesn’t change the dimensions of the labels and logits arguments, and the test case here is generated by calling one_hot_matrix. It turns out you can pass the test case for one_hot_matrix even with the depth hard-coded to 4.
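To illustrate what hard-coding does (a sketch with made-up class indices, not the assignment’s actual test data): if tf.one_hot is called with a fixed depth=4 when the data has 6 classes, any index of 4 or 5 silently becomes an all-zero row, and the output has the wrong number of classes:

```python
import tensorflow as tf

labels = tf.constant([1, 0, 4, 5])        # data actually has 6 classes (0..5)

hard_coded = tf.one_hot(labels, depth=4)  # depth wrongly fixed at 4
correct = tf.one_hot(labels, depth=6)     # depth passed through from the caller

print(hard_coded.shape)  # (4, 4) -- indices 4 and 5 become all-zero rows
print(correct.shape)     # (4, 6)
```

This is why the one_hot_matrix test can still pass with a hard-coded depth, while the downstream shape check fails.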

Also notice that the labels tensor is clearly wrong on its own: its first column is not a one-hot vector.
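A quick sanity check along those lines (assuming this assignment’s convention that each column of the labels tensor is one example’s one-hot vector): every column should sum to exactly 1.

```python
import tensorflow as tf

# The labels tensor printed above
labels = tf.constant([[0., 1.],
                      [0., 0.],
                      [0., 0.],
                      [0., 0.]])

col_sums = tf.reduce_sum(labels, axis=0)
print(col_sums.numpy())  # [0. 1.] -- the first column sums to 0, so it is not one-hot
```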

Yes, I found the problem. Thanks!
I had to check the previous functions (even though they passed their tests), not just the current one.

It’s a good point that the tests for the previous function should catch that error. I’ll file an enhancement request about that.