The first picture shows my code to compute the cost function, and the second one shows the error that is displayed. The value of my cost function is close to the expected answer, but not exact. I couldn't find the bug in my code; can someone lend me a hand?

The function tf.reduce_mean already computes the mean of the tensor elements, so you don't need to do it explicitly in your code. Also be aware of the from_logits argument needed in tf.keras.losses.categorical_crossentropy.

I'm going to remove the first image from your post to avoid breaking the Honor Code.

First, you should put your code between the # YOUR CODE STARTS HERE and # YOUR CODE ENDS HERE comments.

Also, I recommend you read the exercise instructions carefully regarding the shape of the tf.keras.losses.categorical_crossentropy arguments and the use of the tf.reduce_mean function, which helps you compute the mean over all the examples.
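To illustrate the shape contract without showing the graded solution, here is a minimal NumPy sketch (the labels and logits are made up for demonstration): the loss function takes both arguments shaped (num_examples, num_classes), produces one loss per example, and the mean over examples is a scalar, which is the role tf.reduce_mean plays.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax along the class axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

num_examples, num_classes = 4, 3
rng = np.random.default_rng(0)

# Both arguments shaped (num_examples, num_classes): one-hot labels
# and raw logits (made-up values, purely for the shape check)
labels = np.eye(num_classes)[rng.integers(0, num_classes, size=num_examples)]
logits = rng.normal(size=(num_examples, num_classes))

# Softmax cross-entropy yields ONE loss per example: shape (num_examples,)
per_example = -np.sum(labels * np.log(softmax(logits)), axis=-1)

# The mean over all examples is a scalar -- what tf.reduce_mean computes
cost = per_example.mean()

print(per_example.shape, cost)
```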

Hope that helps.

P.S. I removed the image from your post, since showing your code is not allowed.

I also tried tf.reduce_mean(tf.keras.metrics.categorical_crossentropy(...)), because the link given in the instructions shows that function. But neither of them solves my problem.

It looks like this compute_cost function is prone to errors. I have already reshaped the logits and labels to (number of examples, num_classes), passed them in the correct order to tf.keras.metrics.categorical_crossentropy(), and wrapped that call in tf.math.reduce_mean(), but I still have an error. My output is tf.Tensor(0.8071431, shape=(), dtype=float32) and the expected value is tf.Tensor(0.810287, shape=(), dtype=float32).
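For anyone else hitting a similarly close-but-wrong cost: one common cause (an assumption here, since the original code is not visible) is leaving from_logits at its default of False while passing raw logits. In that case Keras rescales each row of the predictions to sum to 1 as if they were probabilities, which gives a value near, but not equal to, the true softmax cross-entropy. A small NumPy sketch of the difference, with made-up positive logits:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax along the class axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Made-up positive logits and one-hot labels, purely for illustration
logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.3]])
labels = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])

# What from_logits=True computes: proper softmax cross-entropy
cost_logits = -np.mean(np.sum(labels * np.log(softmax(logits)), axis=-1))

# What from_logits=False effectively does with raw (positive) logits:
# rescale each row to sum to 1 and treat the result as probabilities
pseudo_probs = logits / logits.sum(axis=-1, keepdims=True)
cost_probs = -np.mean(np.sum(labels * np.log(pseudo_probs), axis=-1))

print(cost_logits, cost_probs)  # different values for the same inputs
```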