Sent you the code via DM. Didn’t want to post the exercise solution here.
Also, I verified that the data is the same by printing out the raw values of the “images” and “labels” NumPy arrays along with their dimensions. The two environments have exactly the same images/labels arrays.
Yes, that did it.
Thank you for looking into this.
Could you please point me to any literature that explains why we are adding an extra dimension?
The exercise simply asks us to add a dimension, and I agree that I overlooked the fact that we should not add it to the labels.
Glad it helped.
Well, if you are talking about the dimension you were already adding: Conv2D requires each input sample to have 3 dimensions, but those 3 dims are the dims of a single image, like the RGB dims, so height x width x channels. What you start from, in this case, is samples x height x width, and the sample axis (the number of examples) is not part of a single image. So each image was really 2-dimensional, and you needed to transform (28, 28) into (28, 28, 1). The labels, of course, have 1 dimension and should stay that way. I always like going to the source when I have doubts, meaning tensorflow.org, in this case: tf.keras.layers.Conv2D | TensorFlow Core v2.7.0
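In code, the reshape is a one-liner with np.expand_dims. This is just a minimal sketch assuming grayscale MNIST-style data (the array shapes here are illustrative, not taken from your exercise); the point is that only the images get the extra channel axis, never the labels:

```python
import numpy as np

# Hypothetical grayscale data: 60000 images of 28x28 pixels (illustrative shapes)
images = np.zeros((60000, 28, 28), dtype=np.float32)
labels = np.zeros((60000,), dtype=np.int64)

# Conv2D expects each sample as (height, width, channels), so add a
# channel axis at the end: (60000, 28, 28) -> (60000, 28, 28, 1)
images = np.expand_dims(images, axis=-1)

# The labels stay 1-dimensional: one integer class per sample
print(images.shape)  # (60000, 28, 28, 1)
print(labels.shape)  # (60000,)
```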
Good luck with your course.