C4 W1 A2: MaxPooling2D error, understanding dimensions, where am I getting a 5D tensor?

I am trying to understand the following error in the Week 1 Functional API Exercise 2 Convolutional model:

Input 0 of layer max_pooling2d_36 is incompatible with the layer: expected ndim=4, found ndim=5. Full shape received: [1, None, 64, 64, 8]

I am unclear why I am getting this dimension error. I suspect it has to do with the first or second dimension in the shape received: [1, None, 64, 64, 8]. But it is a mystery to me where this extra dimension is coming from. Can you please help?

Note: the error occurs on the line for P1, and the original input to the model has shape=(64, 64, 3).

My code:
{moderator edit: code removed - not allowed on the forum}

I don't expect a full answer (although I wouldn't mind a hint), but can you at least help me understand why this error is being thrown?

I tried tracing out the dimensions of the various stages. When I use, for example, print(input_img.shape), I get the following:

input img shape: (None, 64, 64, 3)
Z1 shape: (None, 64, 64, 8)
A1 shape: (None, 64, 64, 8)
P1 shape: (None, 8, 8, 8)

(I get an error when doing print(Z2.shape) which says 'tuple' object has no attribute 'shape', so I stopped there with the shapes.)

This is already confusing to me, because the error indicates a 5D tensor is being passed somewhere along the line, but when I run it step-by-step, everything stays 4D.

Please do not simply say "read the API". I have done that and did not find it helpful for understanding this error.
Excluding the Input call, I have three API calls by the time the error is thrown: Conv2D, ReLU, and MaxPool2D.
For one, the docs say that the input and output of Conv2D are 4D tensors, the input and output of ReLU have the same shape, and the input and output of MaxPool2D are 4D tensors.
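To double-check that, I built a tiny stand-alone chain (toy filter counts and sizes, not the exercise values) and the shapes do stay 4D all the way through:

import tensorflow as tf
import tensorflow.keras.layers as tfl

# Toy example only: small input and arbitrary hyperparameters
toy_input = tf.keras.Input(shape=(32, 32, 3))
z = tfl.Conv2D(filters=4, kernel_size=3, padding='same')(toy_input)
a = tfl.ReLU()(z)
p = tfl.MaxPool2D(pool_size=2, strides=2)(a)

print(toy_input.shape)  # (None, 32, 32, 3)
print(z.shape)          # (None, 32, 32, 4)
print(a.shape)          # (None, 32, 32, 4)
print(p.shape)          # (None, 16, 16, 4)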

Any help is appreciated.

The problem seems to be in the ReLU call. If I run the model so that the output is A1, and comment out P1 through F, then I get the following summary:

Layer (type)              Output Shape            Param #
input_53 (InputLayer)     [(None, 64, 64, 3)]     0
conv2d_61 (Conv2D)        (None, 64, 64, 8)       392
re_lu_47 (ReLU)           (1, None, 64, 64, 8)    0

So my question is, why is ReLU adding a dimension at the beginning?

The first thing I noticed is that you’re not supposed to post your code on the forum. That breaks the code of conduct.

Second, every line of your code ends in an unnecessary comma.

The commas are only used in the Sequential model, where you're creating a Python list of layers.
Here, you're not making a list.
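Roughly, the difference looks like this (generic toy layers, not the assignment's values):

import tensorflow as tf
import tensorflow.keras.layers as tfl

# Sequential model: the layers are items in a Python list,
# so the commas separate the list elements.
seq_model = tf.keras.Sequential([
    tfl.Conv2D(4, 3, padding='same', input_shape=(32, 32, 3)),
    tfl.ReLU(),
    tfl.MaxPool2D(2),
])

# Functional API: each line is a stand-alone assignment, so there is
# nothing for a trailing comma to separate -- it just creates a tuple.
x = tf.keras.Input(shape=(32, 32, 3))
z = tfl.Conv2D(4, 3, padding='same')(x)
a = tfl.ReLU()(z)
p = tfl.MaxPool2D(2)(a)
func_model = tf.keras.Model(inputs=x, outputs=p)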

All of the shapes in your original post are correct.

Update:
When I tried to run your code, I got a syntax error in Z1 for the closing quote around “same”.
So I changed all your double-quotes to single-quotes.

Then I was able to replicate your error for the “P1 = …” line.

Then I deleted all of your extra commas, and the error disappeared.
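If you're curious why a comma changes the dimension: the trailing comma turns the result of each line into a one-element tuple rather than a tensor. That's why print(Z2.shape) failed with 'tuple' object has no attribute 'shape', and when that tuple is handed to the next layer, Keras appears to pack it into a tensor with an extra leading dimension of size 1, which is where the [1, None, 64, 64, 8] in your error comes from. A minimal sketch (toy sizes, not the assignment's):

import tensorflow as tf
import tensorflow.keras.layers as tfl

x = tf.keras.Input(shape=(32, 32, 3))
z = tfl.Conv2D(4, 3, padding='same')(x)
a = tfl.ReLU()(z),   # trailing comma: a is now the tuple (KerasTensor,)

print(type(a))       # <class 'tuple'>
# print(a.shape)            # AttributeError: 'tuple' object has no attribute 'shape'
# p = tfl.MaxPool2D(2)(a)   # raises the "expected ndim=4, found ndim=5" error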

Thank you for your help. I am sorry for posting my code. I think it was the commas that were throwing the error, as you correctly saw. Strange that adding a comma changes the dimension; I'll chalk it up to the tuple behavior you described and dig into the details some other time.

Thank you!