Course 4, Week 1, Programming Ex 2: ReLU function adds one extra dimension!


I’m implementing the convolutional_model() function and I get this error when I try to use MaxPooling:
Input 0 of layer max_pooling2d_15 is incompatible with the layer: expected ndim=4, found ndim=5. Full shape received: [1, None, 64, 64, 8]

The error comes from the ReLU function applied to Z1. The shape of Z1 is (<tf.Tensor 'conv2d_16/BiasAdd:0' shape=(None, 64, 64, 8) dtype=float32>,), but after tfl.ReLU()(Z1) it becomes A1.shape = (1, None, 64, 64, 8). So the shape changes from ndim = 4 after the conv layer to ndim = 5 after the ReLU layer. I have tried several approaches (including tf.keras.layers.Reshape) but couldn’t solve the problem. I would appreciate any help with this error.


ReLU is an element-wise activation function, so it doesn’t change the shape of its input.

It seems the shape passed for input_img is incorrect. You don’t need to include the batch dimension when specifying the shape.
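For reference, here is a minimal sketch of how Keras handles this: you pass only the per-sample shape to tf.keras.Input, and Keras prepends the batch dimension (None) itself.

```python
import tensorflow as tf

# Pass only the per-sample shape; Keras prepends the batch dimension itself.
input_img = tf.keras.Input(shape=(64, 64, 3))
print(input_img.shape)  # (None, 64, 64, 3)
```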

The shape of the input image is passed into the function as a tuple, (64, 64, 3). It becomes (None, 64, 64, 8) after the first conv layer, but the ReLU activation then produces a tensor of shape (1, None, 64, 64, 8), which is not desired and cannot be passed to the next layers. I can’t find a way to keep the shape it had after the first conv layer.

It seems one extra dimension is added every time the ReLU activation function is used.
[removed code]

Please click my name and message your notebook as an attachment.


There were a bunch of trailing commas in the code.

Thanks for sharing what you found there. Yes, you have to be really careful when you switch from the “Sequential API” where the commas are required, to the “Functional API” where they can have very nasty side effects!
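To see why a stray trailing comma produces that extra leading dimension, here is a minimal sketch using NumPy as a stand-in for a TF tensor (the mechanism is analogous when Keras converts the accidental tuple back into a tensor):

```python
import numpy as np

z1 = np.zeros((2, 64, 64, 8))  # stand-in for the conv-layer output (batch of 2)

a1 = z1,                       # trailing comma wraps the array in a 1-tuple
a1 = np.asarray(a1)            # converting that tuple back adds a leading axis

print(z1.shape)  # (2, 64, 64, 8)    -- ndim = 4
print(a1.shape)  # (1, 2, 64, 64, 8) -- ndim = 5, one extra dimension
```

This matches the symptom in the original post: the (1, None, 64, 64, 8) shape is the 4-D conv output wrapped in a 1-tuple by a trailing comma and then stacked back into a tensor with a leading axis of size 1.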

They don’t give us much instruction on these APIs in the course material. It’s worth having a look at this thread from ai_curious, which gives a more detailed explanation of how to use these constructs.