Hi there,
I am unable to find the bug in my code for the “convolution_model” function assignment.
My second ReLU layer is outputting the shape (1, None, 8, 8, 16) instead of (None, 8, 8, 16), which causes the tests to fail because the shape is wrong.
I have read the docs and tried adjusting various parameters of the ReLU layer and the layer before it, but I have been unable to solve it. The definition of this layer is A2 = tf.keras.layers.ReLU()(Z2).
It is confusing because, according to the docs, the output shape of a ReLU layer is supposed to match its input shape, and the input to the ReLU (i.e. the output of the previous layer) has the correct shape of (None, 8, 8, 16).
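Just to illustrate what I mean about ReLU preserving shape, here is a quick standalone sanity-check sketch (not the assignment code; the input shape is just the shape I expect at that point in the model):

```python
import tensorflow as tf

x = tf.keras.Input(shape=(8, 8, 16))   # stand-in for the expected Conv2D output shape
y = tf.keras.layers.ReLU()(x)
print(y.shape)                          # (None, 8, 8, 16) -- same as the input, as the docs describe
```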
A potential clue as to what the problem might be: although the screenshot below shows the correct shape for its output, the object returned from the previous Conv2D layer (the object 'Z2') is for some reason a tuple and so has no ".shape" property, whereas in all the documentation for Conv2D layers (and in the previous Conv2D layer in my own code) the output has a shape property, i.e. it should not be a tuple. So perhaps the problem lies with that prior layer, as I cannot see a problem with the ReLU layer itself. I cannot understand why the Conv2D output is in that format, and I can't find anything in the docs that would explain it, so I feel I am missing something. This layer is specified by Z2 = tf.keras.layers.Conv2D(filters = 16, kernel_size = (2, 2), strides = 1, padding = "SAME")(P1),
All other layer shapes are correct and the tests pass for the previous layers (including the Conv2D layer in question).
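To show roughly how I have been checking the intermediate outputs, here is a minimal sketch of the kind of inspection I ran (the input shape and the earlier layers' parameters are just placeholders to make this runnable on its own, not necessarily what the assignment template uses):

```python
import tensorflow as tf

# Stand-in input; in the real notebook this comes from the assignment template
input_img = tf.keras.Input(shape=(64, 64, 3))

# Placeholder earlier layers, just so P1 exists with a plausible shape
Z1 = tf.keras.layers.Conv2D(filters=8, kernel_size=(4, 4), strides=1, padding="SAME")(input_img)
A1 = tf.keras.layers.ReLU()(Z1)
P1 = tf.keras.layers.MaxPool2D(pool_size=(8, 8), strides=8, padding="SAME")(A1)

# The two layers discussed above
Z2 = tf.keras.layers.Conv2D(filters=16, kernel_size=(2, 2), strides=1, padding="SAME")(P1)
A2 = tf.keras.layers.ReLU()(Z2)

# Print the type and shape (if any) of each intermediate output
for name, t in [("P1", P1), ("Z2", Z2), ("A2", A2)]:
    print(name, type(t), getattr(t, "shape", "no .shape attribute"))
```

In my own notebook it is this kind of check that shows Z2 coming back as a tuple instead of a tensor with a .shape property.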
Many thanks in advance for any help.
Edit: code snippets removed.
PS: I have attached the two errors I get below: