Signs Prediction CNN

What's wrong with this code? I implemented it as per the guidelines, but it still throws this error:
"TypeError: __init__() missing 1 required positional argument: 'kernel_size'". I lack TF knowledge; it would help if you could include a session explaining its methods and parameter arguments for a smoother implementation of models in TF.
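That TypeError means the layer constructor was called without its `kernel_size` argument. Here is a minimal sketch (the filter counts and sizes are illustrative, not the assignment's values) showing the main `Conv2D` and `MaxPool2D` constructor arguments:

```python
import tensorflow as tf

# Conv2D requires kernel_size (positionally or by keyword);
# omitting it raises the TypeError quoted above.
conv = tf.keras.layers.Conv2D(
    filters=8,           # number of output channels
    kernel_size=(4, 4),  # spatial size of each filter
    strides=(1, 1),
    padding="same",      # "same" keeps the spatial dimensions
)
pool = tf.keras.layers.MaxPool2D(
    pool_size=(8, 8),    # window size; strides default to pool_size
    padding="same",
)

x = tf.zeros((1, 64, 64, 3))   # dummy batch: one 64x64 RGB image
print(conv(x).shape)           # (1, 64, 64, 8)
print(pool(conv(x)).shape)     # (1, 8, 8, 8)
```

Note that layers are objects: you first construct the layer with its hyperparameters, then call the resulting object on a tensor.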

In the second MaxPooling layer, you are missing the (A2) argument.

Please check the details of your code closely before you post on the Forum. It will save you a lot of time.

Please edit your post and remove the code. Posting assignment code violates the course Honor Code.


Thanks Mosh. I corrected that as well, but I got an error in the test: the test case expected a ReLU layer, but my model produced a TensorFlowOpLayer instead. How do I debug this error?
Model: "functional_5"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_4 (InputLayer)         [(None, 64, 64, 3)]      0
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 64, 64, 8)        392
_________________________________________________________________
re_lu_4 (ReLU)               (None, 64, 64, 8)        0
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 8, 8, 8)          0
_________________________________________________________________
conv2d_7 (Conv2D)            (None, 8, 8, 16)         528
_________________________________________________________________
tf_op_layer_Relu_2 (TensorFl [(None, 8, 8, 16)]       0
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 2, 2, 16)         0
_________________________________________________________________
flatten_3 (Flatten)          (None, 64)               0
_________________________________________________________________
dense_3 (Dense)              (None, 6)                390
=================================================================
Total params: 1,310
Trainable params: 1,310
Non-trainable params: 0
_________________________________________________________________

Test failed
Expected value

['ReLU', (None, 8, 8, 16), 0]

does not match the input value:

['TensorFlowOpLayer', [(None, 8, 8, 16)], 0]

AssertionError                            Traceback (most recent call last)
     15     ['Dense', (None, 6), 390, 'softmax']]
---> 17 comparator(summary(conv_model), output)

~/work/release/W1A2/ in comparator(learner, instructor)
     20     "\n\n does not match the input value: \n\n",
     21     colored(f"{a}", "red"))
---> 22     raise AssertionError("Error in test")
     23 print(colored("All tests passed!", "green"))

AssertionError: Error in test

I recommend you restart the kernel and run all of the cells in the notebook again.
Then inspect all of the output cells for any error messages.

If you get the same error, then there might be a mistake in one of the ReLU layers.


Hi Tom,

I would like to know the difference between the implementation of a Conv2D layer with ReLU built in, as below,
and the implementation from our assignment where we apply a separate ReLU activation to the linear output. What is the takeaway from this way of implementing it?


The two examples you show are using two fundamentally different Keras "APIs". The first uses the "Sequential API" and the second uses the "Functional API". This was covered (although not in as much detail as one might wish) in the second exercise in Week 1 of ConvNets. When using the "Sequential API", you just give a list of layers (separated by commas) and you don't have to explicitly state the inputs and outputs: the input of each layer is implicitly the output of the previous layer.
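For example, a Sequential model along the lines of the first exercise might look like this (a sketch, not the assignment's exact architecture):

```python
import tensorflow as tf

# Sequential API: a plain list of layers; each layer's input is
# implicitly the previous layer's output, so no Input/Model wiring.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, kernel_size=4, padding="same",
                           activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPool2D(pool_size=8),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(6, activation="softmax"),
])
```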

With the Functional API, things are more flexible, but you have to explicitly show the inputs and outputs of each layer. The point of the Functional API is exactly that: you can express more flexible architectures that have things like “skip layers”.
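A sketch of why the Functional API is needed for such architectures: because each layer's inputs and outputs are named explicitly, you can route a tensor around a layer and add it back in later, which the Sequential API cannot express (shapes here are illustrative):

```python
import tensorflow as tf

# Functional API: inputs and outputs are explicit, which makes
# non-linear topologies such as skip connections possible.
inputs = tf.keras.Input(shape=(64, 64, 8))
x = tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.Add()([x, inputs])   # skip connection: add the input back in
outputs = tf.keras.layers.ReLU()(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
```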