Deep Learning Course 4 - Week 1
Two-part issue:
- If I use tfl.ReLU()(Z1), I get an error saying ReLu is not an attribute of tfl.
- If I instead change it to tfl.Activation('relu')(Z1), I get output. However, it does not match the expected output, as shown below.
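For reference, here is a minimal sketch of the first conv block using the ReLU layer class, with hyperparameters inferred from the summary below (8 filters of size 4, then 8x8 pooling with stride 8); it assumes tfl aliases tensorflow.keras.layers. Note that in TF 2.x the class is spelled tfl.ReLU with a capital U, so an AttributeError about "ReLu" often points to a lowercase u in the call:

```python
import tensorflow as tf
import tensorflow.keras.layers as tfl

# Minimal sketch of the first block; filter/pool sizes are inferred
# from the model summary below, not confirmed by the assignment text.
input_img = tf.keras.Input(shape=(64, 64, 3))
Z1 = tfl.Conv2D(filters=8, kernel_size=4, padding='same')(input_img)
A1 = tfl.ReLU()(Z1)  # shows up as a 'ReLU' layer in model.summary()
P1 = tfl.MaxPool2D(pool_size=8, strides=8, padding='same')(A1)
```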
Model: "functional_3"
Layer (type)                    Output Shape           Param #
input_5 (InputLayer)            [(None, 64, 64, 3)]    0
conv2d_7 (Conv2D)               (None, 64, 64, 8)      392
activation_3 (Activation)       (None, 64, 64, 8)      0
max_pooling2d_4 (MaxPooling2D)  (None, 8, 8, 8)        0
conv2d_8 (Conv2D)               (None, 8, 8, 16)       528
activation_4 (Activation)       (None, 8, 8, 16)       0
max_pooling2d_5 (MaxPooling2D)  (None, 2, 2, 16)       0
flatten_2 (Flatten)             (None, 64)             0
dense_2 (Dense)                 (None, 6)              390
Total params: 1,310
Trainable params: 1,310
Non-trainable params: 0
Test failed
Expected value
['ReLU', (None, 64, 64, 8), 0]
does not match the input value:
['Activation', (None, 64, 64, 8), 0]
AssertionError                            Traceback (most recent call last)
<ipython-input-...> in <module>
     15                 ['Dense', (None, 6), 390, 'softmax']]
     16
---> 17 comparator(summary(conv_model), output)

~/work/release/W1A2/test_utils.py in comparator(learner, instructor)
     20           "\n\n does not match the input value: \n\n",
     21           colored(f"{a}", "red"))
---> 22     raise AssertionError("Error in test")
     23     print(colored("All tests passed!", "green"))
     24

AssertionError: Error in test
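The mismatch is only in the layer's class name: Activation('relu') and ReLU() apply the same function, but the test compares the class names reported in the summary, as this quick check shows (a minimal sketch, assuming tfl aliases tensorflow.keras.layers):

```python
import tensorflow.keras.layers as tfl

# Both layers apply relu, but their class names differ, which is
# what the comparator appears to match against.
print(type(tfl.Activation('relu')).__name__)  # Activation
print(type(tfl.ReLU()).__name__)              # ReLU
```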