W1A2 error in notebook

```python
happy_model = happyModel()
```

I believe `happyModel()` is implemented correctly, but a subsequent part of the notebook still fails; see below:

```python
# Print a summary for each layer
for layer in summary(happy_model):
    print(layer)

output = [['ZeroPadding2D', (None, 70, 70, 3), 0, ((3, 3), (3, 3))],
          ['Conv2D', (None, 64, 64, 32), 4736, 'valid', 'linear', 'GlorotUniform'],
          ['BatchNormalization', (None, 64, 64, 32), 128],
          ['ReLU', (None, 64, 64, 32), 0],
          ['MaxPooling2D', (None, 32, 32, 32), 0, (2, 2), (2, 2), 'valid'],
          ['Flatten', (None, 32768), 0],
          ['Dense', (None, 1), 32769, 'sigmoid']]

comparator(summary(happy_model), output)
```


```
AttributeError                            Traceback (most recent call last)
<ipython-input-...> in <module>
      1 happy_model = happyModel()
      2 # Print a summary for each layer
----> 3 for layer in summary(happy_model):
      4     print(layer)
      5

~/work/release/W1A2/test_utils.py in summary(model)
     30     result = []
     31     for layer in model.layers:
---> 32         descriptors = [layer.__class__.__name__, layer.output_shape, layer.count_params()]
     33         if (type(layer) == Conv2D):
     34             descriptors.append(layer.padding)

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in output_shape(self)
   2177     """
   2178     if not self._inbound_nodes:
-> 2179       raise AttributeError('The layer has never been called '
   2180                            'and thus has no defined output shape.')
   2181     all_output_shapes = set(

AttributeError: The layer has never been called and thus has no defined output shape.
```

I wouldn't want to give away too much. The error message is misleading, in my opinion. But if you think about it: how should TensorFlow know what the dimensions of the layers should be if it doesn't know how the input is shaped? :wink:
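
If it helps, here is a minimal sketch of the failure mode (placeholder layers, not the assignment model): a Sequential model built without any input information has layers whose output shapes are simply undefined until the model is called.

```python
import tensorflow as tf

# A Sequential model whose layers were never given an input shape.
model = tf.keras.Sequential([
    tf.keras.layers.ZeroPadding2D(padding=3),
    tf.keras.layers.Conv2D(32, 7),
])

# The layers exist, but Keras has never seen data or a declared input,
# so it cannot know their output shapes yet.
try:
    print(model.layers[0].output_shape)
except AttributeError as err:
    print(err)  # The layer has never been called and thus has no defined output shape.
```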

Yes, but the problem is that the notebook only asks you to implement happyModel and then has a cell that tries to print every layer, resulting in the error above.

There is something you can do in the implementation to make the code work. Did you use the input size in your model implementation?

OK, adding an InputLayer fixes it.
Still, the instructions need to be fixed:
they explicitly tell you which layers to instantiate but make no mention of an InputLayer.
This is very confusing.
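
For reference, a minimal sketch of that InputLayer fix (placeholder layers, not the full assignment architecture): declaring the input size up front lets every later layer infer its output shape.

```python
import tensorflow as tf

# Declaring the input size with an explicit InputLayer.
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(64, 64, 3)),
    tf.keras.layers.ZeroPadding2D(padding=3),
    tf.keras.layers.Conv2D(32, 7),
])

# Output shapes are now defined for every layer.
for layer in model.layers:
    print(layer.__class__.__name__, layer.output_shape)
```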

I also added an input layer, but if you look through the discussions here you'll find there is another way: giving the first layer an additional argument (`input_shape`). So technically the instructions are sufficient, and it's even nicer to be pushed to research the API! :relieved:
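
A sketch of that variant, again with placeholder layers: any Keras layer used as the first layer of a Sequential model accepts an `input_shape` keyword argument, which creates the input specification implicitly.

```python
import tensorflow as tf

# Same idea without an explicit InputLayer: pass input_shape to the first layer.
model = tf.keras.Sequential([
    tf.keras.layers.ZeroPadding2D(padding=3, input_shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(32, 7),
])

print(model.layers[0].output_shape)  # (None, 70, 70, 3)
```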


Hey,
I disagree because the input_shape parameter is not mentioned on the documentation page of the first layer.

I agree, it’s confusing.
