DLS C4 W1 A2 convolutional_model() stuck with dimensions

Any suggestions?


```
ValueError                                Traceback (most recent call last)
<ipython-input-...> in <module>
----> 1 conv_model = convolutional_model((64, 64, 3))
      2 conv_model.compile(optimizer='adam',
      3                    loss='categorical_crossentropy',
      4                    metrics=['accuracy'])
      5 conv_model.summary()

<ipython-input-...> in convolutional_model(input_shape)
     38     Z1 = tfl.Conv2D(filters=8, kernel_size=(4,4), strides=(1,1), padding='same')(input_img),
     39     A1 = tfl.ReLU()(Z1),
---> 40     P1 = tfl.MaxPool2D(pool_size=(8, 8), strides=(8,8), padding='same')(A1),
     41     Z2 = tfl.Conv2D(filters=16, kernel_size=(2,2), strides=(1,1), padding='same')(P1),
     42     A2 = tfl.ReLU()(Z2),

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
    924     if _in_functional_construction_mode(self, inputs, args, kwargs, input_list):
    925       return self._functional_construction_call(inputs, args, kwargs,
--> 926                                                 input_list)
    927
    928     # Maintains info about the `Layer.call` stack.

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in _functional_construction_call(self, inputs, args, kwargs, input_list)
   1090     # TODO(reedwm): We should assert input compatibility after the inputs
   1091     # are casted, not before.
---> 1092    input_spec.assert_input_compatibility(self.input_spec, inputs, self.name)
   1093     graph = backend.get_graph()
   1094     # Use `self._name_scope()` to avoid auto-incrementing the name.

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name)
    178                          'expected ndim=' + str(spec.ndim) + ', found ndim=' +
    179                          str(ndim) + '. Full shape received: ' +
--> 180                          str(x.shape.as_list()))
    181     if spec.max_ndim is not None:
    182       ndim = x.shape.ndims

ValueError: Input 0 of layer max_pooling2d_10 is incompatible with the layer: expected ndim=4, found ndim=5. Full shape received: [1, None, 64, 64, 8]
```

It looks like you made the classic mistake of adding commas onto the end of the lines when you are using the Functional Model, instead of the Sequential Model. You need the commas in the Sequential case, but they are a disaster in the Functional case. If you say:

```python
myOutputTensor = myLayerFunction(myInputTensor),
```

That comma on the end turns the output of the RHS into a one-element Python tuple, which Keras then converts back into a tensor with one more dimension than the base tensor that was returned by the function.
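You can see the mechanism in a couple of lines of plain Python (a minimal sketch, not from the notebook):

```python
x = 42,           # trailing comma: x is the one-element tuple (42,)
y = 42            # no comma: y is just the int 42
print(type(x))    # <class 'tuple'>
print(type(y))    # <class 'int'>
```

The same thing happens to `A1` in the functional graph: it becomes a one-element tuple wrapping the tensor. When that tuple reaches `MaxPool2D`, Keras converts it into a single tensor with an extra leading dimension of 1, which is exactly why the error above reports `ndim=5` and shape `[1, None, 64, 64, 8]` instead of the expected 4-D `(None, 64, 64, 8)`.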

Here’s a great thread to read for more details on how to use the two styles of layer programming in TF/Keras.
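For concreteness, here is a minimal sketch of the two styles side by side, using the layer parameters visible in the traceback above. The tail of the functional model (the `Flatten`/`Dense` part and the `units=6` output size) is a hypothetical placeholder, not the assignment's actual solution:

```python
import tensorflow as tf
import tensorflow.keras.layers as tfl

# Functional style: each line is a plain assignment, so NO trailing commas.
def convolutional_model(input_shape):
    input_img = tf.keras.Input(shape=input_shape)
    Z1 = tfl.Conv2D(filters=8, kernel_size=(4, 4), strides=(1, 1), padding='same')(input_img)
    A1 = tfl.ReLU()(Z1)
    P1 = tfl.MaxPool2D(pool_size=(8, 8), strides=(8, 8), padding='same')(A1)
    Z2 = tfl.Conv2D(filters=16, kernel_size=(2, 2), strides=(1, 1), padding='same')(P1)
    A2 = tfl.ReLU()(Z2)
    # ... the assignment's remaining pooling layer would go here ...
    F = tfl.Flatten()(A2)
    outputs = tfl.Dense(units=6, activation='softmax')(F)  # hypothetical output layer
    return tf.keras.Model(inputs=input_img, outputs=outputs)

# Sequential style: the layers are elements of a Python list, so here the
# commas ARE required -- they separate the list items.
seq_model = tf.keras.Sequential([
    tfl.Conv2D(filters=8, kernel_size=(4, 4), padding='same', input_shape=(64, 64, 3)),
    tfl.ReLU(),
    tfl.MaxPool2D(pool_size=(8, 8), strides=(8, 8), padding='same'),
])
```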

Thank you @paulinpaloalto!!
After trying everything, it’s quite funny that commas turned out to be the problem.


Yes, it’s one of those things that seems just too easy for the interpreter to have figured out. But we know that computers are very literal-minded. Oh, well … :smile:
