Course 4 Week 2 Assignment 2 ValueError

ValueError Traceback (most recent call last)
in <module>
----> 1 model2 = alpaca_model(IMG_SIZE, data_augmentation)

in alpaca_model(image_shape, data_augmentation)
37 # Add the new Binary classification layers
38 # use global avg pooling to summarize the info in each channel
---> 39 x = tf.keras.layers.GlobalAveragePooling2D()(x)
40 #include dropout with probability of 0.2 to avoid overfitting
41 x = tf.keras.layers.Dropout(0.2)(x)

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in call(self, *args, **kwargs)
924 if _in_functional_construction_mode(self, inputs, args, kwargs, input_list):
925 return self._functional_construction_call(inputs, args, kwargs,
--> 926 input_list)
927
928 # Maintains info about the Layer.call stack.

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in _functional_construction_call(self, inputs, args, kwargs, input_list)
1090 # TODO(reedwm): We should assert input compatibility after the inputs
1091 # are casted, not before.
--> 1092 input_spec.assert_input_compatibility(self.input_spec, inputs, self.name)
1093 graph = backend.get_graph()
1094 # Use self._name_scope() to avoid auto-incrementing the name.

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name)
178 'expected ndim=' + str(spec.ndim) + ', found ndim=' +
179 str(ndim) + '. Full shape received: ' +
--> 180 str(x.shape.as_list()))
181 if spec.max_ndim is not None:
182 ndim = x.shape.ndims

ValueError: Input 0 of layer global_average_pooling2d_16 is incompatible with the layer: expected ndim=4, found ndim=2. Full shape received: [None, 1000]

input_shape = image_shape + (3,)

### START CODE HERE

base_model = tf.keras.applications.MobileNetV2(input_shape=input_shape,
                                           include_top=True,
                                           weights="imagenet") # From imageNet

# Freeze the base model by making it non trainable
base_model.trainable = False 

# create the input layer (Same as the imageNetv2 input size)
inputs = tf.keras.Input(shape=input_shape) 

# apply data augmentation to the inputs
x = data_augmentation(inputs)

# data preprocessing using the same weights the model was trained on
x = preprocess_input(x)

# set training to False to avoid keeping track of statistics in the batch norm layer
x = base_model(x, training=True)

# Add the new Binary classification layers
# use global avg pooling to summarize the info in each channel
x = tf.keras.layers.GlobalAveragePooling2D()(x)
#include dropout with probability of 0.2 to avoid overfitting
x = tf.keras.layers.Dropout(0.2)(x)
    
# create a prediction layer with one neuron (as a classifier only needs one)
prediction_layer = tf.keras.layers.Dense(1,activation='softmax')

### END CODE HERE

outputs = prediction_layer(x) 
model = tf.keras.Model(inputs, outputs)
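
For reference, here is a minimal standalone sketch (not the graded function; it assumes the 160x160 input size used in this assignment) of the input that GlobalAveragePooling2D expects. The layer averages over the two spatial dimensions of a 4-D feature map, so a 2-D tensor like the [None, 1000] in the traceback (the 1000-way ImageNet logits that MobileNetV2 produces when its classification head is kept) cannot be pooled:

import tensorflow as tf

# With the ImageNet classification head removed (include_top=False), MobileNetV2
# ends in a 4-D feature map of shape (batch, height, width, channels).
base = tf.keras.applications.MobileNetV2(input_shape=(160, 160, 3),
                                         include_top=False,
                                         weights="imagenet")
print(base.output_shape)  # (None, 5, 5, 1280)

# GlobalAveragePooling2D averages over height and width, one value per channel.
pooled = tf.keras.layers.GlobalAveragePooling2D()(base.output)
print(pooled.shape)       # (None, 1280)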

You need to pass a data argument to the Dense layer, just like you did for all the other layers.
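
In the functional API that means calling the layer object on a tensor, the same pattern used for the pooling and dropout layers above (a sketch of the pattern only):

prediction_layer = tf.keras.layers.Dense(1)   # define the layer object
outputs = prediction_layer(x)                 # then call it on the tensor x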

After passing the argument to the Dense layer, the suffix on global_average_pooling2d increments from 16 to 29 (the layer-name counter goes up each time the model is rebuilt), but the same error is still thrown:

ValueError: Input 0 of layer global_average_pooling2d_29 is incompatible with the layer: expected ndim=4, found ndim=2. Full shape received: [None, 1000]
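
One quick way to see where the 2-D tensor comes from is to print the shape of x right before the pooling layer (a debugging step, not part of the graded code):

x = base_model(x, training=True)   # unchanged line from the code above
print(x.shape)  # a 4-D shape such as (None, 5, 5, 1280) can be pooled;
                # (None, 1000) means the base model still ends in its 1000-way ImageNet head
x = tf.keras.layers.GlobalAveragePooling2D()(x)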