I am getting this error saying there should be 8 layers, but apparently I only have 5. Please help! Thank you.
ERROR MESSAGE:
```
<ipython-input-39-0346cb4bf847> in <module>
     10  ['Dense', (None, 1), 1281, 'linear']] #linear is the default activation
     11
---> 12 comparator(summary(model2), alpaca_summary)
     13
     14 for layer in summary(model2):

~/work/W2A2/test_utils.py in comparator(learner, instructor)
     14 def comparator(learner, instructor):
     15     if len(learner) != len(instructor):
---> 16         raise AssertionError(f"The number of layers in the model is incorrect. Expected: {len(instructor)} Found: {len(learner)}")
     17     for a, b in zip(learner, instructor):
     18         if tuple(a) != tuple(b):

AssertionError: The number of layers in the model is incorrect. Expected: 8 Found: 5
```
Hey, the issue seems to be that you're passing the original inputs to base_model rather than the processed inputs. I'll also be removing the code from your post, since sharing solution code violates the honor code. Once you fix this, the error should be resolved.
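For anyone who hits the same assertion, here is a minimal sketch of the mistake being described, using the variable names from the assignment template (this is a fragment, not the full graded function):

```python
# Broken: base_model is handed the raw `inputs`, so the augmentation and
# preprocessing layers never end up on the path from inputs to outputs,
# and the comparator counts fewer layers than expected.
x = data_augmentation(inputs)
x = preprocess_input(x)
x = base_model(inputs, training=False)

# Fixed: base_model consumes the processed tensor x.
x = base_model(x, training=False)
```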
Hello sir, I am having the same issue. This is my code; can you please let me know where I messed up?
```python
# UNQ_C2
# GRADED FUNCTION
def alpaca_model(image_shape=IMG_SIZE, data_augmentation=data_augmenter()):
    ''' Define a tf.keras model for binary classification out of the MobileNetV2 model
    Arguments:
        image_shape -- Image width and height
        data_augmentation -- data augmentation function
    Returns:
        tf.keras.model
    '''
    input_shape = image_shape + (3,)
    ### START CODE HERE
    base_model = tf.keras.applications.MobileNetV2(input_shape=input_shape,
                                                   include_top=False,  # <== Important!!!!
                                                   weights='imagenet')  # From imageNet
    # freeze the base model by making it non trainable
    base_model.trainable = False
    # create the input layer (Same as the imageNetv2 input size)
    inputs = tfl.Input(shape=input_shape)
    # apply data augmentation to the inputs
    x = data_augmentation(inputs)
    # data preprocessing using the same weights the model was trained on
    x = preprocess_input(inputs)
    # set training to False to avoid keeping track of statistics in the batch norm layer
    x = base_model(x)
    # add the new Binary classification layers
    # use global avg pooling to summarize the info in each channel
    x = tfl.GlobalAveragePooling2D()(x)
    # include dropout with probability of 0.2 to avoid overfitting
    x = tfl.Dropout(0.2)(x)
    # use a prediction layer with one neuron (as a binary classifier only needs one)
    outputs = tfl.Dense(1)(x)
    ### END CODE HERE
    model = tf.keras.Model(inputs, outputs)
    return model
```
If you don't keep threading x through each call, you "lose" a layer: x = data_augmentation(inputs) becomes a dead end. The data augmentation layer never makes it into the pipeline if you then turn around and set x = preprocess_input(inputs), because the previous value of x is simply dropped.
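Concretely, the fix in the code above is a one-line change (keeping the same variable names); each step should consume the tensor produced by the step before it:

```python
# data preprocessing using the same weights the model was trained on
x = preprocess_input(x)  # was preprocess_input(inputs): pass the augmented x onward
```

Keras builds the functional model by tracing tensor connectivity from inputs to outputs, so data_augmentation only counts as a layer if its output actually lies on that path.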