Week 2 Assignment 2 Exercise 2 Alpaca Model

I am just wondering what we should do for the first ‘None’, since there’s no instruction for this line of code.

Really all you’re doing here is following the instructions.

Did you set “base_model = …” to a Keras application of MobileNetV2, and include the required three parameters?

Then in the next line, you set base_model.trainable to False.
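For reference, that setup can be sketched like this (a minimal sketch, assuming the standard MobileNetV2 call; note the assignment uses weights=‘imagenet’, while weights=None is used here only so the example runs without downloading anything):

```python
import tensorflow as tf

IMG_SIZE = (160, 160)
IMG_SHAPE = IMG_SIZE + (3,)  # MobileNetV2 expects a 3-channel input

# The three required parameters: input_shape, include_top, weights.
# The assignment uses weights='imagenet'; weights=None avoids the download here.
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights=None)

# Freeze the convolutional base so its weights are not updated during training.
base_model.trainable = False
```

With include_top=False and a 160x160 input, the base model outputs a 4-D feature map of shape (None, 5, 5, 1280).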

Hi TMosh,

Yes, I set ‘base_model = tf.keras.applications.MobileNetV2…’ with the three required parameters.
After that, I set base_model.trainable to False, and then I followed the instructions until this line.
I am confused about what we are expected to pass to this call. I tried ‘base_model(inputs, training=False)’, and it shows an error like this: ‘Input 0 of layer Conv1_pad is incompatible with the layer: expected ndim=4, found ndim=3. Full shape received: [None, 160, 160]’.


In your original post, that None should be x.

Hi Chito,
The whole sequence has to be correct for the code to run without error. Sometimes the code appears to run, but the test still shows that something went wrong, for example that your output shape differs from the expected output.

In your specific question, you should feed x as an input as TMosh said, and pay attention to the previous lines.

In my case, I had left data_augmentation without an input, though it should be fed with inputs.
Hope that helps!
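Putting the advice above together, the full forward pass can be sketched end to end like this (a runnable sketch, not the notebook’s exact code: the data_augmentation stand-in and the head layers after pooling are assumptions based on the usual transfer-learning recipe, and weights=None replaces the assignment’s weights=‘imagenet’ just to keep the example self-contained):

```python
import tensorflow as tf

IMG_SIZE = (160, 160)
input_shape = IMG_SIZE + (3,)

# Stand-ins for the notebook's globals (assumptions for a runnable sketch):
data_augmentation = tf.keras.Sequential([tf.keras.layers.RandomFlip('horizontal')])
preprocess_input = tf.keras.applications.mobilenet_v2.preprocess_input
base_model = tf.keras.applications.MobileNetV2(input_shape=input_shape,
                                               include_top=False,
                                               weights=None)  # assignment: 'imagenet'
base_model.trainable = False

inputs = tf.keras.Input(shape=input_shape)
x = data_augmentation(inputs)        # augmentation must be fed `inputs`
x = preprocess_input(x)
x = base_model(x, training=False)    # then `x`, not `inputs`, goes downstream
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.2)(x)  # assumed head layers
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
```

The key point is that each step consumes the output of the previous one: once inputs has gone through data_augmentation, every later call operates on x.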

Thanks for this! I double-checked my code and now everything works perfectly!

I also did the same, but I am getting this error:
ValueError: Input 0 of layer global_average_pooling2d_29 is incompatible with the layer: expected ndim=4, found ndim=2. Full shape received: [None, 1000]

# create the input layer (Same as the imageNetv2 input size)
inputs = tf.keras.Input(shape=input_shape)
# apply data augmentation to the inputs
x = data_augmentation(inputs)
# data preprocessing using the same weights the model was trained on
x = preprocess_input(x)
# set training to False to avoid keeping track of statistics in the batch norm layer
x = base_model(x, training=False)

Hi, I’ve got the same issue. Do you already have a solution?

Still don’t have a clue what’s wrong here!