Missing layers in Transfer_learning_with_MobileNet_v1

Hello,
I am trying to complete the assignment, but when I run Exercise 2 (alpaca_model) I get the following error message: “The number of layers in the model is incorrect. Expected: 8 Found: 5”.
How do I find the missing layers? Please help. Thank you!

Please post a screen capture image that shows the model summary. Most of the assignments include this output.

alpaca_summary = [['InputLayer', [(None, 160, 160, 3)], 0],
                  ['Sequential', (None, 160, 160, 3), 0],
                  ['TensorFlowOpLayer', [(None, 160, 160, 3)], 0],
                  ['TensorFlowOpLayer', [(None, 160, 160, 3)], 0],
                  ['Functional', (None, 5, 5, 1280), 2257984],
                  ['GlobalAveragePooling2D', (None, 1280), 0],
                  ['Dropout', (None, 1280), 0, 0.2],
                  ['Dense', (None, 1), 1281, 'linear']]  # linear is the default activation

Ok, here is the model you are being compared with. So where are the differences?
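If it helps, one quick way to compare your own model against that list is to print each layer's class name, output shape, and parameter count. This is just a sketch, assuming model2 is whatever your call to alpaca_model returned (the variable names in your notebook may differ):

model2 = alpaca_model(IMG_SIZE, data_augmentation)
model2.summary()

# Or print the fields that the test compares, one layer per line:
for layer in model2.layers:
    print(layer.__class__.__name__, layer.output_shape, layer.count_params())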

Thank you for your quick response!
I have implemented the layers specified in the problem statement. For instance, TensorFlowOpLayer, Functional, etc. are not specified in the problem statement. What am I missing here?

You have to do some mapping back to what was asked for. Some of the layers in that summary are easy to recognize and map back to the code: e.g. the input layer, the average pooling layer, and the dropout and dense layers. But the Functional layer is the imported MobileNet model, right? So what you are missing is whatever comes between the input layer and base_model. What was that? Go back and look at the comments and the instructions: what is between the input and base_model?

I see only the following in the assignment:

    # create the input layer (Same as the imageNetv2 input size)
    inputs = tf.keras.Input(shape=None) 
    
    # apply data augmentation to the inputs
    x = None
    
    # data preprocessing using the same weights the model was trained on
    x = preprocess_input(None) 
    
    # set training to False to avoid keeping track of statistics in the batch norm layer
    x = base_model(None, training=None) 

Ok, did you fill in the data augmentation and the preprocess_input layers? The data augmentation is what gives you the Sequential layer (recall that the function being called there returns a Keras Sequential object). It turns out the two TensorFlowOp layers are caused by preprocess_input, although there is no way you could know that without looking at the preprocess_input code in more detail.
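For reference, the augmenter built in the earlier exercise is typically a small Sequential model along these lines (just a sketch, so check your own Exercise 1 code; in older TF versions these layers live under tf.keras.layers.experimental.preprocessing):

import tensorflow as tf

def data_augmenter():
    # A small Sequential model that randomly flips and rotates its input
    data_augmentation = tf.keras.Sequential()
    data_augmentation.add(tf.keras.layers.RandomFlip('horizontal'))
    data_augmentation.add(tf.keras.layers.RandomRotation(0.2))
    return data_augmentation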

So why do those not show up in your version of the model?

The other thing to note is that the “summary” of the model only shows layers that are actually part of the compute graph needed to produce the final output. So it could be that you did include the data augmentation (for example), but that you wired it up incorrectly (with the wrong input or the wrong output), so it does not end up being part of the real compute graph. At each step you take an input and produce an output; that output then becomes the input to the next layer. If you get the inputs or outputs wrong, that breaks (disconnects) the graph.
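To make the chaining concrete, here is a minimal sketch (not the exact graded code; data_augmentation, preprocess_input, base_model and inputs are the objects already defined in the notebook) of correct chaining versus a call that disconnects the graph:

# Correct: each step consumes the previous step's output (x), so every
# layer ends up on the path from the inputs to the final output.
x = data_augmentation(inputs)        # shows up as the Sequential layer
x = preprocess_input(x)              # shows up as the TensorFlowOpLayer(s)
x = base_model(x, training=False)    # shows up as the Functional layer

# Broken: passing inputs again here bypasses the augmentation and
# preprocessing, so those layers never join the compute graph.
x = base_model(inputs, training=False)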

Thank you!
Based on your hints, I found the problem: instead of giving x as the input, I had given inputs by mistake.
Your timely help is appreciated.

Great news! Glad to hear you were able to find the solution based on the previous discussion. Onward! :smiley: