C4 W2: Transfer Learning with MobileNetV2

Hi,
In Exercise 3, Line 1, base_model = model2.layers[4], I can’t understand why we use 4 as the index into the model’s layers. I would appreciate it if someone could explain it.

Look at the output in the previous cell that shows the “summary” of model2:

All tests passed!
['InputLayer', [(None, 160, 160, 3)], 0]
['Sequential', (None, 160, 160, 3), 0]
['TensorFlowOpLayer', [(None, 160, 160, 3)], 0]
['TensorFlowOpLayer', [(None, 160, 160, 3)], 0]
['Functional', (None, 5, 5, 1280), 2257984]
['GlobalAveragePooling2D', (None, 1280), 0]
['Dropout', (None, 1280), 0, 0.2]
['Dense', (None, 1), 1281, 'linear']
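As a quick sanity check (a sketch, not the assignment code), you can rebuild just the MobileNetV2 base on your own machine and confirm it produces exactly that Functional layer's shape and parameter count; weights=None skips the ImageNet download but leaves the architecture, and hence the shapes, unchanged:

```python
import tensorflow as tf

# Rebuild only the MobileNetV2 base; weights=None avoids downloading the
# ImageNet weights while keeping the same architecture as the assignment.
base = tf.keras.applications.MobileNetV2(input_shape=(160, 160, 3),
                                         include_top=False,
                                         weights=None)
print(base.output_shape)    # (None, 5, 5, 1280), the Functional layer's shape
print(base.count_params())  # 2257984, matching the summary above
```

The 5x5 spatial size is just 160 divided by the network's total stride of 32, and 1280 is the channel count of MobileNetV2's final conv block.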

So you can see that index 4 is that Functional layer. If you then go back and compare that output to the logic that defines alpaca_model, you’ll see that it maps neatly onto the return value of this call in the code:

base_model = tf.keras.applications.MobileNetV2(input_shape=input_shape,
                                               include_top=False,  # <== Important!!!!
                                               weights='imagenet') # from ImageNet

Study the logic and note where that base_model actually lands in the compute graph.
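One more thought: hard-coding the index 4 is brittle, because the layer indices shift if the preprocessing ops change between TF versions. An alternative is to locate the nested model by type. Here is a sketch with a simplified stand-in for alpaca_model (a Rescaling layer replaces preprocess_input, so the indices here differ from the summary above, which is exactly why searching by type is safer; weights=None again just skips the download):

```python
import tensorflow as tf

# A simplified stand-in for alpaca_model / model2.
input_shape = (160, 160, 3)
base = tf.keras.applications.MobileNetV2(input_shape=input_shape,
                                         include_top=False,
                                         weights=None)
base.trainable = False  # freeze the base for transfer learning

inputs = tf.keras.Input(shape=input_shape)
# Rescale [0, 255] pixels to [-1, 1], standing in for preprocess_input.
x = tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0)(inputs)
x = base(x, training=False)  # <== the nested model lands here in the graph
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = tf.keras.layers.Dense(1)(x)
model2 = tf.keras.Model(inputs, outputs)

# Find the nested MobileNetV2 by type instead of a hard-coded index;
# any nested tf.keras.Model shows up as "Functional" in the summary.
base_model = next(l for l in model2.layers if isinstance(l, tf.keras.Model))
print(base_model.output_shape)  # (None, 5, 5, 1280)
```

Because a nested model is itself a layer of the outer model, the isinstance check picks it out regardless of how many preprocessing layers precede it.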
