When we fine-tune our model, what does model2.layers[4] do? Why are we calling out the layer at index 4 of model2 and assigning it to the variable base_model?
model2.layers[4] refers to the MobileNetV2 base model; that is why it is assigned to base_model. We want to fine-tune the layers of that model starting at index 120. Here is the configuration of model2:
Model: "model"
_________________________________________________________________
Layer (type)                                        Output Shape           Param #
=================================================================
input_5 (InputLayer)                                [(None, 160, 160, 3)]  0
sequential_3 (Sequential)                           (None, 160, 160, 3)    0
tf.math.truediv (TFOpLambda)                        (None, 160, 160, 3)    0
tf.math.subtract (TFOpLambda)                       (None, 160, 160, 3)    0
mobilenetv2_1.00_160 (Functional)                   (None, 5, 5, 1280)     2257984
global_average_pooling2d_1 (GlobalAveragePooling2D) (None, 1280)           0
dropout (Dropout)                                   (None, 1280)           0
dense (Dense)                                       (None, 1)              1281
=================================================================
Total params: 2,259,265
Trainable params: 1,281
Non-trainable params: 2,257,984
_________________________________________________________________
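As a minimal sketch, assuming model2 was built as in the summary above (data augmentation, preprocessing, MobileNetV2, pooling, dropout, dense head), the fine-tuning step might look like this; the learning rate value is an assumption, not taken from the original post:

import tensorflow as tf

# Index 4 is the MobileNetV2 functional sub-model (see the summary:
# input_5 = 0, sequential_3 = 1, tf.math.truediv = 2,
# tf.math.subtract = 3, mobilenetv2_1.00_160 = 4).
base_model = model2.layers[4]

# Unfreeze the base model, then re-freeze everything before index 120
# so only the top layers of MobileNetV2 are fine-tuned.
base_model.trainable = True
fine_tune_at = 120
for layer in base_model.layers[:fine_tune_at]:
    layer.trainable = False

# Recompile with a low learning rate before continuing training
# (1e-5 is an assumed value for illustration).
model2.compile(
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
    metrics=["accuracy"],
)

With this setup the frozen MobileNetV2 weights below index 120 stay at their pretrained values, while the remaining layers plus the dense head (the 1,281 trainable params in the summary, before unfreezing) are updated during the fine-tuning epochs.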