Create a new model using transfer learning. It works… but why?

Hi All…

In the notebook everything works well, and I also tried selecting layers other than ‘mixed7’.

It works… but I don’t fully understand how it works…

This is the code to select a layer (the final layer):

last_layer = pre_trained_model.get_layer('mixed7')
print('last layer output shape: ', last_layer.output_shape)
last_output = last_layer.output

In this way I selected ONLY ONE layer, I suppose, not a subset of pre_trained_model…

After this, the author of the notebook appends new layers on top of the pre_trained_model using this:

# Flatten the output layer to 1 dimension
x = layers.Flatten()(last_output)
# Add a fully connected layer with 1,024 hidden units and ReLU activation
x = layers.Dense(1024, activation='relu')(x)
# Add a dropout rate of 0.2
x = layers.Dropout(0.2)(x)
# Add a final sigmoid layer for classification
x = layers.Dense(1, activation='sigmoid')(x)

# Append the dense network to the base model
model = Model(pre_trained_model.input, x)

I’m not very skilled in Python, but reading this code I suppose that the layer ‘x’ is appended to the layer ‘pre_trained_model.input’. So… the model called ‘model’ should consist of only two layers…

But when I call model.summary(), I see the whole InceptionV3 model up to ‘mixed7’, followed by my final Dense layers…

Everything works fine… but I can’t figure out why… :smiley:

Can you help me?

This line creates a model with input = pre_trained_model.input (the first layer) and output = x (the last layer). As you noticed, there are many layers in between these two.
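
To make that concrete, here is a minimal, self-contained sketch (the layer names are made up for illustration, not the notebook’s InceptionV3 architecture): even though only the input tensor and one intermediate tensor are handed to Model, summary() shows that every layer connecting them is part of the model.

import tensorflow as tf
from tensorflow.keras import layers, Model

# A tiny functional model with a few named layers
inputs = tf.keras.Input(shape=(32,))
h1 = layers.Dense(16, activation='relu', name='hidden_1')(inputs)
h2 = layers.Dense(8, activation='relu', name='hidden_2')(h1)
out = layers.Dense(1, activation='sigmoid', name='final')(h2)

# Build a model from the input tensor to an intermediate tensor.
# We only mention `inputs` and `h2`, but Keras traces the chain of
# layer calls between them, so 'hidden_1' is pulled in automatically.
sub_model = Model(inputs, h2)
sub_model.summary()  # lists the input layer, hidden_1 and hidden_2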

Ok, just a last question:

does pre_trained_model.input hold some sort of linked list connecting it to the other layers in pre_trained_model?

I’m not quite sure of the intricacies, but it starts with the pre-trained model’s input and ends with your last defined x, including all layers in between.
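
Roughly, yes. In the functional API every tensor returned by a layer call records which layer produced it, and Model(inputs, outputs) walks those records backwards from the output tensor until it reaches the input. A hedged peek, reusing the toy model above (`_keras_history` is a private Keras attribute, shown purely for illustration, and may change between versions):

# Each symbolic tensor remembers the layer call that produced it;
# this is the "link" Keras follows when tracing the graph.
# (`_keras_history` is a private attribute; illustration only.)
print(h2._keras_history.layer.name)  # -> 'hidden_2'
print(h1._keras_history.layer.name)  # -> 'hidden_1'

# The public view of the same bookkeeping: the traced model exposes
# every layer found between its input and output tensors.
print([l.name for l in sub_model.layers])
# -> something like ['input_1', 'hidden_1', 'hidden_2']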