Transfer Learning - layers


I am a little confused about the following code. Are we basically just choosing one layer from the whole model? Or is it everything up to that layer?

Also, why are we not using Sequential anymore?

```python
# Choose mixed7 as the last layer of your base model
last_layer = pre_trained_model.get_layer('mixed7')
print('last layer output shape: ', last_layer.output_shape)
last_output = last_layer.output
```

model.get_layer returns a reference to a single layer, not a sub-model up to that layer. Grabbing that layer's output tensor is what lets you build on top of everything up to and including mixed7.
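A minimal sketch of this on a toy model (the layer names here are made up for illustration) shows that get_layer hands back one Layer object whose output tensor you can inspect or reuse:

```python
import tensorflow as tf

# Small functional model just to illustrate get_layer;
# the layer names 'hidden' and 'out' are assumptions for this sketch.
inputs = tf.keras.Input(shape=(8,))
x = tf.keras.layers.Dense(4, name='hidden')(inputs)
outputs = tf.keras.layers.Dense(1, name='out')(x)
model = tf.keras.Model(inputs, outputs)

layer = model.get_layer('hidden')  # a single Layer object, not a sub-model
print(type(layer).__name__)        # Dense
print(layer.output.shape)          # the tensor you could build new layers on
```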

Using the functional API is better here because it provides more flexibility, such as branching and sharing of layers, which Sequential cannot express.
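To make the flexibility point concrete, here is a hedged sketch of branching with the functional API: two branches consume the same intermediate tensor and are merged again, a topology that a Sequential model cannot represent (all shapes and layer sizes below are illustrative):

```python
import tensorflow as tf

# Branching with the functional API -- not possible with Sequential.
inputs = tf.keras.Input(shape=(32,))
shared = tf.keras.layers.Dense(16, activation='relu')(inputs)

# Two branches fed by the same intermediate tensor
branch_a = tf.keras.layers.Dense(8, activation='relu')(shared)
branch_b = tf.keras.layers.Dense(8, activation='relu')(shared)

# Merge the branches and produce a single output
merged = tf.keras.layers.Concatenate()([branch_a, branch_b])
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(merged)

model = tf.keras.Model(inputs, outputs)
model.summary()
```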

Also, about the mixed7 layer: look at the architecture of the InceptionV3 model you just used.

mixed7 is a layer somewhere before the end of the model. As you know, in transfer learning you load a pre-trained model and then make your own modifications to the last few layers to tune them to your desired output. Here it was chosen that the last layer you use from InceptionV3 is mixed7, and after that you complete the neural network as you wish. Finally, taking the output of mixed7 and feeding it as the input of some other layer is only possible with the functional API, not the Sequential one.
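Putting the pieces together, a sketch of the whole pattern might look like the following. It follows the course's setup (150x150x3 inputs, a binary classifier head), but `weights=None` is used here purely so the sketch builds without downloading the pre-trained weights; in the real exercise you would load the InceptionV3 weights file instead:

```python
import tensorflow as tf
from tensorflow.keras.applications.inception_v3 import InceptionV3

# Load the base model without its classification head.
# weights=None is an assumption for this sketch; in practice you
# would use the pre-trained ImageNet weights.
pre_trained_model = InceptionV3(input_shape=(150, 150, 3),
                                include_top=False,
                                weights=None)

# Freeze the base model so only the new layers are trained
for layer in pre_trained_model.layers:
    layer.trainable = False

# Cut the model off at mixed7 and take its output tensor
last_layer = pre_trained_model.get_layer('mixed7')
last_output = last_layer.output

# Build your own head on top of mixed7's output (functional API)
x = tf.keras.layers.Flatten()(last_output)
x = tf.keras.layers.Dense(1024, activation='relu')(x)
x = tf.keras.layers.Dense(1, activation='sigmoid')(x)

model = tf.keras.Model(pre_trained_model.input, x)
```

Note that the final Model is wired from the original input all the way through mixed7 into the new layers; everything after mixed7 in the original InceptionV3 is simply unused.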