C3_W1_Lab_1_transfer_learning_cats_dogs.ipynb - problem


I have a question regarding this code that I copied here. Can you tell me how to understand which layers to refer to as the last layer? How did you choose “mixed7” as the last layer?

import urllib.request
from tensorflow.keras.applications.inception_v3 import InceptionV3

weights_url = "https://storage.googleapis.com/mledu-datasets/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5"
weights_file = "inception_v3.h5"
urllib.request.urlretrieve(weights_url, weights_file)

# Instantiate the model (no top classifier; weights loaded separately below)
pre_trained_model = InceptionV3(input_shape=(150, 150, 3),
                                include_top=False,
                                weights=None)

# Load the pre-trained weights
pre_trained_model.load_weights(weights_file)

# Freeze the layers
for layer in pre_trained_model.layers:
    layer.trainable = False

last_layer = pre_trained_model.get_layer('mixed7')
print('last layer output shape: ', last_layer.output_shape)
last_output = last_layer.output

Hi @Mona_Esmaeili,

The question of how many layers to keep from the original model depends on what that model was trained to detect and how much of that makes sense for the model you want to create. There's no fixed rule about how many layers to keep, but the general concept is that the earlier layers distinguish coarser, more general features (edges, textures, simple shapes), while the later layers become more and more specific to the original training task. Depending on the situation, choosing the cutoff can take some guesswork and experimentation. This gets easier with practice and experience.
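One practical way to explore the candidate cutoffs is to instantiate the model and print the names and output shapes of its concatenation ("mixed") layers. This is just a sketch, assuming TensorFlow's bundled Keras; `weights=None` builds the architecture without downloading anything:

```python
# Sketch: list InceptionV3's "mixed" concatenation layers so you can see
# the candidate cutoff points for transfer learning.
from tensorflow.keras.applications.inception_v3 import InceptionV3

model = InceptionV3(input_shape=(150, 150, 3),
                    include_top=False,
                    weights=None)

# Earlier "mixed" blocks carry coarser, more general features;
# later ones are increasingly specific to Inception's original task.
for layer in model.layers:
    if layer.name.startswith("mixed"):
        print(layer.name, layer.output.shape)
```

Picking a different `mixed` layer here is exactly the experimentation knob: a later cutoff keeps more of Inception's specialization, an earlier one keeps only the generic features.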

In this particular case, the Inception model is an image recognition model that can distinguish all sorts of image types, so most of it is directly useful for our cat vs dog model. That's why we keep the layers of the Inception model up to and including "mixed7". Note that "mixed7" is not literally the network's final layer (InceptionV3 continues on through "mixed10"); it's simply the last layer we keep, cutting off the later, more specialized blocks. Then we add our own layers on top to specifically pick out cat vs dog.
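For context, the lab then attaches a small trainable head at "mixed7". A hedged sketch (the Flatten/Dense sizes and optimizer below are the usual choices for this lab, not guaranteed to match your notebook exactly; `weights=None` is used here only so the sketch runs without the weights file):

```python
# Sketch: truncate InceptionV3 at "mixed7" and add a new classification head.
import tensorflow as tf
from tensorflow.keras.applications.inception_v3 import InceptionV3

pre_trained_model = InceptionV3(input_shape=(150, 150, 3),
                                include_top=False,
                                weights=None)  # in the lab you'd load the .h5 weights here

# Freeze the pre-trained feature extractor.
for layer in pre_trained_model.layers:
    layer.trainable = False

# Take the output of the chosen cutoff layer.
last_output = pre_trained_model.get_layer('mixed7').output

# New trainable layers that learn the cat-vs-dog distinction.
x = tf.keras.layers.Flatten()(last_output)
x = tf.keras.layers.Dense(1024, activation='relu')(x)
x = tf.keras.layers.Dense(1, activation='sigmoid')(x)

model = tf.keras.Model(pre_trained_model.input, x)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```

Everything after "mixed8" in the original network is simply never connected to the new model, which is what "keeping the layers up to mixed7" means in practice.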