Include_top parameter and usage of custom model for Transfer Learning

Hello,

In week 3 we studied how we can use pretrained InceptionV3 model layers for transfer learning. The model can be easily imported using:
from tensorflow.keras.applications.inception_v3 import InceptionV3
But if we want to use a model that was custom-built by some other author and is not part of the tensorflow.keras.applications models, how do we import and use it for transfer learning?

Also, it's mentioned in the TensorFlow documentation that by specifying the include_top=False argument, you load a network that doesn't include the classification layers at the top, which is ideal for feature extraction. Could you please explain this?

tensorflow.keras.applications contains code to download a model and its weights if required. See this link on saving and loading model weights. As long as you have access to the model architecture and weights, you can make use of a 3rd-party model.

InceptionV3 was trained on ImageNet to classify images across 1000 classes. include_top=True downloads the model together with its pre-trained classification head. Since our custom dataset has different classes, we set include_top=False and attach our own head instead. You can learn more about transfer learning in the Deep Learning Specialization.
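To make this concrete, here is a minimal sketch of the difference (using weights=None so nothing is downloaded; in real transfer learning you would pass weights="imagenet"):

```python
from tensorflow.keras.applications.inception_v3 import InceptionV3

# weights=None builds the architecture without downloading the ImageNet
# weights; in practice you would pass weights="imagenet".
full = InceptionV3(weights=None, include_top=True)

# include_top=False drops the 1000-way classification head and returns
# only the convolutional feature extractor.
base = InceptionV3(weights=None, include_top=False,
                   input_shape=(150, 150, 3))

print(full.output_shape)  # (None, 1000): 1000-class predictions
print(base.output_shape)  # (None, 3, 3, 2048): a feature map, not classes
```

The second model ends in a spatial feature map rather than class scores, which is exactly what you want to feed into a new classifier of your own.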

Okay, so does that mean we use the feature extractor of the pretrained model but not the classifier, because we might want to train the model on different classes?

Your understanding is correct.

Hi @Prachi_Chaudhary

  1. If you want to use a custom-built model that is not part of the tensorflow.keras.applications models for transfer learning, you can import it using the load_model function from the tensorflow.keras.models module:

from tensorflow.keras.models import load_model

custom_model = load_model('path/to/custom_model.h5')

This loads the custom model from a file in HDF5 format, which is commonly used to save Keras models.
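Here is a self-contained sketch of the whole round trip. Since I don't have a real 3rd-party file at hand, it first saves a small stand-in model to custom_model.h5 (a hypothetical filename), then loads it back and reuses everything up to a chosen layer as a feature extractor:

```python
from tensorflow.keras import layers, models

# Stand-in for a 3rd-party model: a tiny classifier saved to disk.
# In practice you would receive the .h5 file from the model's author.
author_model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Conv2D(8, 3, activation="relu", name="conv"),
    layers.GlobalAveragePooling2D(name="features"),
    layers.Dense(10, activation="softmax", name="head"),
])
author_model.save("custom_model.h5")

# Load it back, exactly as you would with a real 3rd-party file.
custom_model = models.load_model("custom_model.h5")

# Reuse everything up to the "features" layer, dropping the author's head.
feature_extractor = models.Model(
    inputs=custom_model.input,
    outputs=custom_model.get_layer("features").output,
)
```

The layer name "features" is specific to this stand-in; for a real 3rd-party model you would inspect custom_model.summary() to pick the cut point.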

  2. The include_top=False argument when loading a model from the tensorflow.keras.applications module specifies whether to load the classification layers. When set to False, only the feature-extraction layers are loaded, i.e. everything before the final fully connected layers. This is useful for transfer learning because you can use the feature-extraction layers of the pre-trained model as a starting point and train new final layers for your specific task.

For example, when you load the InceptionV3 model with include_top=False, you get only the feature-extraction layers. You can then add your own fully connected layers on top and train them for your specific task.
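A minimal sketch of adding your own head (weights=None here so nothing is downloaded; the 5-class output is just an example for a hypothetical custom dataset):

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications.inception_v3 import InceptionV3

# The feature extractor only; pass weights="imagenet" in practice.
base = InceptionV3(weights=None, include_top=False,
                   input_shape=(150, 150, 3))
base.trainable = False  # freeze the pre-trained layers

# New classification head for our own task (e.g. 5 custom classes).
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(5, activation="softmax"),
])
```

Only the two new Dense layers are trained; the frozen base simply turns each image into a feature vector.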

In general, when using a pre-trained model for transfer learning, it's a good idea to remove the last few layers that served the original task and add new layers suited to the new task, because features useful for the original task may not be useful for the new one. By removing the old head and training a new one, you can fine-tune the model to the new task.

Regards
Muhammad John Abbas