Can I use a transfer learning model anywhere?

Suppose I download a ResNet model and use it with TensorFlow. Now I can exclude the top (output) layer, add my own output layer with 100 units related to my business (let's call it some image classification app), and train it on that. What are the chances it will work?

Also, is this process what we call fine-tuning?

Yes, this will work. You can reference the ResNet model available in TensorFlow Hub in your TensorFlow model, re-train the output layer, and keep the remaining layers fixed. You can also open up a few more layers below the output layer, keep the rest of the layers fixed, and re-train; this gives the new model a little more flexibility to adjust its parameters and get better tuned to your set of images.
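
To make that concrete, here is a minimal sketch in Keras, assuming a ResNet50 feature-vector module from TensorFlow Hub; the handle URL, image size, and dataset names (`train_ds`, `val_ds`) are only placeholders for illustration:

```python
import tensorflow as tf
import tensorflow_hub as hub

# Example TF Hub handle for a ResNet50 feature-vector module (swap in the module you actually use).
RESNET_HANDLE = "https://tfhub.dev/google/imagenet/resnet_v2_50/feature_vector/5"
NUM_CLASSES = 100  # the new, business-specific classes

model = tf.keras.Sequential([
    # Pre-trained backbone, kept frozen (trainable=False) so only the new head is trained.
    hub.KerasLayer(RESNET_HANDLE, trainable=False, input_shape=(224, 224, 3)),
    # New output layer replacing the original classification head.
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# model.fit(train_ds, validation_data=val_ds, epochs=5)  # train_ds / val_ds: your own image datasets
```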

Hello @tbhaxor, just to add a little to what @shanup has said: the process is called Transfer Learning. You freeze the feature-extraction layers of the pre-trained model and train only the classifier layers.

Okay, to confirm again: you mean “any” model trained on an image classification dataset will work for any image classification problem related to any business?

In my opinion it should not work well, because the feature extraction in the CNN layers (which are usually frozen) will be specific to the dataset on which the model was initially trained.

What does fine-tuning mean, then?

Fine tuning is well described in this thread. Please have a look, @tbhaxor.

Best regards
Christian

Let's take an example:

You have a pretrained model that was trained on animal images…dog, cat, lion, tiger, etc. If you take this model and use it to classify different breeds of cats, then the pretrained model can be fine-tuned by keeping all the layers fixed and training only the output layer. In some cases, we could think of opening up one or two of the fully connected layers preceding the output layer and training them as well (if required). This will not be an expensive or heavily time-consuming activity.

On the other hand, if you are going to use this pretrained model to classify buildings, which the model was never trained on, then you would have to unfreeze a lot more layers and retrain them. How many layers back you would need to open up is something that has to be found by experiment. However, the more layers you open up, the more time and cost the training will incur, and you might need to provide more training data.
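
As a hedged sketch of that second scenario, here is one way to open up the last few layers of a ResNet50 backbone in Keras (using `tf.keras.applications` rather than TF Hub so individual layers are accessible). The number of layers to unfreeze is a placeholder you would tune by experiment:

```python
import tensorflow as tf

NUM_CLASSES = 100

# Pre-trained ResNet50 backbone without its original ImageNet classification head.
base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      input_shape=(224, 224, 3), pooling="avg")

# Mark the backbone trainable, then freeze everything except the last N layers.
base.trainable = True
N_UNFREEZE = 20  # placeholder; how far back to open up is found by experiment
for layer in base.layers[:-N_UNFREEZE]:
    layer.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

# A lower learning rate helps avoid destroying the pre-trained weights.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```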

I agree with you, @tbhaxor.

Transfer Learning works well, e.g. in applications where:

  • you want to learn a transfer of style
  • the pre-trained model has learned (implicit) representative features which help in the fine-tuning; see also @shanup’s excellent example above!

In the end, a deep learning model can only learn what is contained in the data. If that data is sufficiently representative compared to the new, specialised task, transfer learning should be worth a look! Then you can leverage the pre-trained model (which is usually also super expensive and difficult to create in the first place).

Best regards
Christian

What I am taking away from this thread is:

  1. Fine-tuning is when you take a pre-trained model from a remote repository like TF Hub, remove the output layer, freeze the remaining parameters, attach a new output layer based on the project configuration (along with 2-3 dense layers if required), and train it. After training it will work on similar kinds of data only.
  2. Transfer learning is simply when you download and plug in a pre-trained model; fine-tuning is a special case of transfer learning where the new output layers are trained on some similar kind of dataset, not an entirely new set of data.
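
For what it's worth, a common two-phase recipe that matches this summary (reusing the hypothetical `base`, `model`, `train_ds`, and `val_ds` names from the sketches above) looks like:

```python
# Phase 1 (transfer learning): backbone frozen, train only the new output head.
base.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)

# Phase 2 (fine-tuning): unfreeze some or all backbone layers, re-compile with a
# lower learning rate, and continue training so the features adapt to the new data.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```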