What is the difference between pre-training and fine tuning in transfer learning?

Hi @Shivam_sharma_MED_07 ,

Regarding pre-trained models: you can find and download models from the internet that are ‘ready to use’ - these are pre-trained models. Using a pre-trained model saves a lot of time when it was trained for my exact objective. For example, if I want to classify cats in general, I can download a model pre-trained on cats, or even a model pre-trained on domestic animals, which include cats.
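
Just to make it concrete, here is a minimal sketch of downloading such a model in Keras - the choice of MobileNetV2 with ImageNet weights is only one possible example, not the only option:

```python
import tensorflow as tf

# Download a model whose weights were already learned on ImageNet.
# The download happens automatically the first time this runs.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Inspect the architecture we just got 'ready to use'.
model.summary()
```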

Now, sometimes my need is more specific and I cannot find a pre-trained model that can do it. For example, let’s say I want a model to classify “yellow toy poodles walking on the beach on a rainy day”. Very specific, right? For this case I have 2 options: I can either build a model from scratch, or I can download a model pre-trained on cats or domestic animals, like those mentioned above. Let’s say I have about 200 pictures of “yellow toy poodles walking on the beach on a rainy day”. What I can do is take one of the pre-trained models, ‘cut’ the last layer(s), and add some new layers on top. I then train only the layers I just added and keep the previous layers ‘fixed’ (Keras offers a trainable property to freeze training per layer). This is fine-tuning the model to my very specific needs. Note that the weights of the original layers remain the same, and only the new layers are affected by back-propagation.
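
Here is a rough sketch of that fine-tuning recipe in Keras - MobileNetV2, the 224x224 input size, and the single sigmoid output (poodle / not poodle) are just assumptions for illustration:

```python
import tensorflow as tf

# Load the pre-trained model without its final classification layer
# ('cut' the last layers) by passing include_top=False.
base_model = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)

# Freeze the pre-trained layers so back-propagation leaves them untouched.
base_model.trainable = False

# Add new layers on top for the very specific task.
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, epochs=5)  # trains only the newly added layers
```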

And then we have re-training. In this case, I can download one of the models from the internet and run the training with my own training set, allowing all layers to be updated in back-propagation. Here, your weights start from values that have already learned a lot about general features, and this can be useful given that you may have only a small number of samples for your new objective of “yellow toy poodles walking on the beach on a rainy day”.
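
Re-training could look something like this sketch (again, the architecture and the 1e-5 learning rate are just assumptions):

```python
import tensorflow as tf

# Same idea as the fine-tuning sketch, but nothing is frozen:
# every layer's weights will be updated by back-propagation.
base_model = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)
base_model.trainable = True  # all pre-trained layers stay trainable

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# A low learning rate is typical here, so the pre-trained weights are
# nudged rather than destroyed.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, epochs=5)  # now all layers are updated
```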

What do you think?

Juan