When I use a pre-trained neural network (as a transfer learning approach) and freeze all the hidden layers, keeping their weights, how can it classify my dataset, given that it was trained on a different dataset?
Thanks a lot
The model used for transfer learning acts as a feature extractor once its output layer is removed. This works because it was trained on images as well. The more layers you unfreeze, the more the model gets to learn from your dataset too.
The only time transfer learning is useless is when the domain is completely different. For instance, you cannot use a model trained to predict house prices from square footage to classify images.
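Here's a minimal NumPy sketch of the feature-extractor idea. The "pre-trained base" is just a frozen random projection standing in for a real pre-trained network, and all names (`W_base`, `W_head`, etc.) are hypothetical; only the new head is trained on the new data, while the base weights never change:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pre-trained" base: a toy stand-in for a real network's layers.
W_base = rng.normal(size=(4, 8))      # frozen weights, never updated
W_base_initial = W_base.copy()        # snapshot to verify it stays frozen

def extract_features(X):
    # The frozen base maps raw inputs to a feature representation.
    return np.maximum(X @ W_base, 0.0)  # ReLU features

# New trainable head: the only part that learns on *your* dataset.
W_head = np.zeros((8, 1))

# Tiny synthetic dataset: label = whether the first input is positive.
X = rng.normal(size=(64, 4))
y = (X[:, :1] > 0).astype(float)

lr = 0.1
for _ in range(200):
    feats = extract_features(X)
    p = 1.0 / (1.0 + np.exp(-(feats @ W_head)))  # sigmoid head
    grad = feats.T @ (p - y) / len(X)            # logistic-loss gradient
    W_head -= lr * grad                          # only the head updates

preds = 1.0 / (1.0 + np.exp(-(extract_features(X) @ W_head))) > 0.5
acc = np.mean(preds == (y > 0.5))
print(f"train accuracy: {acc:.2f}")
```

The base weights are untouched throughout, yet the head can still fit the new task because the frozen features carry enough information about the inputs.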
So, just to make sure I got it: in my example, my dataset will just be a test set, and the neural network will not be trained on it.
Right?
The layers that are not frozen can still be trained. The base model acts as a feature extractor feeding your custom layer(s).
When doing transfer learning, you start by freezing all the layers of the base model and training only the additional layers you add. Then, depending on the performance you need (say, accuracy) and your ability to fine-tune the base model, you gradually unfreeze layers, starting from the ones closest to the output and working back toward the earlier layers.
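The two-phase recipe above (train the head with everything frozen, then unfreeze from the top of the base) can be sketched with a toy two-layer NumPy "base". This is an illustrative stand-in, not a real pre-trained model; `W1`, `W2`, and `W_head` are hypothetical names:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-layer "base" standing in for a pre-trained network.
W1 = rng.normal(size=(4, 8))   # earlier base layer (stays frozen)
W2 = rng.normal(size=(8, 8))   # top base layer (unfrozen in phase 2)
W_head = np.zeros((8, 1))      # new head for the new task

X = rng.normal(size=(64, 4))
y = (X[:, :1] > 0).astype(float)

def forward(X):
    h1 = np.maximum(X @ W1, 0.0)
    h2 = np.maximum(h1 @ W2, 0.0)
    p = 1.0 / (1.0 + np.exp(-(h2 @ W_head)))
    return h1, h2, p

W1_before, W2_before = W1.copy(), W2.copy()
lr = 0.05

# Phase 1: whole base frozen; train only the head.
for _ in range(100):
    h1, h2, p = forward(X)
    err = (p - y) / len(X)
    W_head -= lr * (h2.T @ err)

# Phase 2: unfreeze the top base layer W2 and fine-tune it together
# with the head; the earlier layer W1 stays frozen.
for _ in range(100):
    h1, h2, p = forward(X)
    err = (p - y) / len(X)
    d_h2 = (err @ W_head.T) * (h2 > 0)   # backprop through the ReLU
    W2 -= lr * (h1.T @ d_h2)
    W_head -= lr * (h2.T @ err)

print("W1 unchanged:", np.array_equal(W1, W1_before))
print("W2 changed:", not np.array_equal(W2, W2_before))
```

After both phases, the earlier layer's weights are identical to their pre-trained values, while the unfrozen top layer has adapted to the new dataset, which mirrors the "unfreeze point" idea: the further down you move it, the more of the base adapts.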
When you work through the Transfer Learning assignment using MobileNet to recognize alpacas, they'll walk you through the points Balaji explains here and show how it all works. They even show you how to vary the "unfreeze point," as Balaji described, to optimize the results.