Can I use the same pre-trained model to recognize other objects?
Is a pre-trained model actually just information about the weights?
I tried the following layers:
mixed7 - accuracy 90%-96% over epochs 1-20
mixed9 - accuracy 90%-96% over epochs 1-20
mixed5 - accuracy 70%-92% over epochs 1-20
Am I correct that it is best to use the last mixed layer? What is the rule for choosing a layer?
Hi Taras,
‘Is a pre-trained model actually just information about the weights?’ - yes, that’s correct.
The answer to the remaining questions is, of course, ‘it depends’. It depends on what the model was pre-trained on and what other objects you are trying to recognize.
If there is a good overlap, i.e. the datasets are similar, then it’s probably a good idea. To give some examples: if the model was previously trained on images of cats and dogs and you use it to recognize airplanes, then it’s probably not a good idea. However, the following pairings will probably make sense:
- pre-trained on ImageNet, new objects: almost anything that overlaps with the ImageNet categories;
- pre-trained on airplanes, new objects: ships - this will probably work to some extent, since certain characteristics such as windows and edges overlap;
- pre-trained on sound, new objects: gravitational waves - this will probably work to an extent, as the base features are likely to overlap.
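To make that concrete, here is a minimal sketch of reusing a pre-trained model for a new set of objects. I’m assuming the Keras InceptionV3 model here (that’s where the mixed5/mixed7/mixed9 layer names come from); the input size and the new head are just placeholders, not your exact setup:

```python
import tensorflow as tf

# Assumption: Keras InceptionV3 pre-trained on ImageNet; 150x150 input and the
# small head below are placeholders for your actual setup.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights='imagenet', input_shape=(150, 150, 3))
base.trainable = False  # keep the pre-trained weights as a fixed feature extractor

# Replace the original classifier with a new head for the new objects.
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
x = tf.keras.layers.Dense(256, activation='relu')(x)
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(x)  # e.g. a binary classifier

model = tf.keras.Model(base.input, outputs)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```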
Transfer learning will not work at all if the features learnt by the base layers add no value for the new problem.
This paper goes into depth on this topic.
Which layer to choose depends on the overlap between the dataset the model was pre-trained on and the new target dataset. If the overlap is small, then only the weights of the initial layers are useful; if the overlap is strong, then the weights of more layers can be reused (kept frozen).
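A rough sketch of that rule, again assuming the Keras InceptionV3 model; the cut-off layer names are only illustrative, not a recommendation for your dataset:

```python
import tensorflow as tf

# Assumption: Keras InceptionV3; the reused layers stay frozen, and the cut-off
# layer controls how many of the pre-trained layers are reused.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights='imagenet', input_shape=(150, 150, 3))
base.trainable = False

cut_layer = 'mixed9'    # strong overlap: cut deep, so more frozen layers are reused
# cut_layer = 'mixed5'  # small overlap: cut early, keeping only the generic features

features = base.get_layer(cut_layer).output
x = tf.keras.layers.GlobalAveragePooling2D()(features)
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(x)  # placeholder head

model = tf.keras.Model(base.input, outputs)
```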
Hope this helps.