I do not understand why they add more epochs when fine-tuning, or why the printout starts only from the 4th epoch. In the plots they also mark the fourth epoch in green. Could someone explain this to me, please?
When fine-tuning, we train for additional epochs so that the layers we want to train can better adapt to our dataset. Please also read the markdown cell 3.3 - Fine-tuning the Model.
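As a standalone illustration (a toy model, nothing to do with the course's notebook), the behavior you noticed comes from Keras's `initial_epoch` argument: the second training phase resumes its epoch numbering where the first phase stopped, which is why the fine-tuning printout and the plotted curves start at the boundary epoch rather than at 0. All sizes and epoch counts below are made up for the sketch:

```python
import numpy as np
import tensorflow as tf

# Toy data and model, just to show the two-phase epoch numbering.
x = np.random.rand(32, 4).astype("float32")
y = np.random.randint(0, 2, size=(32, 1))

model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy")

initial_epochs = 5
fine_tune_epochs = 5

# Phase 1: initial training.
history = model.fit(x, y, epochs=initial_epochs, verbose=0)

# Phase 2: continue training (in the real notebook, after unfreezing layers).
# initial_epoch makes the epoch counter resume at the last completed epoch,
# so the history, printout, and plots pick up at the phase boundary.
history_fine = model.fit(
    x, y,
    epochs=initial_epochs + fine_tune_epochs,
    initial_epoch=history.epoch[-1],
    verbose=0,
)
print(history.epoch)       # [0, 1, 2, 3, 4]
print(history_fine.epoch)  # [4, 5, 6, 7, 8, 9]
```

So the green marker in the plots is simply the epoch where phase 2 took over from phase 1.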
As far as `model2.layers[4]` is concerned, look at the way the model is constructed inside `def alpha_model`:
- `InputLayer` (index 0)
- data augmentation, which is a Sequential model consisting of 2 layers (index 1)
- `preprocess_input`, which adds 2 lambda layers (indices 2 and 3)
- the actual MobileNet model, which we refer to as the `base_model`. This is represented as a `Functional` layer in the model summary, and it can be accessed via index 4.
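To make the indexing concrete, here is a minimal sketch of how such a model could be built (my own reconstruction, not the notebook's exact code; the input size, augmentation layers, and `weights=None` are assumptions for the sketch, where the course loads `weights='imagenet'`). In the course's TF 2.x environment the two arithmetic ops inside `preprocess_input` show up as lambda layers, which pushes the base model to index 4; newer Keras versions may count those ops differently:

```python
import tensorflow as tf

IMG_SIZE = (160, 160)  # assumed input size for this sketch

def alpha_model():
    inputs = tf.keras.Input(shape=IMG_SIZE + (3,))          # InputLayer

    # Data augmentation: a Sequential model of 2 layers.
    data_augmentation = tf.keras.Sequential([
        tf.keras.layers.RandomFlip("horizontal"),
        tf.keras.layers.RandomRotation(0.2),
    ])
    x = data_augmentation(inputs)

    # Rescales pixels to [-1, 1]; its two arithmetic ops appear
    # as lambda layers in the summary under TF 2.x.
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x)

    # The pretrained backbone (weights=None here only to avoid the
    # download; the course uses weights="imagenet"). It appears as a
    # single Functional layer in the summary.
    base_model = tf.keras.applications.MobileNetV2(
        input_shape=IMG_SIZE + (3,), include_top=False, weights=None)
    base_model.trainable = False
    x = base_model(x, training=False)

    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(1)(x)
    return tf.keras.Model(inputs, outputs)

model2 = alpha_model()
for i, layer in enumerate(model2.layers):
    print(i, layer.name)
```

Printing the enumerated layer names lets you verify yourself which index the `Functional` MobileNet layer landed on in your environment.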
Hope this helps.