Transfer learning fine-tuning

I do not understand why they add more epochs when fine-tuning, and why the printout starts only from the 4th epoch. In the plots they also mark the fourth epoch in green. Could someone explain this to me, please?

When fine-tuning, we train for additional epochs so that the layers we unfreeze can better adapt to our dataset. Please also read markdown section 3.3 - Fine-tuning the Model.
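
To make the epoch numbering concrete: if the notebook follows the usual Keras two-phase pattern (as in the TensorFlow transfer learning tutorial), the second fit call passes initial_epoch=history.epoch[-1], so the fine-tuning log resumes where feature extraction stopped. Here is a minimal runnable sketch; the tiny model and random data are placeholders, not the assignment's code:

```python
import tensorflow as tf

# Toy stand-ins for the assignment's dataset and model, just to show the
# epoch numbering; the real model2 is the MobileNetV2-based model.
x = tf.random.uniform((8, 4))
y = tf.random.uniform((8, 1))
train_dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(4)

model2 = tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                              tf.keras.layers.Dense(1)])
model2.compile(optimizer="adam", loss="mse")

# Phase 1: feature extraction, training the new head for a few epochs.
initial_epochs = 5
history = model2.fit(train_dataset, epochs=initial_epochs)

# Phase 2: fine-tuning. Passing initial_epoch makes Keras resume the epoch
# counter where phase 1 stopped: history.epoch[-1] is 4 (epochs are 0-based),
# so the log prints "Epoch 5/10" onwards, and both phases share one epoch
# axis when plotted, with the phase boundary at epoch 4 (the green line).
fine_tune_epochs = 5
total_epochs = initial_epochs + fine_tune_epochs
history_fine = model2.fit(train_dataset,
                          epochs=total_epochs,
                          initial_epoch=history.epoch[-1])
```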

As far as model2.layers[4] is concerned, look at the way the model is constructed inside def alpha_model (see the sketch after this list):

  1. An InputLayer.
  2. A data augmentation step, which is a Sequential model consisting of 2 layers.
  3. preprocess_input, which adds 2 lambda layers.
  4. The actual MobileNet model, which we refer to as base_model. It appears as a Functional layer in the model summary. Counting from 0, the items above occupy indices 0 to 3, so the base model is accessed via index 4.

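Here is a minimal sketch of a model built with that structure, assuming the standard pieces from the TensorFlow tutorial this assignment is based on (MobileNetV2, a 2-layer augmentation Sequential); the real alpha_model may differ in detail, but the layer indexing works out the same way:

```python
import tensorflow as tf

IMG_SHAPE = (160, 160, 3)  # image size assumed for illustration

# layers[0]: InputLayer
inputs = tf.keras.Input(shape=IMG_SHAPE)

# layers[1]: data augmentation, a Sequential model of 2 layers
data_augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.2),
])
x = data_augmentation(inputs)

# layers[2] and layers[3]: preprocess_input rescales pixels to [-1, 1] via
# a divide and a subtract, which show up as two lambda (TFOpLambda) layers
x = tf.keras.applications.mobilenet_v2.preprocess_input(x)

# layers[4]: the MobileNetV2 base model, shown as "Functional" in summary()
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights="imagenet")
base_model.trainable = False
x = base_model(x, training=False)

x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1)(x)
model2 = tf.keras.Model(inputs, outputs)

model2.summary()
print(model2.layers[4].name)  # the MobileNetV2 base, hence model2.layers[4]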
Hope this helps.