Fine-tuning and further training of a pretrained model when the batchnorm layers are frozen

Sequence Models course, Week 3, the last assignment: Trigger_word_detection_v2a

When we fit the model (see the relevant section of the notebook), it says that when we fine-tune a pretrained model, we freeze the weights of the batchnorm layers so that they are no longer trainable. It then says that we can continue training the model with the Adam optimizer and binary cross-entropy loss.
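
For reference, the step I am asking about looks roughly like the sketch below. This is a minimal, self-contained stand-in with a toy architecture and random data, not the exact assignment code; the real model, layer sizes, and hyperparameters differ.

```python
import numpy as np
from tensorflow.keras.layers import (Activation, BatchNormalization, Conv1D,
                                     Dense, GRU, Input, TimeDistributed)
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

# Stand-in architecture in the spirit of the trigger-word model:
# Conv1D -> BatchNorm -> GRU -> BatchNorm -> time-distributed sigmoid output.
inputs = Input(shape=(1375, 101))
x = Conv1D(32, kernel_size=15, strides=4)(inputs)
x = BatchNormalization()(x)
x = Activation("relu")(x)
x = GRU(32, return_sequences=True)(x)
x = BatchNormalization()(x)
outputs = TimeDistributed(Dense(1, activation="sigmoid"))(x)
model = Model(inputs, outputs)

# Freeze ("block") only the batchnorm layers; every other layer stays trainable.
for layer in model.layers:
    if isinstance(layer, BatchNormalization):
        layer.trainable = False

# Continue training with Adam and binary cross-entropy, as described.
model.compile(optimizer=Adam(learning_rate=1e-6),
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Random data just to make the sketch runnable end to end.
X = np.random.rand(4, 1375, 101).astype("float32")
Y = np.random.randint(0, 2, size=(4, 341, 1)).astype("float32")
model.fit(X, Y, batch_size=2, epochs=1)
```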

My question is: are fine-tuning and continuing to train a pretrained model considered the same thing here, or in general? Since we use a pretrained model whose batchnorm layers are not trainable (because we froze them), how is the model still able to train?

Thank you very much

Best

Yes.

Typically you’ll make the early layers not trainable and only fine-tune the final layers. Freezing the batchnorm layers only removes those layers’ parameters from the trainable set; the remaining layers still have trainable weights, so the optimizer can keep updating them.
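
As a rough, self-contained illustration (not the assignment code): the sketch below freezes an early layer and a batchnorm layer while leaving the output head trainable, then prints which layers still contribute trainable weights.

```python
from tensorflow.keras.layers import BatchNormalization, Dense, Input
from tensorflow.keras.models import Model

# Tiny illustrative model: an "early" dense layer, a batchnorm layer, and a head.
inputs = Input(shape=(16,))
x = Dense(32, activation="relu", name="early_dense")(inputs)
x = BatchNormalization(name="bn")(x)
outputs = Dense(1, activation="sigmoid", name="head")(x)
model = Model(inputs, outputs)

# Freeze the early layer and the batchnorm layer; the head stays trainable.
model.get_layer("early_dense").trainable = False
model.get_layer("bn").trainable = False

model.compile(optimizer="adam", loss="binary_crossentropy")

# Only the frozen layers drop out of the trainable set; the optimizer still
# has the head's weights to update, so model.fit() continues to train.
for layer in model.layers:
    print(layer.name, "trainable:", layer.trainable)
print("trainable weight tensors:", len(model.trainable_weights))
```

Running this shows the frozen layers reporting `trainable: False` while the head still contributes trainable weight tensors, which is why gradient descent still has something to update.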