W3A2 - Trigger Word Detection - Block Training for BatchNormalization Layers

In the Trigger Word Detection assignment, Section 2.2.1 "Block Training for BatchNormalization Layers" says: "If you are going to fine-tune a pretrained model, it is important that you block the weights of all your batchnormalization layers."

Correct me if I am wrong, but "blocking the weights of all your batch normalization layers" means making all the batch normalization layers' weights NON-trainable, right? Why do we want to do that when we are fine-tuning?
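
For context, here is a minimal sketch of what I understand "blocking" to mean in Keras (the model path and compile settings are just placeholders, not the assignment's actual values):

```python
import tensorflow as tf
from tensorflow.keras.layers import BatchNormalization

# Hypothetical pretrained model loaded for fine-tuning (placeholder path)
model = tf.keras.models.load_model("pretrained_model.h5")

# "Block" the batch norm layers: mark them non-trainable so their
# gamma/beta weights are not updated during fine-tuning (in TF 2.x this
# also runs them in inference mode, freezing the moving mean/variance).
for layer in model.layers:
    if isinstance(layer, BatchNormalization):
        layer.trainable = False

# Recompile after changing the trainable flags so the change takes effect
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

Is that the intended approach, and if so, why is it important for fine-tuning?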
