Transfer learning in Week 2

In the lecture on transfer learning, Andrew says:

“This is some fixed function that doesn’t change, because you are not training it, which takes the input image X and maps it to some set of activations in that layer.”

He then says that one way to speed up training is to precompute the features of that layer and save them to disk.

Can somebody explain what that means? I am unable to understand it.

I think he is talking about the case where you start with a pretrained network and then either add some new layers to the end to adapt it to your particular problem, or “freeze” the first n layers of the network. Either way, you only want to train part of the network: the new layers in the first case, or the layers past the “freeze” point in the second.

You can then make training more efficient by not running forward propagation through the “frozen” part of the network on every training iteration. If it’s “frozen”, it doesn’t change, right? So why bother running the forward prop through it every iteration? You can compute the output activations of the “frozen” section once, save them, and use them as the fixed input to the part you are actually training (changing).
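Here is a minimal sketch of that idea in Keras. The specifics (a pretrained MobileNetV2 base, a small dense head, random stand-in data, the file name `frozen_features.npy`) are not from the lecture; they are just my assumptions for illustration:

```python
import numpy as np
import tensorflow as tf

# 1) The "frozen" part: a pretrained base whose weights we will not update.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, pooling="avg", weights="imagenet"
)
base.trainable = False

# Stand-in data in place of a real dataset (32 images, 5 classes).
X_train = np.random.rand(32, 224, 224, 3).astype("float32")
y_train = np.random.randint(0, 5, size=(32,))

# 2) Run forward prop through the frozen base ONCE and save the activations.
#    Because the base never changes, these features never change either.
features = base.predict(X_train, batch_size=8)
np.save("frozen_features.npy", features)  # reuse across epochs / experiments

# 3) Train only the new layers, using the cached features as fixed inputs.
features = np.load("frozen_features.npy")
head = tf.keras.Sequential([
    tf.keras.Input(shape=features.shape[1:]),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(5, activation="softmax"),
])
head.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
head.fit(features, y_train, epochs=5, batch_size=8)
```

The point is that `base.predict` runs the expensive forward pass through the frozen layers exactly once, while `head.fit` only ever touches the small new layers on every epoch.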