Yes, it’s what you said in the last sentence. They discuss this in Section 3 of the notebook, which is titled “Transfer Learning”. We start with the weights of VGG, an image recognition (classification) network that was trained on ImageNet. But then we use those weights for something different from what they were originally intended for: we use the activations of the inner layers to extract features of both the content and style images and then merge them. We define a new cost function that relates the content image and the style image to the “generated” image we want to produce. Then we train further, with backpropagation driven by that completely new cost function, which expresses our “style transfer” purpose.
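To make that concrete, here is a minimal NumPy sketch of the kind of cost function described above, written from scratch rather than copied from the notebook, so the function names and the `alpha`/`beta` weights are my own illustrative choices. The content cost compares raw activations of one hidden layer, while the style cost compares Gram matrices (channel-wise correlations) of the activations:

```python
import numpy as np

def content_cost(a_C, a_G):
    # a_C, a_G: activations of shape (n_H, n_W, n_C) from one chosen
    # hidden layer, for the content image and the generated image
    n_H, n_W, n_C = a_C.shape
    return np.sum((a_C - a_G) ** 2) / (4 * n_H * n_W * n_C)

def gram_matrix(a):
    # Unroll activations to (n_C, n_H * n_W); the Gram matrix captures
    # which channels tend to activate together, i.e. the "style"
    n_H, n_W, n_C = a.shape
    A = a.reshape(n_H * n_W, n_C).T
    return A @ A.T

def style_cost_layer(a_S, a_G):
    # Compare the Gram matrices of the style image and the generated image
    n_H, n_W, n_C = a_S.shape
    G_S, G_G = gram_matrix(a_S), gram_matrix(a_G)
    return np.sum((G_S - G_G) ** 2) / (4 * (n_C * n_H * n_W) ** 2)

def total_cost(J_content, J_style, alpha=10, beta=40):
    # Weighted sum; alpha/beta trade off content fidelity vs. style
    return alpha * J_content + beta * J_style
```

In the actual notebook these activations come from forward passes through the frozen VGG layers, the style cost is averaged over several layers, and gradient descent updates the pixels of the generated image rather than the network weights.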