Hi! Can somebody explain why generated_image is initialized a second time after the train_step function? Basically, all the noise added earlier is discarded, and the generated image starts out identical to the content image. Am I missing something? Thanks!
Instead of being randomized or made a noisy version of the content image, generated_image is here initialized directly to the content image. As training proceeds, style elements are introduced (to minimize the weighted sum of the style cost and the content cost), so generated_image gradually diverges from the content image, as the resulting pictures demonstrate. So this approach can also be used.
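To make this concrete, here is a toy sketch (not the course code) of the same idea: the optimization starts from the content image with no added noise, and gradient descent on a combined content + style objective still pulls the image away from its starting point. The arrays and the quadratic costs are stand-ins for the real image tensors and cost functions.

```python
import numpy as np

# Toy stand-ins: a tiny "image" and a hypothetical style target.
content = np.array([1.0, 2.0, 3.0])
style_target = np.array([3.0, 0.0, 1.0])

def grad(g, alpha=10.0, beta=40.0):
    # Gradient of J(g) = alpha * ||g - content||^2 + beta * ||g - style_target||^2,
    # a simplified analogue of the content cost plus the style cost.
    return 2 * alpha * (g - content) + 2 * beta * (g - style_target)

# Initialized to the content image itself -- no noise added.
generated = content.copy()

for _ in range(200):
    generated -= 0.001 * grad(generated)

# The minimizer is a weighted blend of the two targets, so even with a
# noise-free start the image moves away from the content image.
print(np.allclose(generated, content))  # False
```

Adding noise to the initial image mainly speeds up and diversifies the search; starting from the content image just means the very first iterations reproduce the content exactly before the style term takes effect.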