About convolutional network week 4 - Neural Style Transfer, assignment

I don’t understand what you mean by that. The generated image is an output, right? It’s not the input to anything. Do you mean you started with a noisy version of the Content Image?
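In case the question is about initializing the generated image as a noisy version of the content image, here is a minimal plain-Python sketch of that idea. The `noise_ratio` value and the noise range are illustrative assumptions, not necessarily what the notebook uses:

```python
import random

def init_generated_image(content, noise_ratio=0.6, seed=0):
    """Blend uniform noise with the content pixels to get the starting
    'generated' image. noise_ratio and the (-20, 20) range are
    illustrative assumptions, not necessarily the notebook's values."""
    rng = random.Random(seed)
    return [noise_ratio * rng.uniform(-20.0, 20.0) + (1.0 - noise_ratio) * p
            for p in content]

content = [0.0] * 12            # stand-in for flattened content-image pixels
generated = init_generated_image(content)
print(len(generated) == len(content))  # True: same size as the content image
```

The point is just that the optimization starts *near* the content image, not at random, which is why early iterations still look recognizably like the content.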

The first thing to check for is “versionitis” problems. If you are taking the code from any of the course notebooks (especially those using TF/Keras) and running it in your own environment or on Colab, there is a whole minefield of version incompatibilities to step on. The course material uses versions of TF and the various packages that are more than a year old at this point, and things change in this space pretty quickly. Here’s a thread which discusses some ways to duplicate the environment.

The other general point here is that the way this algorithm works, the more iterations you run, the further you get from the original content image, right? So I wouldn’t expect it to “converge” back toward the original content image if you run more iterations: if it’s already weird, running more iterations will make it weirder. This “style transfer” assignment is different from most of the others in that there is no defined “correct answer” that we’re optimizing toward. It’s all a matter of aesthetics: what do you think looks cool? And the answer may be different for different people.
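To make that concrete, here’s a toy sketch in plain Python. The squared-distance “costs” and the weights are made up stand-ins for the VGG-based content and style costs in the assignment, but they show the same dynamic: each iteration moves the generated image further from the content image, toward whatever trade-off the content/style weights define.

```python
# Toy stand-in for the NST objective J = alpha*J_content + beta*J_style.
# In the assignment these costs come from VGG activations; here, simple
# squared distances on a single number illustrate the same dynamic.
content = 0.0
style_target = 10.0              # hypothetical "style optimum"
alpha, beta = 1.0, 4.0           # illustrative weights, not the course's values

g = content                      # generated image starts at the content image
dists = []
for _ in range(50):
    grad = 2 * alpha * (g - content) + 2 * beta * (g - style_target)
    g -= 0.01 * grad
    dists.append(abs(g - content))

# Each step moves g further from the content image, toward the trade-off
# point (alpha*content + beta*style_target) / (alpha + beta) = 8 here.
print(dists[0] < dists[-1])      # True
```

So the optimization does converge, but to a content/style compromise, never back to the original content image.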

The other high-level point is that the results may differ a lot depending on whether you are training from scratch every time or continuing the training from the previous run. Of course, “from scratch” in this context means using the pre-trained VGG weights as the starting point.
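As a toy illustration of that difference (plain Python; the loss, learning rate, and step counts are all made up): re-running “from scratch” reproduces the same result, while “continuing” picks up where the last run stopped and effectively doubles the iteration count.

```python
def run_training(g, steps, lr=0.1, target=10.0):
    """One toy 'training run': gradient steps toward a hypothetical optimum.
    Stands in for a real NST session; the numbers are illustrative."""
    for _ in range(steps):
        g = g - lr * 2 * (g - target)
    return g

content = 0.0

# "From scratch": every run restarts from the content image, so repeated
# runs give the same result.
run1 = run_training(content, steps=20)
run2 = run_training(content, steps=20)

# "Continuing": the second run starts where the first stopped, so two
# 20-step runs behave like one 40-step run -- a further-evolved image.
continued = run_training(run1, steps=20)
print(run1 == run2)              # True
```

That’s why it matters whether each attempt re-initializes the generated image or keeps optimizing the previous one.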

None of the above is really an “answer”, but just some suggestions for further conversation/investigation.