Why does neural style transfer take so long to generate an image?

Hello, I am currently working on the Week 1 assignment of Course 4, Generative Deep Learning with TensorFlow.

Although I did not run into any issues during the assignment, neural style transfer took more than 30 minutes to generate an image. Only 10 epochs were assigned, each with 100 steps. Why does it take so long to generate a style-transferred image? What are the various factors contributing to the long runtime? Could it be the learning rate? As I remember, we used the Adam optimizer, which converges faster than other optimizers. Could the weights also be an issue causing the transfer to run slowly?

My question might sound dumb, lame, or idiotic, but I have this doubt, so I am asking :slight_smile:

Thank You

It is mainly because of the complexity of the tasks the model has to perform and the complex loss calculations!
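To make "complex loss calculations" concrete: the style part of the loss is usually computed from Gram matrices of feature maps at several layers of the network, and this is recomputed at every training step. Here is a minimal NumPy sketch of that idea; the feature-map shapes are made up for illustration, and the real assignment computes these on CNN activations, not raw arrays:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (height, width, channels) feature map:
    channel-by-channel correlations, averaged over spatial positions."""
    h, w, c = features.shape
    flat = features.reshape(h * w, c)   # flatten spatial dimensions
    return flat.T @ flat / (h * w)      # (c, c) matrix

def style_loss(gen_features, style_features):
    """Mean squared difference between the two Gram matrices."""
    g_gen = gram_matrix(gen_features)
    g_style = gram_matrix(style_features)
    return float(np.mean((g_gen - g_style) ** 2))

rng = np.random.default_rng(0)
gen = rng.normal(size=(32, 32, 64))     # hypothetical generated-image features
sty = rng.normal(size=(32, 32, 64))     # hypothetical style-image features
loss = style_loss(gen, sty)
```

Every step of the optimisation repeats this at multiple layers, on top of a full forward pass through the CNN, which is where much of the time goes.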


This is not a dumb question :slight_smile: It’s actually a difficult question to answer.

The performance is affected by several factors:

1- We are using the free version of Colab, with only one T4 GPU.
2- The size of the image can matter a lot.
3- The size of the model: InceptionV3 has about 24 million parameters.
4- Even the complexity of the style can matter, because a complex style may need more iterations to achieve the desired result.
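On point 2, a quick back-of-the-envelope calculation shows why image size matters so much: the multiply-adds in a convolution layer scale with the number of pixels, so doubling each side of the image roughly quadruples the cost. The layer shapes below are illustrative, not the actual InceptionV3 ones:

```python
def conv_mult_adds(h, w, c_in, c_out, k=3):
    """Multiply-adds for one k x k convolution with 'same' padding
    on an (h, w, c_in) input producing c_out channels."""
    return h * w * c_in * c_out * k * k

small = conv_mult_adds(224, 224, 64, 64)  # a 224x224 input
large = conv_mult_adds(448, 448, 64, 64)  # same layer, doubled sides

print(large / small)  # -> 4.0, cost grows with the pixel count
```

And that is just one layer of one forward pass; style transfer runs many forward and backward passes per generated image.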

Working with images in deep learning is expensive, both in time and in GPU consumption ;-).

P.S. Maybe the next revolution will be quantum computing.


Thank you for replying. From what I know, quantum algorithms work by using interference and superposition to amplify the probability of measuring the expected output from your qubits. Do you really think it can replace GPUs, especially for training complex models?

Thank you for replying. So basically, neural style transfer is done to extract and transfer features between different images; it essentially restyles the image.

In model training, when convolution layers are added, the spatial dimensions are reduced by cropping or by max pooling, and the output then passes through further layers such as batch normalisation and dropout to get an accurate model. Why can’t rescaling replace neural style transfer, then?

Neural style transfer creates a new image by combining the style and the content from two different sources; that can’t be done by rescaling alone.
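To see why it is an optimisation rather than a resizing operation: NST repeatedly updates the pixels of a new image by gradient descent on a weighted content-plus-style objective. Here is a stripped-down NumPy toy using raw pixels and plain MSE terms; in the real algorithm both terms are computed on CNN feature maps and the gradient is backpropagated through the pretrained network:

```python
import numpy as np

rng = np.random.default_rng(1)
content = rng.uniform(size=(8, 8))  # stand-in "content" image
style = rng.uniform(size=(8, 8))    # stand-in "style" image
img = content.copy()                # start from the content image

alpha, beta, lr = 1.0, 0.5, 0.1     # content weight, style weight, step size
for _ in range(200):
    # gradients of the two MSE terms with respect to the image pixels
    grad = alpha * 2 * (img - content) + beta * 2 * (img - style)
    img -= lr * grad                # gradient descent on the pixels themselves

# img converges to a weighted blend of the two sources,
# here (2 * content + style) / 3 given alpha=1.0, beta=0.5
```

The expensive part in the real notebook is that each of those gradient steps requires a full forward and backward pass through the CNN, which rescaling never does.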



I was just joking. I think quantum computing is not an alternative for us at the moment, and I have no experience with it.

If we want something better than a GPU, we can use TPUs on Colab or Kaggle. I used one to train a GAN and the results were good.

If you want, you can give it a try and adapt it for the style transfer notebook.


I am a bit slow at catching jokes :joy: I thought you were seriously suggesting it. I will surely give this a try.

Thank You