I tried to run the “neural style transfer” exercise locally on my machine, and the time it took to generate the output is much longer than running the same code on the Coursera platform. I presume this is because the Coursera platform is accessing a faster computing machine (GPU?) compared to the CPU I am using on my local machine. My question is: how do I get access to faster computation resources outside the Coursera platform? Is it possible to use a GPU-cloud type of resource? Do Jupyter notebooks or Google Colab allow this type of access to faster machines? Or, even locally on my machine, is there any way to reduce computation time without accessing online resources?
Have you seen this?
Jupyter notebooks are just a format for packaging code; the question is where you run them. As Balaji points out, Google Colab is one resource for running notebooks with GPU support. Even on the free tier you can access GPUs, but you don’t get guaranteed access: you may have to wait until there are spare cycles that the paying customers aren’t using. It’s worth trying for free to see how it works. There are other alternatives, like AWS, and probably others I don’t know about. The advantage of Colab is that if you already use Jupyter notebooks as your platform, it’s very low friction to just try it out and get a feel for how it performs.
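One practical note: in Colab you have to explicitly enable the GPU runtime (Runtime → Change runtime type → GPU), and it’s worth verifying that a GPU is actually visible before timing anything. Here is a minimal, stdlib-only sketch of such a check; the function name `gpu_available` is just illustrative. In a TensorFlow notebook you could instead call `tf.config.list_physical_devices('GPU')`.

```python
import shutil
import subprocess

def gpu_available():
    """Return True if an NVIDIA GPU is visible via nvidia-smi, else False.

    This only probes for the nvidia-smi tool, so it works without any
    deep-learning framework installed. It is a quick sanity check, not
    a guarantee that your framework will actually use the GPU.
    """
    # nvidia-smi is shipped with the NVIDIA driver; if it's absent,
    # there is almost certainly no usable NVIDIA GPU on this machine.
    if shutil.which("nvidia-smi") is None:
        return False
    try:
        # Exit code 0 means the driver found at least one GPU.
        result = subprocess.run(["nvidia-smi"], capture_output=True)
        return result.returncode == 0
    except OSError:
        return False

print("GPU detected:", gpu_available())
```

If this prints `False` inside Colab, the runtime type is probably still set to CPU.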
Thank you very much. I gave it a try with Google Colab and it indeed ran much faster. Thanks!