Compilation takes forever

Whenever I re-run the code, it takes forever to complete. I spent 3 days on C2W2, and most of my time was wasted waiting for the code to finish running.
I tried to connect to a GPU backend but was unable to; I got a message indicating that I need to upgrade to Colab Pro.
Is there a way to speed up the process without dropping the accuracy?

Please click my name and message your notebook as an attachment.

Thank you for messaging your notebook. There's nothing in the notebook that should take 3 days to run.

Google Colab's free tier offers GPUs, but availability is limited and not guaranteed. If you want a higher chance of getting a GPU, consider upgrading to a paid tier.
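For reference, you can confirm from inside the notebook whether a GPU is actually attached, using TensorFlow's standard device-listing API:

```python
import tensorflow as tf

# An empty list means no GPU is attached to the runtime.
# In Colab, select Runtime > Change runtime type > GPU first.
gpus = tf.config.list_physical_devices('GPU')
print('GPU available:', len(gpus) > 0)
```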

It's also ok to write and run this code on your own desktop / laptop. Just be sure to use the same version of TensorFlow as the Colab environment.
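A quick way to compare versions is to run the same one-liner locally and in Colab (assuming the standard `tensorflow` package is installed in both places):

```python
import tensorflow as tf

# Print the installed TensorFlow version; run this locally and in
# Colab, and make sure the two outputs match.
print(tf.__version__)

# If they differ, pin the local install to the Colab version, e.g.:
#   pip install tensorflow==<version shown in Colab>
```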

Thanks for your reply. I apologize for the miscommunication.
The 3-day duration covers the whole process of changing the code and re-running it; a single run takes approximately 1.5 to 2 hours.
The problem I am having is that I can't reach the 80% accuracy. Whenever I make changes and re-run the code, it takes very long, and it is becoming frustrating to complete the assignment. Do you have any hints, for example which parameters to look at?
Thanks in advance

Sure. Here are some hints:

  1. Use a machine with a GPU for quicker results.
  2. As far as the architecture is concerned, start with a smaller NN and add layers / nodes when you don't achieve the desired accuracy. See the week's ungraded labs for examples.
  3. Use numbers of conv filters that are powers of 2, and increase them as you go deeper into the network.
  4. For the optimizer, I recommend Adam unless you are willing to spend time tuning the learning rate.
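As a rough illustration of hints 2–4, a small starting architecture might look like the sketch below. This is not the assignment's solution; the input shape, layer sizes, and binary-classification output are placeholders you would adapt to your data:

```python
import tensorflow as tf

# Small starting CNN: conv filter counts are powers of 2
# (16 -> 32 -> 64), growing as the network gets deeper.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(150, 150, 3)),   # placeholder input shape
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),  # binary classification assumed
])

# Adam tends to work well without much learning-rate tuning.
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
```

If accuracy plateaus, add another Conv2D / MaxPooling2D pair or widen the Dense layer, rather than starting with a large network.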

Could you please clear up one doubt for me: there are many optimizers, and I am not sure when to select Adam, RMSprop, Adadelta, Adagrad, or SGD. Could you please give me some suggestions that will help me during the TensorFlow certification exam?

The choice of optimizer and its learning rate are hyperparameters of a NN. Please see this for learning how to tune a NN.
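In Keras, swapping optimizers is a one-line change, which makes it easy to treat the choice as a hyperparameter and compare a few short runs. A sketch (the learning rates below are only common starting values, not recommendations):

```python
import tensorflow as tf

# Each of these is a drop-in replacement in model.compile(...);
# learning_rate is the main knob to tune for each one.
optimizers = {
    'adam':     tf.keras.optimizers.Adam(learning_rate=1e-3),
    'rmsprop':  tf.keras.optimizers.RMSprop(learning_rate=1e-3),
    'sgd':      tf.keras.optimizers.SGD(learning_rate=1e-2, momentum=0.9),
    'adagrad':  tf.keras.optimizers.Adagrad(learning_rate=1e-2),
    'adadelta': tf.keras.optimizers.Adadelta(learning_rate=1.0),
}

# e.g. model.compile(optimizer=optimizers['adam'],
#                    loss='binary_crossentropy', metrics=['accuracy'])
```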