I have just completed the Logistic Regression with a Neural Network assignment. In part 6 (Further analysis), there is a code block that tries 3 different learning rates. It says this should take around 1 minute; however, it ran for more than 5 minutes before finishing.
Is the computation run on the Coursera servers, or is it done locally? If it is local, is there any way to use hardware acceleration to speed up these computations?
In addition to Tom’s point about things that could slow down your code, my observation is that the execution speed of the notebooks varies quite a bit in general. My guess is that this is a function of how loaded the servers are at a given point in time, although I don’t think we have any external way to monitor that other than the observed speed. Maybe you could create a diagnostic block that takes at least a few tens of seconds of CPU time to run, instrumented to measure both elapsed time (using time.time()) and process CPU time (using time.process_time()), and see whether you can use that to diagnose the “weather” on the servers.
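Something along these lines could serve as that diagnostic cell. This is just a sketch: the matrix size and iteration count are placeholder values you would tune so the loop burns a few tens of seconds of CPU on your server, and a wall-clock/CPU ratio well above 1 would suggest the process is being starved.

```python
import time
import numpy as np

def server_weather_check(n_iters=200, size=400):
    """Rough diagnostic: compare wall-clock time to process CPU time
    for a CPU-bound workload. A large gap between the two suggests the
    notebook process is competing for CPU (a heavily loaded server)."""
    start_wall = time.time()          # elapsed (wall-clock) time
    start_cpu = time.process_time()   # CPU time used by this process

    # Burn CPU with repeated matrix multiplies (sizes are illustrative)
    for _ in range(n_iters):
        A = np.random.rand(size, size)
        B = np.random.rand(size, size)
        _ = A @ B

    wall = time.time() - start_wall
    cpu = time.process_time() - start_cpu
    print(f"elapsed (wall) time: {wall:.2f} s")
    print(f"process CPU time:    {cpu:.2f} s")
    print(f"wall / CPU ratio:    {wall / cpu:.2f}")

server_weather_check()
```

Running it at different times of day and comparing the ratios would give you at least a crude picture of how busy the servers are when your training cell seems slow.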