Hello everyone!
I am facing a problem while computing run_gradient_descent with alpha=1.0e-1: it returns ‘nan’ on every iteration. Can anyone help me understand why this is happening?
This generally happens when the learning rate is too high, causing the updates to “explode” and the values to become nan. Try lowering alpha and also double-check your gradient computation.
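To illustrate the "exploding updates" behavior, here is a minimal sketch with my own toy data (not the course code or the actual lab data): plain gradient descent for linear regression, where features with large values make alpha=1.0e-1 diverge into ‘nan’, while a tiny alpha like 1e-6 stays finite.

```python
import numpy as np

def run_gradient_descent(X, y, alpha, num_iters):
    """Minimal batch gradient descent for linear regression (toy sketch)."""
    m, n = X.shape
    w = np.zeros(n)
    b = 0.0
    for _ in range(num_iters):
        err = X @ w + b - y            # prediction error
        w -= alpha * (X.T @ err) / m   # weight update
        b -= alpha * err.mean()        # bias update
    return w, b

# Hypothetical data with large feature values (scale 100-400).
X = np.array([[1.0], [2.0], [3.0], [4.0]]) * 100.0
y = np.array([100.0, 200.0, 300.0, 400.0])

# Suppress the overflow warnings NumPy prints while the values blow up.
with np.errstate(over="ignore", invalid="ignore"):
    w_big, _ = run_gradient_descent(X, y, alpha=1.0e-1, num_iters=200)
w_small, _ = run_gradient_descent(X, y, alpha=1.0e-6, num_iters=200)

print(np.isnan(w_big).any())    # True: the updates overflowed into nan
print(np.isnan(w_small).any())  # False: small alpha keeps the updates finite
```

With alpha=0.1 each step roughly multiplies the weight error by alpha times the mean squared feature value, so the parameters grow without bound, overflow to inf, and the next inf - inf update produces nan. That is also why the same alpha can work in the course lab but not elsewhere: the stable range of alpha depends on the scale of the features.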
Also, try debugging by printing some intermediate values during each iteration (e.g., cost and parameter updates) to see where things blow up. If you’ve tried this and still can’t find the issue, feel free to send me your code (private message) and I’ll help you take a look.
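As a concrete debugging pattern, here is a sketch (again my own toy example, not the lab's code) that records the cost each iteration and stops as soon as it becomes non-finite, so you can see exactly when things blow up:

```python
import numpy as np

def gradient_descent_with_logging(X, y, alpha, num_iters):
    """Gradient descent that tracks the cost and bails out on nan/inf."""
    m, n = X.shape
    w, b = np.zeros(n), 0.0
    history = []
    for i in range(num_iters):
        err = X @ w + b - y
        cost = (err @ err) / (2 * m)   # mean-squared-error cost
        history.append(cost)
        if not np.isfinite(cost):      # stop as soon as the cost blows up
            print(f"cost became non-finite at iteration {i}")
            break
        w -= alpha * (X.T @ err) / m
        b -= alpha * err.mean()
    return w, b, history

# Small, well-scaled toy data: here alpha=0.1 is stable.
X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])
w, b, hist = gradient_descent_with_logging(X, y, alpha=1.0e-1, num_iters=100)
print(f"first cost {hist[0]:.3f}, last cost {hist[-1]:.6f}")
```

If the cost grows from one iteration to the next instead of shrinking, that is the classic sign that alpha is too large for your data.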
Hope this helps! Feel free to ask if you need further assistance.
Thank you for your suggestion. I tried lowering alpha (e.g. 1e-6), and then I no longer get ‘nan’. But the result is not the best prediction. For the best prediction we need a larger value of alpha, and then I face the same problem again.
The lab page in the course works properly even for higher values of alpha. I have not modified any of the code, but when I run that same code in my own Jupyter notebook, I get ‘nan’. Can you please tell me how to correct it?