I passed all the tests successfully and even checked that every number matches the expected output.
But for some reason I get nan values when I test the model.
I suspected the learning rate was too high (I don't see it specified anywhere, by the way), but the NaNs appear even if I pass a tiny learning_rate.
Any idea?
Cost after iteration 0: 0.692739
Cost after iteration 1000: nan
Cost after iteration 2000: nan
Cost after iteration 3000: nan
Cost after iteration 4000: nan
Cost after iteration 5000: nan
Cost after iteration 6000: nan
Cost after iteration 7000: nan
Cost after iteration 8000: nan
Cost after iteration 9000: nan
W1 = [[nan nan]
[nan nan]
[nan nan]
[nan nan]]
b1 = [[nan]
[nan]
[nan]
[nan]]
W2 = [[nan nan nan nan]]
b2 = [[nan]]
All tests passed.
Try debugging your output with a smaller number of iterations (even just 1), to catch when the NaNs start. If you are lucky, you'll get it right from the get-go, after the first iteration; that would be the easy case. My guess is a division by zero. You only need one to pollute everything with NaNs (the error at the last layer could be corrupted and then back-propagated to ruin everything).
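To make that concrete, here is a minimal sketch, assuming the assignment's usual nn_model(X, Y, n_h, num_iterations, print_cost) signature, that it returns the parameters dictionary, and that the data X, Y are already loaded:

import numpy as np

# Run a single training iteration so the first NaN has nowhere to hide.
parameters = nn_model(X, Y, n_h=4, num_iterations=1, print_cost=True)

# Check every learned array for NaNs right after that first iteration.
for name, value in parameters.items():
    print(name, "contains NaN:", np.isnan(value).any())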
Congrats on solving most of the problem! The answer from @yanivh gets you in the right direction, here’s a little bit more along those lines.
I would also print out the cost on every iteration.
if print_cost:
    print("Cost after iteration %i: %f" % (i, cost))
# # Print the cost every 1000 iterations
# if print_cost and i % 1000 == 0:
#     print("Cost after iteration %i: %f" % (i, cost))
Run the example with a small number of iterations (say 5).
You can then print the inputs to the compute_cost function by adding a couple of print statements just before the call to compute_cost.
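A minimal sketch of what that could look like, assuming the call inside nn_model is cost = compute_cost(A2, Y) and numpy is already imported as np:

# Inspect the inputs right before the cost is computed.
print("A2 =", A2)
print("Y  =", Y)
print("any NaN in A2:", np.isnan(A2).any())
cost = compute_cost(A2, Y)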
Now look at the inputs to your compute_cost function right before it returns nan. Is there anything wrong with them? If you pass in a nan, you will get nan back.
Try calling compute_cost with those values in a separate cell: do you get nan? Walk through the cost function calculation and see whether you have a division by zero or some other problem.
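To see the kind of thing that can go wrong, here is a standalone sketch with made-up values (not your actual outputs): the cross-entropy cost becomes nan as soon as the activations contain an exact 0 or 1, because log(0) is -inf and 0 * -inf is nan.

import numpy as np

def cross_entropy(A2, Y):
    # Cross-entropy cost for binary labels, roughly what compute_cost computes.
    m = Y.shape[1]
    logprobs = Y * np.log(A2) + (1 - Y) * np.log(1 - A2)
    return -np.sum(logprobs) / m

Y = np.array([[1, 0, 1]])
ok = np.array([[0.9, 0.2, 0.8]])    # well-behaved activations: finite cost
bad = np.array([[1.0, 0.0, 0.8]])   # saturated activations: nan cost

print(cross_entropy(ok, Y))
print(cross_entropy(bad, Y))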
Hi mate, your model is not updating any parameters, so it prints the same cost for every iteration. Hence, check the parameter-update part. If that part is correct, check whether you actually call it inside nn_model. Normally, we expect a large cost from the first forward pass, and then decreasing cost values after each iteration, because backprop updates the parameters so that the cost converges.
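For reference, the update step should look roughly like this; a sketch of plain gradient descent, assuming the grads dictionary uses the usual dW1/db1/dW2/db2 keys:

def update_parameters(parameters, grads, learning_rate=1.2):
    # One step of gradient descent on each parameter: theta = theta - lr * d_theta
    parameters["W1"] = parameters["W1"] - learning_rate * grads["dW1"]
    parameters["b1"] = parameters["b1"] - learning_rate * grads["db1"]
    parameters["W2"] = parameters["W2"] - learning_rate * grads["dW2"]
    parameters["b2"] = parameters["b2"] - learning_rate * grads["db2"]
    return parameters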
Thanks for the reply man.
I am not able to locate any error whatsoever.
I have backpropagated myself to the beginning of my code to update it. I guess my brain needs an upgrade itself. Maybe I will just write the whole thing again.
I would investigate the loop in the nn_model further.
If the cost function is not updating, it means the parameters are not being updated.
Start by checking the cost function calculation: does changing parameter values change the cost?
To examine the nn_model loop a little more closely:
I would limit the number of iterations to 5 (or something small) and print out the gradients. If the grads terms are zero, the parameter values won’t update. If that’s the case, examine your backpropagation: why is it returning zeros for the grads?
If the grads are non-zero but your parameters are not updating, then your gradient descent step is not working as expected: non-zero grads should lead to parameters updating which means the cost should update.
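A sketch of that kind of check inside the training loop, assuming the loop calls backward_propagation and update_parameters as in the assignment and numpy is imported as np:

# Inside nn_model's loop, with num_iterations set to something small (say 5):
grads = backward_propagation(parameters, cache, X, Y)
print("iteration", i, "dW1 =", grads["dW1"])   # all zeros -> suspect backprop

old_W1 = parameters["W1"].copy()
parameters = update_parameters(parameters, grads)
print("W1 changed:", not np.allclose(old_W1, parameters["W1"]))   # False -> suspect the update step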
Hey man @petrifast
I investigated the loop in the nn_model further and the problem was…
I misspelled parameters as paramters, and that is why the parameters weren't getting updated. Very dumb of me.
The probable cause is sleep deprivation.
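For anyone else who hits this: that kind of typo fails silently, because Python just creates a new variable instead of raising an error. A hypothetical sketch of the failure mode (not the exact notebook code):

# Hypothetical: the updated values land in a new name "paramters",
# while the next iteration keeps reading the stale "parameters" dict.
paramters = update_parameters(parameters, grads)   # typo -> never used again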
Thank you for replying.
Kshitij Sharma.