Hi,
I’ve read all the posts I could find but couldn’t find any help for this, so I am posting it here.
I am having an issue with the gradient_check function.
Can you help me? I have double-checked all the computations and I am quite sure that the difference is computed correctly. At this point I am having doubts about how to compute grad and J_plus/J_minus.
Thanks in advance!
N.
To compute grad, you have to call the backward_propagation function. And for J_plus/J_minus, you have to call the forward_propagation function.
Are you getting any errors? If so, please share that.
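For context, here is a minimal sketch of how those pieces fit together. This is not the graded solution; it assumes the toy cost J(theta) = theta * x that this exercise uses (consistent with the outputs later in this thread) and epsilon = 1e-7:

```python
import numpy as np

def forward_propagation(x, theta):
    # Toy cost for this exercise: J(theta) = theta * x
    return theta * x

def backward_propagation(x, theta):
    # Analytic gradient of J = theta * x with respect to theta
    return x

def gradient_check(x, theta, epsilon=1e-7):
    # Nudge theta a little in both directions
    theta_plus = theta + epsilon
    theta_minus = theta - epsilon

    # J_plus / J_minus come from forward_propagation
    J_plus = forward_propagation(x, theta_plus)
    J_minus = forward_propagation(x, theta_minus)

    # Two-sided numerical estimate of the gradient
    gradapprox = (J_plus - J_minus) / (2 * epsilon)

    # grad is just the analytic gradient from backward_propagation
    grad = backward_propagation(x, theta)

    # Relative difference between the analytic and numerical gradients
    numerator = np.linalg.norm(grad - gradapprox)
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
    difference = numerator / denominator

    return difference
```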
Hi Saif!
Yes, I did that, and indeed I am getting no errors! I am not sure how to deal with it.
The only message I get is
There is a mistake in the backward propagation! difference = 1.0
Maybe I can send you my chunk of code in private?
One common mistake students make in this exercise is copying the code for theta_plus and pasting it into theta_minus without changing the sign. If you are not making this mistake, you can send me your code in a private message: click my name and message.
PS: You are doing Exercise 3, gradient_check, not Exercise 4, gradient_check_n, right?
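For anyone else reading later, the sign slip looks like this (illustrative lines only, not the graded cell):

```python
theta_plus  = theta + epsilon   # perturb upward
theta_minus = theta - epsilon   # perturb downward: this "-" is the one that often
                                # stays a "+" after copy-pasting the line above
```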
Here is my output for Ex. 3:
theta_plus: 4.0000001
theta_minus: 3.9999999
J_plus: 12.0000003
J_minus: 11.9999997
gradapprox: 2.9999999995311555
grad: 3
numerator: 4.688445187639445e-10
denominator: 5.9999999995311555
difference: 7.814075313343006e-11
Your backward propagation works perfectly fine! difference = 7.814075313343006e-11
You may compare your results with these and see where you are making a mistake.
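For what it’s worth, the numbers above are consistent with x = 3, theta = 4 and epsilon = 1e-7. A self-contained way to reproduce them, assuming the toy cost J(theta) = theta * x:

```python
import numpy as np

x, theta, epsilon = 3, 4, 1e-7

J_plus = (theta + epsilon) * x           # 12.0000003
J_minus = (theta - epsilon) * x          # 11.9999997
gradapprox = (J_plus - J_minus) / (2 * epsilon)   # ~2.9999999995
grad = x                                 # analytic dJ/dtheta for J = theta * x

numerator = np.linalg.norm(grad - gradapprox)
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
print("difference =", numerator / denominator)    # ~7.8e-11, below the 1e-7 threshold
```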
Update:
The mistake was in computing grad. The correct way is to call backward_propagation only.
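In code terms, the fix described above boils down to this (illustration only):

```python
# grad is not a difference or a check by itself; that comes later, from grad and gradapprox.
# It is simply the analytic gradient returned by back-prop:
grad = backward_propagation(x, theta)
```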
Thanks! All solved. I just got confused and thought that grad was already checking the difference, whereas that actually happens later when difference is computed; grad is just the variable that the output of backward_propagation is assigned to!
Thank you!