OlaOyo
September 21, 2023, 4:26pm
Hi Deep Learning community,
In week 1, programming assignment 3 (Gradient_Checking), while implementing the function
def gradient_check_n(parameters, gradients, X, Y, epsilon=1e-7, print_msg=False)
I get the following error
Any suggestions will be much appreciated. @balaji.ambresh
As shown in the equation from the notebook, you should consider all components when calculating the difference and not just the dimension you changed:
$$\text{difference} = \frac{\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2}$$
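For what it's worth, here is a minimal NumPy sketch of that formula, assuming the L2 norms are computed with np.linalg.norm; the vectors are made-up toy values, whereas in the assignment grad and gradapprox are the flattened gradient vectors. Note that the numerator is a single norm of the difference, while the denominator is a sum of two separate norms:

```python
import numpy as np

# Hypothetical toy vectors standing in for the flattened gradients;
# in the assignment, grad and gradapprox have the same shape.
grad = np.array([0.5, -1.2, 3.0])
gradapprox = np.array([0.5000001, -1.2000001, 3.0000002])

numerator = np.linalg.norm(grad - gradapprox)                    # ||grad - gradapprox||_2
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)  # ||grad||_2 + ||gradapprox||_2
difference = numerator / denominator
print(difference)  # well below 1e-7 here, suggesting the two gradients agree
```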
I’ve just encountered this same issue but I’m still confused by it. Could you please explain a little bit more?
OlaOyo
October 4, 2023, 12:59pm
OlaOyo:
def gradient_check_n
@Panas_Rattanasuwan while the numerator follows np…(value_a, value_b), the denominator has them separately, as in np…(value_a) + np…(value_b).
Please confirm which part is unclear to you.
I’d like to know what you mean by “all components”. Are grad and gradapprox themselves the whole components?
Here’s my output error. It seems like others are posting this exact error too; I’ll check it out.
Thanks for your reply. I’ve already found the mistake: it was just a pair of missing parentheses. @OlaOyo
Hi,
I am facing the same issue and have tried all the solutions that have been provided, but I am still unable to get the correct output. Can someone please help?
Please click my name and message your notebook as an attachment.
Please fix the mistake in the theta_minus[i] line in function gradient_check_n to pass the tests. You’ve implemented theta_plus[i] correctly.
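For anyone else debugging this step, here is a minimal, self-contained sketch of the theta_plus / theta_minus perturbation pattern; approx_gradient, cost_fn, and theta are hypothetical stand-ins, not the notebook's own functions, which wire this pattern into their forward-propagation and parameter-conversion helpers:

```python
import numpy as np

def approx_gradient(cost_fn, theta, epsilon=1e-7):
    # Two-sided finite-difference approximation of the gradient of cost_fn at theta.
    gradapprox = np.zeros_like(theta)
    for i in range(theta.shape[0]):
        theta_plus = np.copy(theta)
        theta_plus[i] += epsilon    # nudge component i up by epsilon
        theta_minus = np.copy(theta)
        theta_minus[i] -= epsilon   # nudge component i DOWN; the sign here is a common bug
        gradapprox[i] = (cost_fn(theta_plus) - cost_fn(theta_minus)) / (2 * epsilon)
    return gradapprox

# Sanity check on a cost with a known gradient: J(theta) = sum(theta**2), dJ/dtheta = 2*theta
theta = np.array([1.0, -2.0, 0.5])
print(approx_gradient(lambda t: np.sum(t ** 2), theta))  # approximately [2.0, -4.0, 1.0]
```

The two-sided form (plus and minus epsilon) is used because its approximation error shrinks on the order of epsilon squared, rather than epsilon for a one-sided difference.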
Hi Balaji,
Thanks for pointing out my mistake; I am now able to execute the code block correctly.