Course 2, Week 1, Assignment 3, Exercise 4

Hello,
In this exercise I have written this code (following the explanation):
# Copy the parameter vector, nudge parameter i up by epsilon,
# and recompute the cost with the perturbed parameters
theta_plus = np.copy(parameters_values)
theta_plus[i] = theta_plus[i] + epsilon
J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(theta_plus))

but I get a "Wrong value" error. I want to know whether my code is correct; if it is, I have to look for the problem somewhere else.
Thanks

That looks correct as far as it goes, but there is a lot more to that function. What results do you get? Please show us the output; that might give some clues.

This is my output:

There is a mistake in the backward propagation! difference = 1.0

AssertionError                            Traceback (most recent call last)
in
      6 expected_values = [0.2850931567761623, 1.1890913024229996e-07]
      7 assert not(type(difference) == np.ndarray), "You are not using np.linalg.norm for numerator or denominator"
----> 8 assert np.any(np.isclose(difference, expected_values)), "Wrong value. It is not one of the expected values"

AssertionError: Wrong value. It is not one of the expected values

From the formula used to compute "difference", a value of 1.0 means that "gradapprox" is most likely all zeros, since "grad" is not changed in this routine and should already have non-zero values.
So, the first thing to do is to check whether "gradapprox" is being updated properly.
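
To see why, here is a tiny illustration (the grad values are made up; only the formula matches the one in the notebook): if gradapprox is never filled in and stays at zero, the difference formula collapses to ||grad|| / ||grad|| = 1.

import numpy as np

grad = np.array([0.3, -0.1, 0.7])   # hypothetical non-zero backprop gradient
gradapprox = np.zeros_like(grad)    # what happens if gradapprox is never updated

# difference = ||grad - gradapprox|| / (||grad|| + ||gradapprox||)
numerator = np.linalg.norm(grad - gradapprox)
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
print(numerator / denominator)      # prints 1.0, exactly the symptom reported above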

Yes, that clearly means your code is incorrect. Now you just need to figure out the problem using the good suggestions from Nobu. Note that the "difference" logic is the same in the 1D case earlier in the notebook, but the test cases there are pretty weak: they don't even show you the "expected value". What did you get for that answer? Correct code should give this:

Your backward propagation works perfectly fine! difference = 2.919335883291695e-10

If you get something different, then please also check that code.

My answer is 1. Should I change the earlier code, or is the problem in the last piece of code?

If you get 1 as the difference in either the earlier 1D case or the 2D case, that means the relevant logic is incorrect and needs to be debugged. But in both cases the error will be in the gradient_check or gradient_check_n function logic, not in anything earlier.
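
For orientation, the structure you are debugging is the standard two-sided gradient check: loop over every parameter, build gradapprox from two perturbed cost evaluations, then compare it to grad with the norm formula. Here is a self-contained sketch of that structure against a made-up quadratic cost; the notebook version instead works on the flattened parameter vector and calls forward_propagation_n / vector_to_dictionary, so treat this only as a picture of the shape, not the graded solution.

import numpy as np

def gradient_check_sketch(cost_fn, theta, grad, epsilon=1e-7):
    # theta: flat parameter vector; grad: analytic gradient to verify
    num_parameters = theta.shape[0]
    gradapprox = np.zeros(num_parameters)

    for i in range(num_parameters):
        # cost with parameter i nudged up and down by epsilon
        theta_plus = np.copy(theta)
        theta_plus[i] += epsilon
        theta_minus = np.copy(theta)
        theta_minus[i] -= epsilon
        # two-sided finite-difference approximation of grad[i]
        gradapprox[i] = (cost_fn(theta_plus) - cost_fn(theta_minus)) / (2 * epsilon)

    # relative difference between the analytic gradient and the approximation
    numerator = np.linalg.norm(grad - gradapprox)
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
    return numerator / denominator

# toy check: J(theta) = sum(theta**2), so dJ/dtheta = 2*theta
theta = np.array([0.5, -1.0, 2.0])
correct_grad = 2 * theta
print(gradient_check_sketch(lambda t: np.sum(t**2), theta, correct_grad))
# prints a very small number (on the order of 1e-9 or less), as expected for a correct gradient

If any of the perturbation, the 2 * epsilon division, or the final norm ratio is wrong or skipped in your gradient_check / gradient_check_n, you get exactly the kind of bad "difference" you are seeing.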