Week 01 - Gradient Checking - Exercise 04: gradient_check_n

Hello,
I am referring to the function: def gradient_check_n(parameters, gradients, X, Y, epsilon=1e-7, print_msg=False):

My code is correct, but when I try to make a modification (which I believe should do the same thing, only shorter), it gives an error message.

The correct version:

for i in range(num_parameters):
    theta_plus = np.copy(parameters_values)    # fresh copy each iteration
    theta_plus[i] = ***                        # nudge element i up by epsilon
    J_plus[i], _ = ***                         # cost evaluated at theta_plus

    theta_minus = np.copy(parameters_values)   # fresh copy each iteration
    theta_minus[i] = ***                       # nudge element i down by epsilon
    J_minus[i], _ = ***                        # cost evaluated at theta_minus

As I understand it, theta_plus and theta_minus are fixed and do not change inside the loop. So I replaced both theta_plus and theta_minus with a single theta_general (since they copy the same thing) and moved the np.copy out of the loop. However, this modified version returns an error.
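Concretely, my modified version looks roughly like this (I am writing the intended nudges explicitly here just for illustration; the cost lines stay masked as above):

    theta_general = np.copy(parameters_values)         # copied once, outside the loop
    for i in range(num_parameters):
        theta_general[i] = theta_general[i] + epsilon  # intended: original value + epsilon
        J_plus[i], _ = ***

        theta_general[i] = theta_general[i] - epsilon  # intended: original value - epsilon
        J_minus[i], _ = ***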

Can someone explain what is wrong with this, please?
Thank you

I think you’re missing a fundamental point here: you are modifying one element of each of those variables (arrays) each time through the loop, right? So on the next iteration, you’d better restore the initial value of the element you modified in the previous iteration. That’s why they do the np.copy each time: to start from the same initial state.
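To make this concrete, here is a minimal, self-contained sketch with a toy cost function (just J(theta) = sum of theta_i squared, not the course code) comparing the two patterns:

    import numpy as np

    def cost(theta):
        # toy cost J(theta) = sum(theta_i^2), so the analytic gradient is 2 * theta
        return np.sum(theta ** 2)

    theta = np.array([1.0, -2.0, 3.0])
    epsilon = 1e-7
    n = theta.shape[0]

    # Correct pattern: fresh copies inside the loop, so every iteration
    # starts from the same initial state.
    grad_correct = np.zeros(n)
    for i in range(n):
        theta_plus = np.copy(theta)
        theta_plus[i] += epsilon
        theta_minus = np.copy(theta)
        theta_minus[i] -= epsilon
        grad_correct[i] = (cost(theta_plus) - cost(theta_minus)) / (2 * epsilon)

    # Modified pattern: one shared array, copied once outside the loop.
    # The second nudge only undoes the first, so J_minus is evaluated at the
    # ORIGINAL theta[i], not at theta[i] - epsilon.
    theta_general = np.copy(theta)
    grad_broken = np.zeros(n)
    for i in range(n):
        theta_general[i] = theta_general[i] + epsilon
        j_plus = cost(theta_general)
        theta_general[i] = theta_general[i] - epsilon
        j_minus = cost(theta_general)
        grad_broken[i] = (j_plus - j_minus) / (2 * epsilon)

    print(grad_correct)   # ~ [ 2., -4.,  6.]  matches the analytic gradient 2 * theta
    print(grad_broken)    # ~ [ 1., -2.,  3.]  off by a factor of 2

In the single-array variant, the second assignment merely cancels the first, so J_minus is computed at the unmodified point and the approximation comes out wrong (here, exactly half the true gradient). The per-iteration np.copy sidesteps all of that bookkeeping by guaranteeing the same starting state every time.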


Thank you so much, I got the point.
Phan