Question about the Gradient Checking assignment (Course 2, Week 1)

I have a question about a small piece of code in the Gradient Checking assignment for Course 2, Week 1.

Inside the function gradient_check_n(parameters, gradients, X, Y, epsilon=1e-7, print_msg=False):

we are calling np.copy(parameters_values) in every iteration of the for loop, once for theta_plus and once for theta_minus. Is this an optimal call?

Hi, @salman0149.

I think np.copy was used for simplicity, not because it’s the optimal way of restoring the vector of parameters every time you nudge one to compute its gradient, if that’s what you meant :slight_smile:
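To make the trade-off concrete, here is a minimal sketch of an alternative that avoids the per-iteration copies: nudge one entry of the parameter vector in place and restore it afterwards. This is not the course's gradient_check_n; approx_gradients and the cost function J are hypothetical stand-ins, assuming a 1-D parameter vector and a scalar-valued cost.

```python
import numpy as np

def approx_gradients(J, theta, epsilon=1e-7):
    """Approximate dJ/dtheta numerically without copying theta each step.

    Instead of np.copy(theta) for every theta_plus and theta_minus,
    we perturb one entry in place and restore it afterwards.
    J is any function mapping a parameter vector to a scalar cost.
    """
    grad_approx = np.zeros_like(theta)
    for i in range(theta.shape[0]):
        original = theta[i]            # remember the entry we nudge
        theta[i] = original + epsilon  # theta_plus
        J_plus = J(theta)
        theta[i] = original - epsilon  # theta_minus
        J_minus = J(theta)
        theta[i] = original            # restore in place, no copy needed
        grad_approx[i] = (J_plus - J_minus) / (2 * epsilon)
    return grad_approx

# Example: J(theta) = sum(theta**2), so the true gradient is 2*theta.
theta = np.array([1.0, -2.0, 3.0])
print(approx_gradients(lambda t: np.sum(t ** 2), theta))
```

The in-place version saves two O(n) copies per parameter, but the copy-based version in the notebook is arguably easier to follow for learners, since theta_plus and theta_minus exist as separate, inspectable vectors.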