Hi! I'm currently stuck on the gradient checking exercise.

I have already checked that:

- theta_plus and theta_minus are separate copies of parameter_values, with + epsilon and - epsilon applied respectively
- J_plus[i], _ is the output of forward_propagation_n on X, Y, and theta_plus (and theta_minus for J_minus[i])
- gradapprox[i] is computed inside the for loop from J_plus[i] and J_minus[i]
- np.linalg.norm is used for the difference.
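For reference, here is a minimal sketch of the centered-difference check I'm trying to implement (names like `J` and the flat parameter vector are illustrative, not the exact notebook API; the actual exercise uses forward_propagation_n and a dictionary-to-vector conversion):

```python
import numpy as np

def gradient_check_sketch(grad, parameter_values, J, epsilon=1e-7):
    """Compare an analytic gradient to a centered-difference approximation.

    grad             -- analytic gradient, shape (n, 1)
    parameter_values -- flat parameter vector, shape (n, 1)
    J                -- cost function taking a parameter vector, returns a scalar
    """
    num_parameters = parameter_values.shape[0]
    gradapprox = np.zeros((num_parameters, 1))

    for i in range(num_parameters):
        # Perturb only component i, in fresh copies each iteration
        theta_plus = np.copy(parameter_values)
        theta_plus[i] += epsilon
        theta_minus = np.copy(parameter_values)
        theta_minus[i] -= epsilon
        # Centered difference: (J(theta+) - J(theta-)) / (2 * epsilon)
        gradapprox[i] = (J(theta_plus) - J(theta_minus)) / (2 * epsilon)

    # Relative difference: ||grad - gradapprox|| / (||grad|| + ||gradapprox||)
    numerator = np.linalg.norm(grad - gradapprox)
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
    return numerator / denominator
```

A quick sanity check with J(theta) = sum(theta**2), whose gradient is 2*theta, should return a difference far below 1e-7; if the loop or the final formula is off, the result is typically orders of magnitude larger.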

Do you have any insight? Thanks!