np.linalg.norm does not compute L2

Hello everyone!
I'm working on RS2w3/gradient-checking/lab.
My np.linalg.norm(grad-gradapprox) seems to calculate only the L1 norm, and I get:
There is a mistake in the backward propagation! difference = 1.0
When I write np.linalg.norm(grad-gradapprox,2) instead, I get:
ValueError Traceback (most recent call last)
in <module>
1 x, theta = 3, 4
----> 2 difference = gradient_check(x, theta, print_msg=True)

in gradient_check(x, theta, epsilon, print_msg)
43 # difference = # Step 3'
44 # YOUR CODE STARTS HERE
----> 45 numerator = np.linalg.norm(grad-gradapprox,2)
46 denominator = np.linalg.norm(grad,2)-np.linalg.norm(gradapprox,2)
47 difference = numerator /denominator

<__array_function__ internals> in norm(*args, **kwargs)

/opt/conda/lib/python3.7/site-packages/numpy/linalg/linalg.py in norm(x, ord, axis, keepdims)
2555 return ret
2556 else:
-> 2557 raise ValueError("Improper number of dimensions to norm.")
2558
2559

ValueError: Improper number of dimensions to norm.

Help me understand how to calculate the L2 norm with np.linalg.norm.

Did you try reading the documentation page for numpy.linalg.norm?

By the way, the reason for your previous error is not that the norm computed was the L1 norm. It’s that there is some other error in your logic. Please compare your code carefully to the formulas shown in the instructions.

Among other things, you will learn from reading the documentation that it defaults to computing the L2 norm.
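For example, a quick experiment in a scratch cell (the values here are made up purely for illustration) shows both the default behavior and the exact error from your traceback:

```python
import numpy as np

v = np.array([3.0, 4.0])
print(np.linalg.norm(v))       # 5.0 -- no ord given, defaults to the L2 norm for a vector
print(np.linalg.norm(v, 2))    # 5.0 -- same result with ord=2 spelled out
print(np.linalg.norm(v, 1))    # 7.0 -- the L1 norm, for comparison

# In this exercise grad and gradapprox are plain floats, i.e. 0-dimensional
# inputs. The default ord still works on them, but an explicit ord=2 does not:
print(np.linalg.norm(3.0))     # 3.0
try:
    np.linalg.norm(3.0, 2)
except ValueError as e:
    print(e)                   # Improper number of dimensions to norm.
```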


At least one error is visible in the exception trace that you show: the way you compute the denominator clearly differs from the math formulas you are trying to implement. We can’t see how you computed gradapprox, so you should examine that as well.


Good day. I figured out why difference = 1.0. Since grad and gradapprox are scalar float inputs, in the formula

difference = ||grad - gradapprox||_2 / (||grad||_2 + ||gradapprox||_2)

only the values of grad and gradapprox matter, so the numerator equals the denominator and difference = numerator / denominator = 1.
Knowing this doesn't help me fix the code. Tell me which direction to go next.
Thank you for your time.

Did you actually compare your code for the denominator to what the math formula says? What you wrote is not the same as the formula. The operation there is addition, but you have subtracted. Not the same thing, right? The numerator uses subtraction, but the denominator uses addition. Programming is a game of details. Your mistake was only a single character wrong, but that character was kind of a critical one, right?
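To see concretely what that one character does, here is a tiny numeric illustration (the numbers are made up, not taken from the notebook):

```python
import numpy as np

grad = 3.0                  # analytic gradient (toy value)
gradapprox = 2.9999999999   # numerical approximation, very close to grad

numerator = np.linalg.norm(grad - gradapprox)

# Correct denominator: add the two norms.
good = numerator / (np.linalg.norm(grad) + np.linalg.norm(gradapprox))
print(good)                 # ~1.7e-11, a tiny value, as expected

# Buggy denominator: subtract them instead.
bad = numerator / (np.linalg.norm(grad) - np.linalg.norm(gradapprox))
print(bad)                  # 1.0 -- the mysterious "difference = 1.0"
```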


Good day. Yes, I got so carried away with L2 that I made a mistake in the formula :) Thank you very much for your attention.

I continue to get “There is a mistake in the backward propagation! difference = 1.0”
I have been through the threads and it seems an error in the denominator is common. I’ve looked through my code many times and I’m not seeing any errors in the calculations, even with the difference calculation. It seems the defaults are fine for np.linalg.norm based on the docs.

Are there other instances where you’ve noticed the difference coming out to 1?

Yes, if you use a - sign instead of + in the denominator.


Never mind. I found the bug by printing the grad and gradapprox arrays. My gradapprox array was all zeros because I was adding epsilon to theta_minus[i] instead of subtracting it to get the new theta_minus[i]. That made every J_plus[i] and J_minus[i] identical, so their difference (and therefore gradapprox) came out to zero.

Thanks for the reply! It turns out it was this type of error, just not in the denominator.
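In case it helps anyone else who lands here, below is a minimal sketch of that perturbation loop with a toy cost function standing in for the lab's real forward propagation (the values and the J below are made up; only the epsilon bookkeeping is the point):

```python
import numpy as np

def J(theta, x):
    # Toy cost function, just for illustration.
    return np.sum(theta * x)

x = np.array([1.0, 2.0, 3.0])
theta = np.array([4.0, 5.0, 6.0])
epsilon = 1e-7

gradapprox = np.zeros_like(theta)
for i in range(theta.shape[0]):
    theta_plus = np.copy(theta)
    theta_minus = np.copy(theta)
    theta_plus[i] += epsilon     # nudge parameter i up
    theta_minus[i] -= epsilon    # nudge parameter i DOWN; adding epsilon here
                                 # by mistake makes J_plus[i] == J_minus[i],
                                 # so gradapprox ends up all zeros
    gradapprox[i] = (J(theta_plus, x) - J(theta_minus, x)) / (2 * epsilon)

print(gradapprox)                # ~[1. 2. 3.], i.e. dJ/dtheta = x
```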

Glad you were able to find the mistake under your own power, even if my hint was not relevant in your case. Onward! :nerd_face:
