I overlooked the two assignments of the first week and tried to finish them after the deadline. I am fairly sure the code is correct, since its test outputs are as expected. However, it did not pass, and no information is given about which code cell is failing. Is it possible to get a more detailed code check result?
I have the exact same issue. I have also checked Paul's suggestion, but it is not working, as I have already used np.linalg.norm in my gradient_check_n function. Please help me out.
I have the same problem.
There is a mistake in the backward propagation! difference = 0.1580457629638873
On step #4, I performed the theta_plus/theta_minus and difference computations as defined, but the result is still very odd.
I'm posting a short summary of the issue, @Stuti, in case it helps others with the same problem.
There is no need to call backward_propagation from gradient_check_n, since you are already given the gradients. However, I'm not sure why the hidden test was failing, since the calculation was technically correct.
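For anyone unsure what "using the given gradients" means in practice, here is a generic sketch of an n-dimensional gradient check. It is not the notebook's code (the function and argument names below are made up): the analytical gradient vector is passed in, so only the cost function is re-evaluated inside the loop.

```python
import numpy as np

def gradient_check_n_sketch(cost_fn, theta, grad, epsilon=1e-7):
    # Illustrative n-dimensional gradient check (not the assignment solution).
    # cost_fn maps a parameter vector to a scalar cost, theta is the flattened
    # parameter vector, and grad is the analytical gradient you were already
    # given -- no call to backward propagation here.
    num_parameters = theta.shape[0]
    gradapprox = np.zeros(num_parameters)

    for i in range(num_parameters):
        # Nudge one component at a time in each direction.
        theta_plus = np.copy(theta)
        theta_plus[i] += epsilon
        theta_minus = np.copy(theta)
        theta_minus[i] -= epsilon

        # Centered-difference approximation of dJ/dtheta_i.
        gradapprox[i] = (cost_fn(theta_plus) - cost_fn(theta_minus)) / (2 * epsilon)

    # Relative difference between the analytical and numerical gradients.
    numerator = np.linalg.norm(grad - gradapprox)
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
    return numerator / denominator


# Tiny usage example with a quadratic cost J(theta) = sum(theta**2),
# whose analytical gradient is 2 * theta.
theta = np.array([1.0, -2.0, 0.5])
print(gradient_check_n_sketch(lambda t: np.sum(t ** 2), theta, 2 * theta))
```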
I don't think it is the same problem, @eitan. The value of difference is incorrect (the expected output is shown in the notebook). Double-check your calculations first.
Hi, I have the same problem where the hidden test is failing for gradient_check_n.
I have double-checked the implementation multiple times and it looks reasonable. Maybe something is wrong on the hidden test side?
I have the same problem too; I am getting the same error message:
assert not(type(difference) == np.ndarray), "You are not using np.linalg.norm for numerator or denominator"
but I am indeed using np.linalg.norm.
I have double- and triple-checked and cannot see any issues.
EDIT TO CORRECT MYSELF: ah, the issue was right there in the error message. I was using lowercase x and y, not uppercase X and Y. I had copied them from my previous exercise! All good now.
@Bryanby I managed to fix it too. A hint that may help other learners: pay attention to the difference between lowercase x, y and uppercase X, Y. Good luck with the rest of the course!
@nramon I have the same problem as the original issue from @yes333. I followed all the suggested solutions and have no problem with any of the things specified in the thread so far.
Exercise 3: # GRADED FUNCTION: gradient_check
Your backward propagation works perfectly fine! difference = 2.919335883291695e-10
All tests passed.
Exercise 4: # GRADED FUNCTION: gradient_check_n
Your backward propagation works perfectly fine! difference = 1.1890913024229996e-07 (After correcting the small error in backward_propagation_n)
After getting all the desired outputs and submitting, strangely I'm getting the grader output below with a 60/100 score:
[ValidateApp | INFO] Validating '/home/jovyan/work/submitted/courseraLearner/W1A3/Gradient_Checking.ipynb'
[ValidateApp | INFO] Executing notebook with kernel: python3
Tests failed on 1 cell(s)! These tests could be hidden. Please check your submission.
I went through similar topics and tried all the suggestions by @nramon and @paulinpaloalto, as well as the issues from @Stuti and @blackfeather, but no luck with a successful submission.
Interesting! The difference value you show is exactly what I got after fixing the "fake" bugs that they put in the back prop logic. Maybe the problem is not in the gradient_check_n code, which is the "hard" part. Maybe it's in one of the previous sections. It's unfortunate that the grader can't give more specific feedback about this. Believe me, we've complained to the course staff about this, but apparently it's a limitation of the Coursera grading platform. Sigh.
Check carefully your "one dimensional" gradient check code. Note that they don't give you an "expected value" to compare against for that one. Here's what I get with my implementation that passes the grader:
Your backward propagation works perfectly fine! difference = 2.919335883291695e-10
All tests passed.
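For reference, the one-dimensional check boils down to something like the following sketch. The toy cost J(theta) = theta * x and all the names here are illustrative assumptions, not the notebook's code.

```python
import numpy as np

def gradient_check_1d_sketch(x, theta, epsilon=1e-7):
    # Numerical estimate of dJ/dtheta for the toy cost J(theta) = theta * x,
    # using a centered difference.
    J_plus = (theta + epsilon) * x
    J_minus = (theta - epsilon) * x
    gradapprox = (J_plus - J_minus) / (2 * epsilon)

    # Analytical derivative of J(theta) = theta * x with respect to theta.
    grad = x

    # Relative difference; on scalars np.linalg.norm is just the absolute value.
    numerator = np.linalg.norm(grad - gradapprox)
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
    return numerator / denominator


print(gradient_check_1d_sketch(x=3.0, theta=4.0))  # a very small value, close to 0
```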
Thanks to @paulinpaloalto, who rightly pointed out the bug. I had oddly used keepdims=True on the np.linalg.norm calls, followed by .item(). Somehow the grader was not throwing an error for this in the hidden test cases. Once I removed keepdims from the np.linalg.norm calls, it resolved the issue.
Which keepdims=True did you remove? I had removed all keepdims=True from the np.linalg.norm calls (and also tested that on gradient_check_n alone, which didn't work). I don't have .item() in my code, so I'm not sure which np.linalg.norm you are referring to in your response. Thank you.
np.linalg.norm is only used in the computation of the difference values, which is the final step in both gradient_check and gradient_check_n. The only instances of the use of keepdims as an argument to np.sum in this notebook are in the given code for back propagation.
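To make the keepdims point concrete, here is a small illustration (my own example, not the notebook's code) of why passing keepdims=True to np.linalg.norm turns the result into an array, which is exactly what the assert quoted earlier rejects:

```python
import numpy as np

vec = np.array([3.0, 4.0])

# Default call: np.linalg.norm returns a plain scalar (np.float64),
# so a difference built from it passes the notebook's type check.
print(type(np.linalg.norm(vec)))                 # <class 'numpy.float64'>

# With keepdims=True the reduced dimension is kept, so the result is an
# np.ndarray of shape (1,) -- this is what trips
# assert not(type(difference) == np.ndarray).
print(type(np.linalg.norm(vec, keepdims=True)))  # <class 'numpy.ndarray'>
```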
What is the indication of failure that you are seeing?
Update: I found your other post and replied over there.