Gradient Checking: question about a failed code check on submission

I overlooked the two assignments of the first week and tried to finish them after the deadline. I am quite sure the code is correct, since its test outputs are as expected. However, the submission did not pass, and there is no information about which code cell is failing. Is it possible to get a more detailed code check result?

Sanyu

Hi, @yes333.

Could you check if @paulinpaloalto’s suggestion fixes your problem?

2 Likes

Thanks, it works with np.linalg.norm.

Hello, @nramon,

I have the exact same issue and I have also checked Paul’s suggestion, but it’s not working, as I have already used “np.linalg.norm” in my “gradient_check_n” function. Please help me out.

I have the same problem:
There is a mistake in the backward propagation! difference = 0.1580457629638873
This is on step #4. I implemented theta_plus/theta_minus and the difference as defined, but the value is still very odd.

I’m posting a short summary of the issue, @Stuti, in case it helps others with the same problem.

There is no need to call backward_propagation from gradient_check_n, since you are already given the gradients. However, I’m not sure why the hidden test was failing, since the calculation was technically correct.
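
To illustrate the structure, here is a rough sketch with placeholder names (not the notebook’s exact signature): the analytic gradients arrive as an argument and are only used in the final comparison, so nothing inside the function needs to call backward_propagation.

```python
import numpy as np

def gradient_check_n_sketch(theta, grad, cost_fn, epsilon=1e-7):
    # Sketch only. theta: parameter vector of shape (n, 1).
    # grad: analytic gradient of shape (n, 1), already computed by back
    # propagation and passed in. cost_fn: maps a parameter vector to a scalar cost.
    n = theta.shape[0]
    gradapprox = np.zeros((n, 1))

    for i in range(n):
        # Perturb one parameter at a time, in both directions
        theta_plus = np.copy(theta)
        theta_plus[i] += epsilon
        theta_minus = np.copy(theta)
        theta_minus[i] -= epsilon
        gradapprox[i] = (cost_fn(theta_plus) - cost_fn(theta_minus)) / (2.0 * epsilon)

    # The gradients passed in are used only here, for the comparison
    numerator = np.linalg.norm(grad - gradapprox)
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
    return numerator / denominator
```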

1 Like

I don’t think it is the same problem, @eitan. The value of difference is incorrect (the expected output is shown in the notebook). Double check your calculations first.
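
For anyone debugging this, the final step should reduce to something like the snippet below (the grad and gradapprox values here are made up, just to show the shape of the computation):

```python
import numpy as np

# Made-up stand-ins for the analytic gradient (grad) and the
# numerical approximation (gradapprox) computed in the exercise
grad = np.array([[0.3], [0.1], [-0.2]])
gradapprox = np.array([[0.30000001], [0.10000001], [-0.20000001]])

numerator = np.linalg.norm(grad - gradapprox)                    # ||grad - gradapprox||
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)  # ||grad|| + ||gradapprox||
difference = numerator / denominator

print(type(difference), difference)  # np.float64, a very small number
```

If the value printed by the notebook is far from the expected output, the bug is usually in how theta_plus, theta_minus, or gradapprox were computed rather than in this last step.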

1 Like

Hi, I have the same problem where the hidden test is failing for gradient_check_n.
I have double-checked the implementation multiple times and it looks reasonable. Could something be wrong on the hidden test side?

I have a similar problem. I checked the type of difference and it is np.float64, but I still get the assertion error. I’d appreciate any input.

I have the same problem too; I am getting the same error message:

assert not(type(difference) == np.ndarray), "You are not using np.linalg.norm for numerator or denominator"

but I am indeed using np.linalg.norm.

I have double- and triple-checked and cannot see any issues.

EDIT TO CORRECT MYSELF: Ah, the issue was right there in the error message: I was using lowercase x and y, not uppercase X and Y. I had copied them from my previous exercise! All good now.

2 Likes

In @bhuviram’s case, the error was caused by a small implementation mistake that could be fixed by double-checking the code.

1 Like

@Bryanby managed to fix it too. A hint that may help other learners: pay attention to the difference between l2 and l2_squared. Good luck with the rest of the course! :slight_smile:
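
To make that hint concrete, here is a small illustration (the variable names are just for this example):

```python
import numpy as np

v = np.array([3.0, 4.0])

l2 = np.linalg.norm(v)       # sqrt(3**2 + 4**2) = 5.0, the L2 norm the formula asks for
l2_squared = np.sum(v ** 2)  # 3**2 + 4**2 = 25.0, the squared L2 norm

print(l2, l2_squared)  # 5.0 25.0
```

Using the squared norm in the numerator or denominator changes the value of difference, so the comparison against the expected output fails even though the code runs without errors.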

1 Like

I still have the problem, please help

Fixed for @blackfeather. I’m posting a link to the cause of the problem in case it helps others.

Good luck with the rest of the course, @blackfeather :slight_smile:

1 Like

@nramon I have the same problem as the original issue from @yes333. I followed all the suggested solutions and have no problem with any of the things mentioned in the thread so far.

My exercise-wise outputs are as follows:

  1. Exercise 1: # GRADED FUNCTION: forward_propagation
    J = 8
    All tests passed.

  2. Exercise 2: # GRADED FUNCTION: backward_propagation
    dtheta = 2
    All tests passed.

  3. Exercise 3: # GRADED FUNCTION: gradient_check
    Your backward propagation works perfectly fine! difference = 2.919335883291695e-10
    All tests passed.

  4. Exercise 4: # GRADED FUNCTION: gradient_check_n
    Your backward propagation works perfectly fine! difference = 1.1890913024229996e-07 (After correcting the small error in backward_propagation_n)

After getting all the desired outputs and submitting, strangely I’m getting the grader output below with a score of 60/100:

[ValidateApp | INFO] Validating '/home/jovyan/work/submitted/courseraLearner/W1A3/Gradient_Checking.ipynb'
[ValidateApp | INFO] Executing notebook with kernel: python3
Tests failed on 1 cell(s)! These tests could be hidden. Please check your submission.

I went through similar topics and tried all the suggestions by @nramon and @paulinpaloalto, as well as the fixes for @Stuti’s and @blackfeather’s issues, but no luck with a successful submission.

Any help for me will be appreciated :slight_smile:

Hi, @abhinomega.

Interesting! The difference value you show is exactly what I got after fixing the “fake” bugs that they put in the back prop logic. Maybe the problem is not in the gradient_check_n code, which is the “hard” part. Maybe it’s in one of the previous sections. It’s unfortunate that the grader can’t give more specific feedback about this. Believe me, we’ve complained to the course staff about this, but apparently it’s a limitation of the Coursera grading platform. Sigh.

Carefully check your “one dimensional” gradient check code. Note that they don’t give you an “expected value” to compare against for that one. Here’s what I get with my implementation that passes the grader:

Your backward propagation works perfectly fine! difference = 2.919335883291695e-10
 All tests passed.

Do you get exactly the same for that one as well?
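
For anyone who wants to sanity-check the 1-D exercise outside the notebook, here is a minimal sketch of two-sided gradient checking, assuming the toy cost J(theta) = theta * x used in that section (the function names are mine, not the notebook’s):

```python
import numpy as np

def forward(x, theta):
    # Toy cost from the 1-D exercise: J(theta) = theta * x
    return theta * x

def backward(x, theta):
    # Analytic derivative dJ/dtheta = x
    return x

def gradient_check_1d(x, theta, epsilon=1e-7):
    # Two-sided numerical approximation of dJ/dtheta
    J_plus = forward(x, theta + epsilon)
    J_minus = forward(x, theta - epsilon)
    gradapprox = (J_plus - J_minus) / (2.0 * epsilon)

    grad = backward(x, theta)

    # Relative difference between the analytic and numerical gradients
    numerator = np.linalg.norm(grad - gradapprox)
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
    return numerator / denominator

print(gradient_check_1d(x=2.0, theta=4.0))  # prints a very small number
```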

1 Like

Hi @paulinpaloalto ,

Yes, I am getting exactly the same value as I specified before.

Your backward propagation works perfectly fine! difference = 2.919335883291695e-10

I even downloaded the Jupyter notebook to my local machine and also ran the commented-out gradient_check_test(gradient_check) call.

After uncommenting it, I got the output below:

Your backward propagation works perfectly fine! difference = 2.919335883291695e-10
 All tests passed.

Thanks to @paulinpaloalto :slight_smile: , who rightly pointed out the bug. I had oddly used keepdims=True on the np.linalg.norm calls, followed by .item(). Somehow the grader was not throwing an error for this in the hidden test cases. Once I removed keepdims from the np.linalg.norm calls, it resolved the issue :slight_smile:
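
In case it saves someone else time, this is what that bug looks like in isolation with plain NumPy (nothing assignment-specific here):

```python
import numpy as np

v = np.random.rand(5, 1)

good = np.linalg.norm(v)                # scalar (np.float64)
bad = np.linalg.norm(v, keepdims=True)  # ndarray of shape (1, 1)

print(type(good), type(bad), bad.shape)

# A difference built from the keepdims version is still an ndarray,
# which is exactly what the notebook's check rejects:
# assert not(type(difference) == np.ndarray), ...
```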

2 Likes

Which part of the keepdims=True did you remove? I had removed all keepdims=True from my np.linalg.norm calls (I also tested only on gradient_check_n and it didn’t work). I don’t have .item() in my code, so I’m not sure which np.linalg.norm you were referring to in your response. Thank you.

np.linalg.norm is only used in the computation of the difference values, which is the final step in both gradient_check and gradient_check_n. The only instances of the use of keepdims as an argument to np.sum in this notebook are in the given code for back propagation.

What is the indication of failure that you are seeing?

Update: I found your other post and replied over there.