Gradient Checking (W1A3) - Grader stuck at 60%, local validation passes (please clear cache)

Hi mentors,

I think the autograder is using an old cached copy of my notebook.
Here are the details you might need:

Course: Deep Learning Specialization, Course 2, Week 1
Assignment: Gradient Checking (W1A3)
Latest submission time shown on Coursera: 30 Jun 2025, 12:54 BST
SHA-1 of the notebook currently in /submitted/courseraLearner/W1A3:
c0e360ebf02c74267a394a8b5d06191031f1d5b6
Local validation result:
nbgrader validate /submitted/courseraLearner/W1A3/Gradient_Checking.ipynb
Success! Your notebook passes all the tests.
UTC time when I calculated the SHA-1: Mon 30 Jun 2025 12:54:47 UTC

Even though the notebook validates locally, the Coursera grader still
returns “Tests failed on 1 cell(s)” and a score of 60%.

Could you please clear the grader cache (or re-queue) for my latest
submission so it re-evaluates the notebook with the SHA-1 above?
I’m confident it will pass the hidden tests once the cache is refreshed.

Thank you very much!

We cannot manipulate your environment or even look directly at your notebooks. My guess is that the grader uses different test cases than the ones in the notebook. So if you pass one test but fail another, the thing to look for is ways in which your code is not general: something in it is specific to the particular test case in the notebook. Referencing global variables instead of the function's formal parameters, or hard-coding dimensions, are typical ways of introducing that kind of problem.
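
To make that concrete, here is a minimal sketch (hypothetical names and shapes, not anyone's actual submission) of the difference between code tied to one test case and code that is general:

import numpy as np

X_test = np.random.randn(4, 3)   # imagine this is the notebook's test input

def layer_output_not_general(W):
    # Relies on the global X_test and hard-codes the shape (4, 3),
    # so it only works for this one test case and breaks on any other input.
    return np.dot(W, X_test.reshape(4, 3))

def layer_output_general(W, X):
    # Uses only its formal parameters and whatever shape X actually has.
    return np.dot(W, X)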

If that’s not enough of a hint, we aren’t allowed to share code on a public thread, but there are private ways to do that. Please check your DMs for a message from me.

To close the loop on the public thread: the problem is that the grader checks the type of the difference value, but the tests in the notebook don't. If you cast the final difference value to type float, you fail the grader even if the value is actually correct. The solution is simple: don't cast the value at all; you then end up with numpy.float64 and the grader is pleased.
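
In case it helps anyone else, here is a minimal sketch (made-up numbers, not the assignment's real test values) of how the type ends up as numpy.float64 and how a cast changes it:

import numpy as np

grad = np.array([0.1, 0.2, 0.3])
gradapprox = np.array([0.1, 0.2, 0.3000001])

# The usual relative-difference formula from gradient checking.
numerator = np.linalg.norm(grad - gradapprox)
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
difference = numerator / denominator

print(type(difference))         # <class 'numpy.float64'> -- what the grader expects
print(type(float(difference)))  # <class 'float'> -- same value, but fails the grader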

Wow. That’s remarkably unhelpful behavior by the grader.

Indeed. Compounded by the fact that it can’t even be bothered to tell us which function failed the test, let alone what specifically it was unhappy with about the output.

I’ll consider filing a bug about this, although in the past they have frequently been hesitant to modify the grader logic.

Actually it would be worth filing a bug. Even if it’s too scary to change the grader, there’s a workaround: add a test in the notebook that asserts the type is whatever the grader will accept for that value. We know that numpy.float64 works, but perhaps there are others that will also work. Sure it would be a bit of a hack, but better than leaving us with no alternative but to look at the code to figure out what is wrong.

1 Like

I think it best to add tests to the notebook, so that the grading process doesn’t add its own undocumented requirements.

Thanks both, just resubmitted and got 100%!

2 Likes

Just filed a bug suggesting that they add this additional assertion in the test cell for gradient_check_n:

assert type(difference) == np.float64, "The value must be a 64 bit floating point scalar."

BTW I confirmed that the grader also fails if you cast the value to numpy.float32, so the grader’s check must be equivalent to that assertion.
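
For anyone who wants to double-check, a quick scratch-cell experiment showing how that assertion treats each type:

import numpy as np

for value in (np.float64(1e-7), np.float32(1e-7), float(1e-7)):
    print(type(value).__name__, type(value) == np.float64)
# prints: float64 True / float32 False / float False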

3 Likes