I think the autograder is using an old cached copy of my notebook.
Here are the details you might need:
Course: Deep Learning Specialization, Course 2, Week 1
Assignment: Gradient Checking (W1A3)
Latest submission time shown on Coursera: 30 Jun 2025, 12:54 BST
SHA-1 of the notebook currently in /submitted/courseraLearner/W1A3:
c0e360ebf02c74267a394a8b5d06191031f1d5b6
Local validation result: nbgrader validate /submitted/courseraLearner/W1A3/Gradient_Checking.ipynb
→ Success! Your notebook passes all the tests.
UTC time when I calculated the SHA-1: Mon 30 Jun 2025 12:54:47 UTC
Even though the notebook validates locally, the Coursera grader still
returns “Tests failed on 1 cell(s)” and a score of 60%.
Could you please clear the grader cache (or re-queue) for my latest
submission so it re-evaluates the notebook with the SHA-1 above?
I’m confident it will pass the hidden tests once the cache is refreshed.
We cannot manipulate your environment or even look directly at your notebooks. My guess is that the grader uses different test cases than the ones in the notebook. So if you pass one test but fail a different one, the thing to look for is ways in which your code is not general: you have done something that makes the code specific to the particular test case. Referencing global variables instead of the formal parameters of the function, or hard-coding dimensions, are typical ways of introducing that kind of problem.
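For concreteness, here is a hypothetical sketch (not the assignment code) of those two failure modes: referencing a global instead of the formal parameter, and hard-coding a dimension that only matches the public test case.

```python
import numpy as np

X_test = np.random.randn(4, 3)   # global test data defined elsewhere in the notebook

def forward_bad(x, theta):
    # Not general: ignores the formal parameter x and uses the global X_test,
    # and hard-codes the dimension 4, which only matches the public test case.
    return np.dot(theta, X_test.reshape(4, -1))

def forward_good(x, theta):
    # General: everything comes from the arguments, so it also works for
    # whatever shapes the grader's hidden test passes in.
    return np.dot(theta, x)
```

Both versions produce the same answer on the notebook's own test data, but only the second one survives a hidden test that passes different inputs.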
If that’s not enough of a hint, we aren’t allowed to share code on a public thread, but there are private ways to do that. Please check your DMs for a message from me.
To close the loop on the public thread: the problem is that the grader checks the type of the difference value, but the tests in the notebook don’t. If you cast the final difference value to the Python type float, you fail the grader even if the value itself is correct. The solution is simple: don’t cast the value at all; it comes out as numpy.float64 and the grader is pleased.
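A minimal sketch of what that looks like in the final step (variable names are illustrative, with dummy gradients standing in for the real ones):

```python
import numpy as np

# Dummy gradients standing in for the real grad / gradapprox vectors
grad = np.array([0.5, -1.2, 0.3])
gradapprox = np.array([0.5001, -1.2001, 0.2999])

numerator = np.linalg.norm(grad - gradapprox)                    # numpy.float64
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)  # numpy.float64

difference = numerator / denominator          # stays numpy.float64: the grader accepts this
# difference = float(numerator / denominator) # casting to Python float fails the grader
# difference = np.float64(numerator / denominator)  # explicit conversion also works

print(type(difference))  # <class 'numpy.float64'>
```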
Indeed. And it’s compounded by the fact that the grader can’t be bothered to tell us which function failed the test, let alone what specifically it was unhappy with about the output.
Actually, it would be worth filing a bug. Even if it’s too scary to change the grader, there’s a workaround: add a test in the notebook that asserts the type is whatever the grader will accept for that value. We know that numpy.float64 works, but perhaps there are others that would also be accepted. Sure, it would be a bit of a hack, but better than leaving us with no alternative but to look at the code to figure out what is wrong.
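A rough sketch of what that notebook-side assertion could look like (hypothetical; the real test cell would use the value returned by the student’s gradient_check function):

```python
import numpy as np

difference = np.float64(1e-7)   # stand-in for the value returned by gradient_check

# Fail fast in the notebook, with a clear message, instead of at grading time
assert isinstance(difference, np.float64), (
    f"difference must be numpy.float64, got {type(difference).__name__}"
)
```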
I had the same issue, and converting difference using np.float64() did indeed solve it. If I may make a suggestion: maybe a hint could be included in the instructions of the notebook. If the grader’s logic can’t be changed, then I think it would save a lot of time and frustration on the students’ side to just tell them to make sure the returned difference value is of type numpy.float64. In any case, I was grateful to find help here, thank you!
Yes, as I mentioned earlier in this thread, I’ve already filed an enhancement request to add an assertion in the notebook that detects and prevents this error. Sorry, but they haven’t gotten around to implementing it yet (it was filed within the last two weeks).