I’m having a problem where the notebook tests all pass, but the predictions later in the notebook get accuracy scores different from what is expected. For example, below are my accuracy scores for mini-batch gradient descent with plain, momentum, and Adam:
| Optimization method | Accuracy | Cost shape |
| --- | --- | --- |
| Gradient descent | >71% | smooth |
| Momentum | >71% | smooth |
| Adam | >94% | smoother |
Also the grader indicates that a test has failed:
Score: 88 of 100
[ValidateApp | INFO] Validating '/home/jovyan/work/submitted/courseraLearner/W2A1/Optimization_methods.ipynb'
[ValidateApp | INFO] Executing notebook with kernel: python3
Tests failed on 1 cell(s)! These tests could be hidden. Please check your submission.
I have checked all indices of the mini-batches and verified that all equations are correct. Is there a way to find out what the hidden test is testing?
Any help would be appreciated.
The hidden test cases are hidden on purpose, as it turns out. You have to complete the assignments in such a way that your code is generic and can pass any relevant hidden test case, not just the public ones. If learners had access to the hidden test cases, they could reverse-engineer the solution from them.
The most common cause of this issue is hard-coding variables, so please make sure you haven’t hard-coded anything. If you are still unable to figure out the issue, please DM me your notebook as an attachment.
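To illustrate what I mean by hard-coding (a hypothetical sketch, not the actual assignment code): an update function that spells out each layer by name will pass a public test that happens to use that exact architecture, but fail a hidden test with a different number of layers.

```python
import numpy as np

def update_parameters_hardcoded(parameters, grads, learning_rate):
    # BAD: assumes exactly two layers, so it breaks on any other network
    parameters["W1"] -= learning_rate * grads["dW1"]
    parameters["b1"] -= learning_rate * grads["db1"]
    parameters["W2"] -= learning_rate * grads["dW2"]
    parameters["b2"] -= learning_rate * grads["db2"]
    return parameters

def update_parameters_generic(parameters, grads, learning_rate):
    # GOOD: derives the layer count from the parameters themselves
    L = len(parameters) // 2
    for l in range(1, L + 1):
        parameters["W" + str(l)] -= learning_rate * grads["dW" + str(l)]
        parameters["b" + str(l)] -= learning_rate * grads["db" + str(l)]
    return parameters
```

The generic version works for any depth, which is exactly the property a hidden test is likely to probe.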
Thanks for the response. I can see that my question was ambiguous. I wasn’t asking to see the test code but, rather, whether there could be more instructive error messages that, say, name the cell that is failing.
Anyway, your suggestion was helpful in solving the problem. I had coded an index as `0` when it should have been `:`, so thank you.
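For anyone hitting the same symptom, here is a minimal sketch of that kind of slicing bug (hypothetical array and variable names, not the assignment's code). Indexing a row with `0` instead of `:` silently drops all but one feature, so the code still runs and the public tests can still pass, but the trained model behaves differently:

```python
import numpy as np

X = np.arange(12).reshape(3, 4)   # 3 features, 4 examples
batch_size = 2
k = 0  # which mini-batch to take

# Wrong: `0` selects only the first feature row of the batch
wrong = X[0, k * batch_size:(k + 1) * batch_size]   # shape (2,)

# Right: `:` keeps every feature row, slicing only the example axis
right = X[:, k * batch_size:(k + 1) * batch_size]   # shape (3, 2)
```

Checking the shape of each mini-batch against the expected `(n_features, batch_size)` is a quick way to catch this class of error.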
I am glad you were able to solve the error yourself; after all, there’s nothing better than that.
As for more instructive error messages: there are effectively infinitely many mistakes a learner could make, and giving a specific message for each would require a test case tailored to every possible error, which, as you can imagine, is not feasible. The dev team spends a great deal of time designing these test cases so that they catch as many errors as possible, but there is only so much anyone can do, so all the remaining errors have to get a generic message. I hope you understand the constraint here.