Thank you. I’ll ping in 24 hours if there’s no update from you.
@gent.spah ping. Could you extend my subscription for free? Thanks.
Although they have told me previously that I have golden hands, I am not a miracle worker.
Let me know once the grader is fixed.
Can you elaborate? The refresh didn’t work - you couldn’t get a baseline notebook? Or you got the baseline notebook, reimplemented your code into it, and it passed the local unit tests but still failed the grader? Was the utils file also refreshed? I might sign up for this class just so I can see the code and try to understand what Python voodoo is at play here. Edge cases come along so rarely (by definition).
He is able to pass the assignment; it’s just that one particular part (1/10) isn’t going through.
@ai_curious There’s a bug with the grader where it marks a passing test as failed.
The tests pass in the Coursera environment. Guess we’ll have to wait and see the issue get resolved.
Cheers.
Hi Balaji! Sorry for the delayed response. I just sent you a message. We may need to replicate the issue to see exactly where the grader is failing. Kindly check your inbox. Thank you!
FYI, I was able to replicate this error just now…
This occurred after successfully running the cell in the notebook:
Generally, the test cases in the notebooks should be seen as necessary, but not sufficient, for passing the test cases in the grader. For one, you have access to the code running in the notebook and can just hard-code the ‘right’ answer (you may scoff, but people do it). You could also have gotten the answer accidentally, such as by writing area = width * width, which only works when the test object passed into the function is a square - if the autograder used a rectangle, it would fail. Or the notebook code might rely on a variable with global scope that doesn’t exist in the grader runtime, since the grader loads and executes Python cells dynamically and doesn’t have the same namespace as a notebook run sequentially.
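To make the “accidentally correct” case concrete, here is a minimal hypothetical illustration - the function name and test values are made up, not taken from the assignment:

```python
# Hypothetical example of code that passes a notebook test by accident.
def area(width, height):
    # Bug: ignores height, so it only gives the right answer for squares.
    return width * width

# Notebook-style test with a square: passes despite the bug.
print(area(4, 4))  # 16, which happens to match the correct answer

# Grader-style test with a rectangle: exposes the bug.
print(area(4, 5))  # 16, but the correct answer is 20
```

This is exactly why a grader that uses different test inputs can fail code that looks fine locally.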
However, after playing with the code myself, I no longer believe any of these situations is occurring here. Instead, it seems more likely to be related to the timeout message:
[ValidateApp | ERROR] Timeout waiting for execute reply (30s).
[ValidateApp | ERROR] Interrupting kernel
Looking forward to what the QA people find out.
Interesting indeed, let’s see what they find out.
Hi all! Nomen (ai_curious) is right! The grader backend for this course was updated recently, and it now enforces a maximum time for a test cell to execute. This wasn’t the case before, which is why the same solution used to get 10/10. We have now reduced the default epochs from 180 to 40, which should let the last test case pass. You can make the same change in your notebooks and resubmit the assignment.
Thank you again, everyone, for pointing this out! We’ll check the other assignments to see if timeouts are occurring there as well.
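For anyone curious why cutting epochs fixes a grader timeout: training time scales roughly linearly with the number of epochs, so fewer epochs means the cell finishes well under the grader’s execution limit. A toy sketch (the loop and timings are illustrative stand-ins, not the actual assignment code):

```python
import time

# Toy stand-in for a training loop: each "epoch" just sleeps briefly.
def train(epochs, seconds_per_epoch=0.001):
    start = time.time()
    for _ in range(epochs):
        time.sleep(seconds_per_epoch)  # pretend per-epoch work
    return time.time() - start

GRADER_LIMIT = 30  # seconds, matching the ValidateApp timeout in the log

# Wall time grows with epochs, so dropping 180 -> 40 shrinks the
# cell's runtime by roughly 4.5x, keeping it under the grader's limit.
print(train(180) > train(40))  # True
```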
Thanks @gent.spah and @chris.favila for the research and resolution.
Thank you for your input @ai_curious
@chris.favila Just submitted the assignment. The same failure continues to exist.
@chris.favila Please wait. I set the epochs to 180 this time. Will update soon.
@chris.favila Setting epochs to 40 grades the assignment properly. Thank you, and sorry about the mistake.
No prob, Balaji! Glad it’s now working!