For Exercise 2, check whether the code for your user prompt is stated correctly: it is causing an HTTP 400 (Bad Request) error, which could be related to the content type used in the request.
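Since I can't see your notebook, here is only a generic sketch (the endpoint, key, and model names are placeholders, not the course's actual code) of a well-formed JSON request. A missing or wrong Content-Type header, or a body that isn't valid JSON, is a common cause of a 400:

```python
import json
import requests

# Minimal sketch with placeholder endpoint, key, and model names.
# An HTTP 400 (Bad Request) usually means the server could not parse the
# request: a missing/wrong Content-Type header or a malformed JSON body
# are common causes.
url = "https://api.example.com/v1/chat/completions"   # placeholder endpoint
headers = {
    "Authorization": "Bearer YOUR_API_KEY",           # placeholder key
    "Content-Type": "application/json",               # required when sending a raw JSON body
}
payload = {
    "model": "model-name",                            # placeholder model
    "messages": [{"role": "user", "content": "your user prompt here"}],
}
response = requests.post(url, headers=headers, data=json.dumps(payload))
response.raise_for_status()   # raises on 4xx/5xx so the failure is explicit
print(response.json())
```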
Another check would be to clear your cache and browsing history and redo the assignment, if you are confident that your code is correct.
Also, for Exercise 3, the autograder feedback is pointing you to recheck your code.
- Each exercise's unit test passed, with no problems.
- The entire module executes correctly, with output as expected.
- I rebuilt the entire notebook.
- I rebuilt the notebook in an incognito window.
At this point, I've invested more time than I can justify on this. If the unit test for each exercise passes and the entire module generates output as expected, then I suspect the grading algorithm is "over-sensitive" to syntax and/or logic differences, i.e., code that, while correct, differs from what DL is expecting.
That’s not necessarily the case. The grader always uses entirely different tests than the ones provided in the lab. The built-in lab tests do not check for every possible condition.
At the risk of repeating myself, and it seems I need to: if the code I wrote within the START CODE / END CODE boundaries 1) satisfies the unit tests, 2) executes in its entirety without error, and 3) generates the expected output, then the issue is the autograder.
Unlike subjective assessments (e.g., of code quality), discrete autograders are often over-sensitive to format variations and the like. Early generations of autograders (e.g., college course autograders) were plagued by this problem.
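To illustrate what I mean, here is a hypothetical example (not from this course) of a format-over-sensitive check: an exact string comparison rejects output that differs only in capitalization and trailing-zero formatting, while a tolerant grader would compare the parsed values.

```python
import re

# Hypothetical illustration of format over-sensitivity: an exact string
# comparison rejects output that differs only in capitalization and
# trailing-zero formatting.
expected = "accuracy: 0.95"
actual = "Accuracy: 0.950"
print(expected == actual)   # False, even though the results are equivalent

# A more tolerant check compares the parsed numeric value instead.
def extract_value(text):
    """Pull the first floating-point number out of a string."""
    return float(re.search(r"[-+]?\d*\.?\d+", text).group())

print(abs(extract_value(expected) - extract_value(actual)) < 1e-9)   # True
```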
I think we’ve taken this topic as far as we can for now.
Kindly be respectful with learners, and especially with mentors, in the Discourse forum. The point of explaining why the grader was failing is to encourage you to get better at debugging, nothing else.
So please be kind, especially when you reply to a mentor who is helping you.
Likewise, I would ask that you respect my desire to end a thread when the discussion is no longer productive. I have subject-matter expertise in autograders, and when I explain (multiple times) that the autograder is over-sensitive in its grading, I would appreciate it if you would respect my experience in this domain.
Perhaps the autograders here are not as good as the ones you have worked with in your prior experience. In most of the courses here that I am familiar with (I don't actually know this particular course), it is frequently possible to pass the local notebook test cases with code that is not general: for example, referencing global variables directly instead of the arguments passed to your functions, or hard-coding assumptions about the dimensions of the inputs. A sketch of that failure mode is below.
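Here is a hypothetical sketch (none of these names come from the course) of how code can pass a notebook's local test while failing a grader that calls the same function with different data:

```python
import numpy as np

# The notebook happens to define X globally and to test scale_rows(X),
# so this buggy version passes the local test anyway.
X = np.array([[1.0, 2.0], [3.0, 4.0]])

def scale_rows(data):
    # BUG: hard-codes the 2x2 shape and reads the global X instead of
    # the `data` argument. The grader's hidden tests call this function
    # with different inputs, so it fails there despite passing locally.
    result = np.zeros((2, 2))
    for i in range(2):
        result[i] = X[i] / X[i].max()
    return result

def scale_rows_fixed(data):
    # General version: uses only its argument and works for any shape.
    data = np.asarray(data, dtype=float)
    return data / data.max(axis=1, keepdims=True)
```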
Just passing the tests in the notebook is not sufficient evidence that your code is fully correct and will pass the autograder. If you care about getting the full score from the grader on this assignment, you need to get a mentor for this course to look at your notebook.