C1M2_Assignment - Errors on Submit

Completed all the code for:

  • generate_draft
  • reflect_on_draft
  • revise_draft

  • execute code
  • no errors
  • submit

Reported Errors:
Exercise 02
Failed test case: reflect_on_draft raised LLMError: An error occurred: Error code: 400

Exercise 03
Failed test case: revise_draft raised AttributeError: ‘NoneType’ object has no attribute ‘choices’.

Troubleshooting

  • reloaded kernel
  • walked through the code in Kilo
  • completely re-entered code

Can’t get this module certified.

I’m getting the feeling the auto-grader is buggy.


I’m not a mentor for this course, so it’s just a general question:

Did you complete the entire assignment before you sent it for grading?

@IMTanuki

For exercise 2, check whether your user prompt is stated correctly. It is causing an HTTP 400 (bad request) error, which could be related to the content-type statement used in the prompt.
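
As a hedged illustration (not the official solution; the exact client, helper names, and model in the lab may differ), a chat request that avoids a 400 generally has every message as a dict with a "role" and a plain-string "content":

```python
# Hypothetical sketch assuming an OpenAI-style chat client; the assignment's
# own wrapper and model name may be different.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def reflect_on_draft(draft: str) -> str:
    """Ask the model to critique a draft (illustrative prompt wording only)."""
    messages = [
        {"role": "system", "content": "You are a careful writing reviewer."},
        # The user content must be a single string; passing None, a dict, or an
        # un-rendered template object here is a common cause of HTTP 400.
        {"role": "user", "content": f"Reflect on this draft and list improvements:\n\n{draft}"},
    ]
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    return response.choices[0].message.content
```

A malformed messages list (missing keys, non-string content, or an invalid model name) is what most often produces that 400.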

Another thing to try would be to clear your cache and browsing history and redo the assignment, if you are confident the code was written correctly.

Also, for exercise 3, the autograder feedback is pointing you to recheck your code.
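
On the exercise 3 error specifically: an AttributeError saying ‘NoneType’ object has no attribute ‘choices’ usually means the LLM call inside revise_draft returned None (for example, an exception was caught and nothing was returned), and the code still indexes into .choices. A minimal sketch of that failure pattern and one way to guard against it, using hypothetical helper names rather than the lab’s actual code:

```python
# Illustrative only; the lab's actual helpers and client may be named differently.
from openai import OpenAI

client = OpenAI()

def call_llm(messages):
    """Hypothetical wrapper that swallows errors and falls through to None."""
    try:
        return client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    except Exception as err:
        print(f"LLM call failed: {err}")
        # No explicit return here, so the caller receives None.

def revise_draft(draft: str, feedback: str) -> str:
    messages = [
        {"role": "user",
         "content": f"Revise this draft using the feedback.\n\nDraft:\n{draft}\n\nFeedback:\n{feedback}"},
    ]
    response = call_llm(messages)
    if response is None:
        # Without this guard the next line raises:
        # AttributeError: 'NoneType' object has no attribute 'choices'
        raise RuntimeError("LLM call returned None; inspect the request that failed.")
    return response.choices[0].message.content
```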

Not quite sure what you are suggesting, but:

  • each exercise’s unittest checked out - no problems
  • the entire module executes correctly - output as expected
  • I rebuilt the entire nb
  • I rebuilt the nb in an incognito window

At this point, I’ve invested more time than I can justify on this - if the unit test for each exercise checks out and the entire module generates output as expected, then I suspect the grading algorithm is “over-sensitive” to syntax and/or logic differences - i.e. code that, while correct, is different from what DL is expecting.

Rebuilt the entire nb? Please elaborate on this.

Rebuilt the nb in an incognito window? Please explain this.

I hope you are working in the course-provided assignment notebook.

Passing the notebook tests doesn’t always mean the autograder will accept your code.

Also, sometimes clearing the kernel, reconnecting it, and then running all the cells results in a successful submission.

If all this still results in a grading failure, then please send a screenshot of your code by personal DM for review.

Regards
DP

Rebuilt nb - just what it says - I re-entered all the code to complete the exercises.

Incognito - self-explanatory.

“I hope you are working in the course-provided assignment notebook” - no comment.

Reconnecting kernel - multiple times.

Screenshot - again, if the unit tests work and the code executes and generates expected output, the problem is the auto-grader.

That’s not necessarily the case. The grader always uses entirely different tests than the ones provided in the lab. The built-in lab tests do not check for every possible condition.

Your code must work for any situation.

This is an intro to agentic AI. We are not writing production-ready apps.

The grader should not be testing for issues that have not been covered in the course.

It doesn’t test for new issues. It tests whether your solution works correctly.

Consider that simply duplicating the expected results given in the lab would not be worthy of a passing grade.

At the risk of repeating myself, and it seems I need to: if the code I wrote within the START CODE / END CODE boundaries 1) satisfies the unit tests; 2) executes in its entirety without error; and 3) generates expected output, then the issue is the auto-grader.

Discrete auto-graders, as opposed to subjective assessments (e.g. of code quality), are often over-sensitive to format variations, etc.

Early generations of auto-graders (e.g. college course auto-graders) were plagued with this problem.

I think we’ve taken this topic as far as we can for now.

Only in that your code doesn’t pass the tests the grader uses.


@IMTanuki

Kindly be respectful with learners, and especially with mentors, on the Discourse forum. The point of explaining why the grader was failing your submission is to encourage you to get better at debugging, nothing else.

So please be kind especially when you reply to a mentor who is helping you.

Likewise, I would ask that you respect my desire to terminate a thread when the discussion is no longer productive. I have subject matter expertise in auto-graders, and when I explain (multiple times) that the auto-grader is over-sensitive in grading, I would appreciate it if you would respect my experience in this domain.

Perhaps the autograders here are not as good as the ones you have worked with in your prior experience. In most of the courses here that I am familiar with (I don’t actually know this particular course), it is frequently possible to pass the local notebook test cases with code that is not general: e.g., referencing global variables directly rather than the arguments passed to your functions, or hard-coding assumptions about the dimensions of the inputs.
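
A toy illustration of that kind of non-general code (made-up names, not taken from this assignment): the function passes a notebook test because it leans on a global, but fails as soon as a grader test passes a different argument.

```python
# Toy example of code that is "not general"; everything here is hypothetical.
draft = "Global draft text used by the notebook test."

def word_count(text: str) -> int:
    # Bug: ignores the `text` argument and reads the notebook's global instead.
    return len(draft.split())

# The local test happens to pass because it uses the same global the
# function secretly depends on:
assert word_count(draft) == 8

# A grader test with its own input exposes the problem:
print(word_count("just three words"))  # prints 8, but 3 was expected
```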

Just passing the tests in the notebook is not sufficient evidence that your code is fully correct and will pass the autograder. If you care about getting the full score from the grader on this assignment, you need to get a mentor for this course to look at your notebook.
