Assignment graded 0 even though unittest passes and output is as expected

My name is Nick and I am doing the course “Introduction to RAG Models”.

My problem is as follows…

I am doing the deeplearning.ai course Introduction to RAG Systems. I am struggling to submit the workbook for the first lab. I get graded 0% even though the ‘unittest’ checks indicate that my answers are correct. All cells in the notebook run fine. The error message is as follows:

There was a problem compiling the code from your notebook, please check that you saved before submitting. Details:f-string: unmatched ‘[’ (, line 71)

It’s difficult to track this down because only code cells have line numbers in Jupyter notebooks. I have looked through all the f-strings, but they are fine. I have also checked the downloaded notebook for syntax errors in Visual Studio, and all the syntax is fine.

Hi @NickRiches

It could be that your assignment has not been saved, and the grader is not picking up the latest version. Try a clean run and save before submitting to the grader:

Kernel → restart and clear all output

Cell → run all

Thanks. No, that makes no difference. I still get zero points, even though the unit tests pass.

Check if your notebook has any ) or ] isolated on a line by itself.

The grader copies the graded parts of your code into a different script, and the copy method doesn’t always work correctly in that case.
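To make that concrete, here is a hypothetical illustration (the variable name and values are made up, not taken from the lab):

```python
# Layout that reportedly can confuse the grader's extraction step:
# the closing ']' sits alone on its own line.
results = [
    {"key": "value"},
]

# Equivalent layout with the brackets attached to content, which the
# extraction step is said to handle reliably.
results = [{"key": "value"}]

print(results)
```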

It does. But that shouldn’t make a difference. When my function generates a list containing dictionaries and displays it at the end of the cell, the opening square bracket is on the first line and the first curly bracket is on the second line, e.g.

[
  {key: value}, 
  {key: value},
  {key: value}
]

The desired answer is as follows

[ {key: value}, 
  {key: value},
  {key: value} ]

The marking system seems to be picking up on this. But these are the same values and the same data structure, so the answer is technically correct. Moreover, placing the square brackets on their own lines just seems to be the way Jupyter outputs results. How do I get Jupyter to output the response in a format consistent with the ideal answer?
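For what it’s worth, Jupyter’s pretty-printing of a bare final expression can’t easily be restyled, but printing an explicit rendering gives you control over the layout. A minimal sketch, using a made-up stand-in for the lab’s real output:

```python
import json

# Made-up stand-in for the list of dictionaries the lab produces.
docs = [{"key": "value1"}, {"key": "value2"}, {"key": "value3"}]

# print() uses the plain repr, which stays on one line
# (so no brackets isolated on their own lines):
print(docs)

# Or render it yourself if you want explicit control over the layout:
print(json.dumps(docs, indent=2))
```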


Does anyone know how I can contact deeplearning.ai? I am perturbed that I am giving the right answer but am unable to complete the exercise.

The DL.AI support staff for this course seems to be temporarily unavailable. Also this is a new course and doesn’t have any active mentors yet.

Just trying to help here - I’ve never seen this course material personally. But the issue you are reporting is also seen in other courses.

The issue is likely due to the format of your code - not the results that your code generates.

This is because of how the grader extracts your code from the notebook file and tests it in a separate execution environment. Sometimes it makes mistakes if your code has isolated code elements (like brackets or parens) on a separate line, or with unexpected indentation.

So don’t look at what your code is outputting - look at how your code is formatted in the notebook.

Thanks. I suspected this is what is happening. I could spend ages trying to tweak the output, e.g. convert it to a string and do some string processing, but even then I doubt it would be in exactly the format required.

So I’m pretty stuck here.

It’s a shame. The course materials are very good - a notch above other courses. But it’s frustrating to be unable to get the assignments effectively graded.

(not that completion certificates are valuable in and of themselves).


Hey @NickRiches, happy to jump in here to help you troubleshoot as I’m a current mentor for RAG (and was a tester as well).

I just went through the M1 assignment notebook again on Coursera to see if I could replicate the issue you’re experiencing, but I was able to submit with the autograder successfully. What I might recommend is to refresh your workspace if you haven’t already (you should see a section in M1 titled ‘(Optional) Downloading your Notebook and Refreshing your Workspace’) and then go through the lab again, only changing what’s in between the start / end code blocks of the graded cell.

Also, do you know which of the two graded cells is giving you the most trouble? As @TMosh mentioned (and thank you for jumping in, Tom!), this may be an issue of formatting. Check how you’ve defined the ‘formatted_document’ variable in Exercise 2; that one is trickier.

Let me know if any of the above works for you.
L


Hi @NickRiches,

Thanks for your message! This issue likely arises because the same type of quotation marks is being used for both the f-string delimiters and the dictionary keys. For example:

f_string = f"This is an example, dictionary: {dict["key"]}"

While the Jupyter environment is more flexible and may still run this code, the autograder converts your notebook into a Python script for grading, and that’s where the code is breaking.

To fix this, try using double quotation marks (") for the f-string delimiters and single quotation marks (') for the dictionary keys, like this:

f_string = f"This is an example, dictionary: {dict['key']}"

(or vice versa). This change should resolve the issue and allow you to receive full marks.
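A sketch of why the notebook and the grader can disagree here, assuming the grader compiles cells with a pre-3.12 Python (PEP 701, which relaxed f-string quoting, landed in Python 3.12):

```python
import sys

# Compile an f-string that reuses its own quote character inside the
# braces. Python 3.12+ (PEP 701) accepts this; earlier versions raise
# a SyntaxError much like the grader's "f-string: unmatched '['".
source = 'f"value: {d["key"]}"'
try:
    compile(source, "<cell>", "eval")
    print("compiled fine on Python", sys.version.split()[0])
except SyntaxError as exc:
    print("SyntaxError:", exc.msg)

# Mixing quote styles compiles on every Python version:
compile("f\"value: {d['key']}\"", "<cell>", "eval")
print("mixed quotes compile everywhere")
```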

I’m currently working on updating the autograder to better detect this specific error and provide a clearer output message for debugging.

Best,

Lucas


Update: Now the grader feedback outputs the following message in this case

There was a problem compiling your code.
It looks like you used the same quotation marks for the f-string delimiter and for dictionary keys in Exercise 2. To fix this, use single quotes inside double quotes (e.g. f"{document['key']}") or vice versa. 
Details: f-string: unmatched '[' (<unknown>, line 72)


Thanks, both. I’m trying to resubmit, but the Jupyter notebooks seem to be down (I’ve tried two different browsers). I’ll try again tomorrow.

Just returned to this (after a long time away). Fixing the quotation marks (using single quotes to refer to a dictionary key inside the f-string) solved this. Many thanks.

Can someone please help me look into a similar issue? I got the error below after submitting the Module 2 programming assignment, for the third and final graded cell, which involves implementing an RRF (Reciprocal Rank Fusion) function:

“[Errno 2] No such file or directory: ‘/shared/submission/submission.ipynb’”

I’m pretty sure my function was correct and all unit tests passed, yet I somehow got 0/33 pts. I’d like a regrade if possible, and ideally full credit restored for that question.
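For reference, the RRF computation itself is short. Here is a minimal sketch of Reciprocal Rank Fusion; the function name, signature, and the k=60 constant are generic assumptions, not the lab’s actual API:

```python
def rrf(rankings, k=60):
    """Fuse several ranked lists of document ids (best first) by
    Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (k + rank)."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

print(rrf([["a", "b", "c"], ["c", "a", "b"]]))  # ['a', 'c', 'b']
```

Document "a" wins because it is ranked 1st and 2nd, while "c" is ranked 3rd and 1st; the 1/(k + rank) weighting rewards consistently high ranks.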