C1_W4_Assignment: Grader Error

C1_W4_Assignment: Grader Error: Grader feedback not found

Visit the Discussion forum to see if your peers are experiencing or have found resolutions for similar errors. If the error isn’t resolved in 24 hours, please reach out to Coursera through our Help Center.

Please help me solve this.

I assume that you have completed all the graded portions of the notebook and that all of them pass the internal tests in the notebook.

If that is true, then one easy experiment to try is:

1. Kernel -> Restart and Clear Output
2. Save
3. Submit

The point is that the grader does not need to see your generated output in the notebook and sometimes large output can cause issues for the grader. Please try that and let us know if it helps.
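If you are curious what that sequence actually does to the saved file, here is a minimal sketch of the same effect, assuming the nbformat package is available and that the assignment file is named C1_W4_Assignment.ipynb (the filename is an assumption; adjust it to your notebook):

```python
# Minimal sketch of what "Restart and Clear Output" + Save leaves on disk:
# a notebook whose code cells carry no stored output.
import nbformat

path = "C1_W4_Assignment.ipynb"  # assumed filename
nb = nbformat.read(path, as_version=4)
for cell in nb.cells:
    if cell.cell_type == "code":
        cell.outputs = []            # drop any generated output
        cell.execution_count = None  # reset the execution counter
nbformat.write(nb, path)
```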

If that doesn’t fix things, then there must be something structurally wrong: either your code is not general and you are failing one of the grader’s test cases, or you’ve accidentally damaged some portion of the notebook that the grader depends on.

Maybe the simpler method would be just to look at your notebook. We can’t do that on a public thread, but there are private ways to do that. Please check your messages for a DM from me (you can recognize DMs by the little envelope icon).

1 Like

To close the loop on the public thread: it turns out that quite a few pretty radical changes had been made to the code outside the “YOUR CODE HERE” sections, including changing the function signatures of several of the graded functions. That does not end well. The fix is to take a step back, start with a clean notebook, and copy/paste over just the required “YOUR CODE HERE” segments.

1 Like

Hello,

I’ve also encountered a significant issue with the grading process in the notebooks.

Despite my achieving the expected outputs for all exercises, two specific exercises consistently score 0/10, with the grader stating that no notebook is present in the expected directory.

Here’s what I experienced:

  • I managed to achieve a 90% score by re-executing and submitting the faulty exercises again.
  • I also tried restarting the kernel, clearing all outputs, and re-executing all cells, but this only resulted in an 80% score!

This is disappointing, especially since this is not a new course, and such issues undermine the learning experience. I’m familiar with the DeepLearning.AI grading process, so I was able to work around the issue, but I worry that others may not have the same level of patience or familiarity.

The graded notebook system needs a proper update and thorough testing to ensure that such errors don’t occur.

Here is my best result, despite passing all the tests correctly and producing the expected outputs:


Test Results

Score: 100 of 110 (Assignment passed)

| Test | Score | Grader output |
|---|---|---|
| Test_get_matrices | 10/10 | |
| test_compute_loss | 10/10 | |
| test_compute_gradient | 0/10 | No notebook was found in the submission directory. |
| test_align_embeddings | 10/10 | |
| test_nearest_neighbor | 10/10 | |
| unittest_test_vocabulary | 10/10 | |
| test_get_document_embedding | 10/10 | |
| test_get_document_vecs | 10/10 | |
| test_hash_value_of_vector | 10/10 | |
| test_make_hash_table | 10/10 | |
| test_approximate_knn | 10/10 | |

I hope this issue can be addressed soon to improve the reliability of the grading process.

Thank you!

Can you share a screenshot of the submission grader output that shows the statement above?

Also make sure the compute_gradient graded cell was not edited outside of the ### START CODE HERE ### and ### END CODE HERE ### markers. Chances are you might have removed the header that marks the graded function compute_gradient, so the autograder couldn’t detect it.
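If you want to sanity-check this from code rather than by eye, here is a rough sketch, assuming nbformat is installed; the filename and the exact marker text are assumptions and may differ slightly in the actual notebook:

```python
# Rough check that the compute_gradient cell still contains its
# START/END markers. Adjust the filename and marker strings as needed.
import nbformat

nb = nbformat.read("C1_W4_Assignment.ipynb", as_version=4)  # assumed filename
for i, cell in enumerate(nb.cells):
    if cell.cell_type == "code" and "compute_gradient" in cell.source:
        print(
            f"cell {i}: "
            f"start marker present={'### START CODE HERE ###' in cell.source}, "
            f"end marker present={'### END CODE HERE ###' in cell.source}"
        )
```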

We probably need to see your notebook to fully understand what is wrong here, but Deepti’s suggestion is one of the likely possibilities. We can’t share notebooks and code on a public thread, but there are private ways to do that. Please check your DMs for a message from me about how to proceed.

Note that in general just passing the tests in the notebook is not a guarantee that your code is completely correct. Just passing one test case can frequently be done in ways that are not general, e.g. by hard-coding things to match the particular test case. The grader usually uses different test cases, so that can still fail. But the particular error you are getting from the grader is not one I’ve ever seen before and it does sound like something is structurally wrong with your notebook.
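To make the “not general” failure mode concrete, here is a deliberately simple, hypothetical illustration; none of these functions come from the assignment:

```python
# Hypothetical example of code that passes the notebook's single test case
# but fails the grader's different test cases.

def mean_hardcoded(x):
    # Only correct when the test vector happens to have 10 elements.
    return sum(x) / 10

def mean_general(x):
    # Correct for any input the grader might pass in.
    return sum(x) / len(x)
```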

Here is the screenshot.

Now it is Exercise 2, although I changed nothing except re-launching the grading!

I didn’t remove any headers and always kept the original structure of the code and the comments.

This seems random, so I think there is a bug. I can provide my saved file for verification if needed.

Christophe sent me the current notebook and I am able to submit and get 110/110 either with the full output of the notebook present or after doing:

1. Kernel -> Restart and Clear Output
2. Save
3. Submit

One other thing to ask: are you sure that you are not working from a renamed copy of the notebook? If you are, then clicking “Submit” does not grade the renamed copy. It grades the “official” copy, meaning the one opened by the “Launch Lab” link.

1 Like

Thank you @paulinpaloalto for your support and investigation into this issue. I can confirm that I am finally getting the proper score.

However, I still find it absolutely baffling that the solution was to not run all the cells after Kernel -> Restart and Clear Output and Save before submitting. This process feels entirely counterintuitive, as running all cells to ensure correctness should be the expected workflow before submission.

To clarify, I was not working from a renamed copy of the notebook. I always use the notebook launched directly via the “Launch Lab” link.

I hope the grading process can be clarified and made more consistent in the future, as this was an unnecessarily frustrating experience.

Christophe

The point of the “Clear Output” is that the grader does not need to see the output: it only needs to call your functions. Sending a larger volume of output to the grader than necessary slows things down and is a waste of resources. And sometimes there is syntax in the output that seems to confuse the grader. You would think there would be defense mechanisms against that, but the graders are a “black box” to me, meaning that I have no idea how they are implemented. I suspect that may be true for the course staff as well: the grading platform is simply provided by Coursera. Part of the difficulty is that there are two completely separate layers of “providers” involved:

Coursera is the overall platform. They provide developer tools for creating courses and then the actual runtime platform to host the courses, but they do not create any of the actual courses.

The various Course Providers, which of course includes DeepLearning.AI and lots of other providers like universities around the world, have to learn how to use Coursera’s development tools to create their course materials and to use the runtime platform to present them to us the students.

When there are problems, it is frequently a challenge to figure out who is responsible. Of course we as the students have no direct way to contact Coursera or the DLAI team. We can only discuss here on the forums. The mentors are just fellow student volunteers. We don’t get paid to do this. Over the years, I have learned the names of some of the staff people at DLAI whom I can contact when there is a real problem like the one you saw. But there is typically no guarantee that they will be able to sort out whether the problem is in the way they have written the scripts for the grader or in the grader platform itself.

1 Like

Thank you @paulinpaloalto for the detailed explanation, and for taking the time to clarify how the grading process works.

This was the first time I’ve encountered such an issue, despite completing several DeepLearning.AI specializations and courses. The trick you shared (Kernel -> Restart and Clear Output before submitting) will definitely serve me well in the future for avoiding similar problems.

I understand now that the grader’s behavior can be unpredictable due to the layers of complexity between Coursera and the course providers like DeepLearning.AI. It’s unfortunate that there isn’t a more transparent way to address such issues directly, but I really appreciate your efforts to investigate and resolve the problem. Your explanation sheds light on the challenges behind the scenes.

Thanks again for your support and insights!

Christophe

1 Like