C5W2 Cell #14. Can't compile the student's code. Error: KeyError('never',)

Error when grading: Cell #14. Can’t compile the student’s code. Error: KeyError('never',)

  • Checked ex1, which uses any_word from word_to_vec_map to get the shape

Does anyone know how to handle this? Nothing I’ve tried has worked so far.

Thanks in advance!

If you are sure that everything runs correctly in the notebook, then the thing to look for is errors like referencing global variables within the body of your functions, instead of the formal parameters. In other words, if you pass one test, but fail another, it means your code is not general. It is “hard-coded” in some way.
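
For example, here is a minimal sketch of that trap (the names and data are made up for illustration, not the assignment’s actual code):

```python
import numpy as np

# Hypothetical illustration of the "global variable" trap, not assignment code.
word_to_vec_map = {"hello": np.array([1.0, 2.0])}   # notebook-level global

def get_first_vector(sentence, vec_map):
    # BUG: indexes the global word_to_vec_map instead of the parameter
    # vec_map. The notebook test passes because both names happen to point
    # at the same dict, but the grader's call with its own dictionary fails.
    return word_to_vec_map[sentence.lower().split()[0]]

print(get_first_vector("Hello there", word_to_vec_map))  # looks fine
# get_first_vector("hi there", {"hi": np.zeros(2)})      # KeyError: 'hi'
```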

Thank you again, @paulinpaloalto. I found the mistake when calculating the avg in another cell.

Hi, I’m stuck on the same error. Could you please help me? What is the “cell #14” exactly?

Is it # GRADED FUNCTION: model or # GRADED FUNCTION: sentence_to_avg?

How can I locate the problem line if all checks pass and only the assignment submission fails?

I’m sure that I didn’t hard-code anything there.

Sorry, could you please tell me which cell it is?

What is the exact error message you are getting from the compiler? If it’s the same as the OP of this thread, then I’d start by searching for “never” in your notebook. The way cells are numbered at execution time should be in order from the beginning of the notebook, but you don’t really know what else the grader is executing besides your code, so I don’t know if there is any reliable way to identify the cell from the execution number.

If you can’t find the issue, there are other ways to proceed here.

If you use the forum Search tool for the term “KeyError('never')”, you will find other threads on what causes this issue and how to fix it.

Generally it’s due to not using one of the dictionaries correctly.

Unfortunately, this didn’t help me. Of course, I spent some time trying to find the solution before asking you guys; I value your time. Usually it is enough to spend some extra time running all cells step by step, reading the instructions one more time, and so on. But here I don’t have any debug info to fix this issue. If I had the autograder’s code, I could try. I’m a programmer, not a psychic :slight_smile:

This is my autograder error

Every graded function shows that all tests pass.

Most of the comments for this type of error focus on the correct calculation of the avg variable. Can I please send you my code directly? I’m unable to find the issue on my own: I don’t have sufficient feedback from this system to debug the error, so I just have to make guesses and try them… pretty crazy.

Sorry that the grader message is so opaque in this case. Yes, we need to look at your notebook as a file as the next step in figuring this out. My guess is that there is something structurally wrong, e.g. added cells or corrupted JSON metadata or the like. The way to share the notebook is by DM and I see you already started a DM thread about this.

We will confirm the results here once we have more info.

Nine times out of ten, this issue is caused by using a global variable, or a vocabulary word that is not in the dictionary the grader uses, in the code that computes sentence_to_avg().
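
To make that concrete, here is a hedged sketch of both the crash and the guard (the toy dictionary is an assumption, and the exact averaging rule is whatever the assignment instructions specify):

```python
import numpy as np

# Toy embedding dictionary for illustration only.
word_to_vec_map = {"i": np.ones(50), "am": np.ones(50), "happy": np.ones(50)}

words = "i am never happy".lower().split()

# Unguarded lookup: crashes with KeyError: 'never' as soon as a test
# sentence contains a word that is missing from the dictionary.
# avg = sum(word_to_vec_map[w] for w in words) / len(words)

# Guarded lookup: only average over the words that are actually present.
known = [w for w in words if w in word_to_vec_map]
avg = np.zeros(50)
if known:
    avg = sum(word_to_vec_map[w] for w in known) / len(known)
```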


I suspect either that there has been some editing outside the areas where you are supposed to write code, or that one of the functions was not called according to the instructions.

If you are 100% sure there are no mistakes, get a fresh copy of the assignment, then redo it by referring to your saved copy and submit.

Let us know if the issue still persists.

Yes, maybe I changed something by mistake. How do I refresh the lab to get a fresh copy? I tried to find this in the UI…

Hello @Dmitry_Regent

Was your notebook reviewed by any of the mentors here? I can see mention of a DM conversation.

Make sure you have saved a copy of the assignment you did.

Kindly do the following to get a fresh copy of the assignment.

1. Click File, then Open, and delete all the files on this page. You will now see a 404 error on your web page.
2. Clear all of your cache and browser history before you download a fresh copy, then log out of your Coursera account.
3. Log back in.
4. Click Help in the top-right corner, then click Reboot.
5. Once the reboot is done, click Help again. This time, click Get latest version and then Update lab.
6. You now have a fresh, updated copy of the assignment.
Redo your assignment by referring to your saved copy, and make sure you write code only where instructed, between the ### START CODE HERE ### and ### END CODE HERE ### markers; do not edit anything outside of them.

Let us know if the issue still persists.

Regards
DP

There is a topic about that on the DLS FAQ Thread. It’s the first one in the catalog …

Ok, it turns out that Tom was right on the money with that theory. The bug was that the logic in the model function (UNQ_C2) does not call sentence_to_avg, but reimplements the “guts” of it in a way that omits one critical piece: the handling of the case where the sentence contains a word that is not in the embedding dictionary.
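
Here is a paraphrased sketch of that failure mode with toy data (not the student’s actual code, and the guarded helper below is only one plausible shape for it):

```python
import numpy as np

# Toy stand-ins; all names and shapes here are assumptions.
word_to_vec_map = {"i": np.ones(2), "love": np.ones(2), "you": np.ones(2)}
sentence = "i never love you"

# Reimplementing the averaging loop inside model without the guard
# reproduces the grader's crash:
try:
    avg = np.zeros(2)
    for w in sentence.lower().split():
        avg += word_to_vec_map[w]             # no membership check
except KeyError as e:
    print("grader-style failure:", repr(e))   # KeyError('never')

# Calling the already-guarded helper keeps the fix in one place:
def sentence_to_avg(sentence, word_to_vec_map):
    any_word = next(iter(word_to_vec_map))
    avg = np.zeros(word_to_vec_map[any_word].shape)
    known = [w for w in sentence.lower().split() if w in word_to_vec_map]
    for w in known:
        avg += word_to_vec_map[w]
    return avg / len(known) if known else avg

print(sentence_to_avg(sentence, word_to_vec_map))   # skips 'never'
```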

So the grader is probing model with a test case that hits that situation and “Kaboom!” The only question now is whether there is a reasonable way to either 1) upgrade the test in the notebook to catch that error in model or 2) update the grader test case to fail in a more helpful way.

I’ll think about option 1), since the course staff seem to be pretty reluctant to change grader code.


I believe I raised an issue about this previously; it might have been closed with no action taken. Maybe it needs a new unit test to simulate what the grader is doing.


If I can find some time this weekend, I’ll try to come up with an additional “in notebook” test case to catch that error. If I can do it in a way that is not too cumbersome, I’ll file a git issue with the suggested code. “Our top story tonight: the notebook tests don’t catch all possible errors.”


Actually there’s a very easy way to catch this in the notebook: they construct a small dictionary, so just add a missing word to one of the test sentences. The only downside of this approach is that you’d have to work a bit harder to generate a meaningful error message, but the code will just “throw” with exactly the same kind of error that the grader gives. E.g. add the word “missing” to one of the input test sentences and you get an exception trace that terminates in this:

KeyError: 'missing'

It’s more helpful in this instance in that you can actually see the test case. So some thinking will be required, but it’s better than the complete opacity you get if the test case itself is buried in the bowels of the grader.
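
A minimal sketch of what such an added notebook test might look like (the test function and its names are hypothetical, not the notebook’s actual test code):

```python
import numpy as np

def sentence_to_avg_missing_word_test(target, word_to_vec_map):
    # Hypothetical extra unit test: include a word ("missing") that is
    # guaranteed to be absent from the small test dictionary.
    sentence = "i am happy missing"
    try:
        avg = target(sentence, word_to_vec_map)
    except KeyError as e:
        raise AssertionError(
            "sentence_to_avg must skip words that are not in "
            f"word_to_vec_map, but it raised KeyError({e.args[0]!r})"
        ) from e
    assert isinstance(avg, np.ndarray), "Wrong output type."
    print("All tests passed!")
```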

Update: ok, I just filed a git issue with this suggestion. We’ll see what they say …
