Dear DeepLearning.AI team,
My lab submission for lab fvrkqlciehqe is failing (see attached screenshot) even though all unit tests pass and the output matches the expected output. Could you tell me whether this is a technical issue or I have actually missed something?
Hello @Florence_Pfammatter
Such grader failures can happen if you are using an obsolete copy of the assignment, have added extra cells or deleted cells, have edited parts of the code you are not supposed to change, or if your code is incorrect and does not match what the autograder expects.
Hi @Florence_Pfammatter
Welcome to the community!
A lot of learners stumble upon this same problem (this thread is one of many similar cases).
I would advise searching the forum whenever you're stuck on a problem (for example, type the problematic function name into the search box (magnifying glass, top right corner) and look through the results). In most cases, you will find hints to get you going.
Cheers
Since this is your first post on the forums, it's worth taking the point that Arvydas has given you here and stating it as a more general principle:
Just because you pass the test cases in the notebook does not guarantee that your code is correct. There may be ways you have written the code that are not "general", meaning they work for the one set of test cases in the notebook but fail when the grader uses a different test case. That looks like what happened here: you are probably referencing the global word_embeddings from the local scope of your get_country function. That is a bug because word_embeddings is not defined in the scope of get_country; it was passed in as the embeddings argument. The grader has a test case that passes a different dictionary to get_country, so your code fails that test.
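To make the failure mode concrete, here is a minimal sketch of that bug, using a simplified get_country signature (the real assignment's arguments differ):

```python
word_embeddings = {"Paris": [0.1, 0.2]}   # notebook-global dictionary

def get_country_buggy(city, embeddings):
    # BUG: reads the notebook-global word_embeddings instead of the
    # embeddings parameter, so the notebook's own tests still pass.
    return word_embeddings[city]

def get_country_fixed(city, embeddings):
    # Correct: only the argument that was passed in is used.
    return embeddings[city]

# The grader passes a *different* dictionary:
grader_embeddings = {"Paris": [9.9, 9.9]}
print(get_country_buggy("Paris", grader_embeddings))  # [0.1, 0.2] -- wrong
print(get_country_fixed("Paris", grader_embeddings))  # [9.9, 9.9] -- right
```

In the notebook both versions return the same answers, because there the global dictionary and the argument happen to be the same object; only the grader's different dictionary exposes the bug.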
In the courses here, it’s always a mistake to reference global variables from within your functions.
Other ways for code to be non-general would be “hard-coding” assumptions about the dimensions of the inputs.
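As a hypothetical illustration of that kind of hard-coding (not code from the actual assignment), compare a function that assumes exactly three columns with one that reads the dimensions from the input:

```python
import numpy as np

def column_means_hardcoded(X):
    # Hard-coded: assumes X has exactly 3 columns.
    return np.array([X[:, 0].mean(), X[:, 1].mean(), X[:, 2].mean()])

def column_means_general(X):
    # General: works for any number of columns.
    return X.mean(axis=0)

X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
print(column_means_general(X))   # [2.5 3.5 4.5]

Y = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(column_means_general(Y))   # [2. 3.]
# column_means_hardcoded(Y) raises IndexError: the 3-column assumption fails
```

Both versions agree on the notebook-shaped input, so the hard-coded one passes the local tests; it only breaks when the grader feeds in a different shape.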