Hi Moderator,
For the get_country and get_accuracy exercises, I am getting the expected outputs as mentioned.
But during assignment grading, I am getting the below error.
“There was a problem grading your submission. Check stderr for more details.”
Kindly help.
Hi @Arjun_Palusa
99% of the time, when you pass the unit tests (and all your outputs match the expected outputs) but grading of the assignment fails, it is for one of these reasons:
Important Note on Submission to the AutoGrader
Before submitting your assignment to the AutoGrader, please make sure you are not doing the following:
- You have not added any extra `print` statement(s) in the assignment.
- You have not added any extra code cell(s) in the assignment.
- You have not changed any of the function parameters.
- You are not using any global variables inside your graded exercises. Unless specifically instructed to do so, please refrain from it and use the local variables instead.
- You are not changing the assignment code where it is not required, like creating extra variables.
80% of the time, the culprit is this one:
- You are not using any global variables inside your graded exercises. Unless specifically instructed to do so, please refrain from it and use the local variables instead.
For example, if you have a local variable named `embeddings`, then do not use a global variable named `word_embeddings`. (Here is a video explaining what is meant by global vs. local variables, and a playground to experiment.)
So please make sure you do not do that.
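To make the pitfall concrete, here is a minimal sketch. The function names and the dictionary contents are hypothetical; the point is that a graded function which silently reads a notebook-level global (like `word_embeddings`) instead of its own parameter will break in the grader's environment, where that global may not exist or may differ:

```python
# Exists only in YOUR notebook session; the AutoGrader may not define it.
word_embeddings = {"Paris": [0.1, 0.2]}

def get_vector_buggy(city, embeddings):
    # BUG: ignores the local parameter and reads the notebook-level global.
    return word_embeddings[city]

def get_vector_correct(city, embeddings):
    # Correct: uses only the local parameter passed in.
    return embeddings[city]

# Passing a different dict exposes the bug: the buggy version still
# returns the global's value instead of the argument's.
print(get_vector_buggy("Paris", {"Paris": [9.9]}))    # -> [0.1, 0.2]
print(get_vector_correct("Paris", {"Paris": [9.9]}))  # -> [9.9]
```

Both versions can pass the notebook's local tests (because the global happens to exist there), which is exactly why this failure mode only surfaces at grading time.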
Cheers
Thank you very much.
Since the variable was already in the code, I did not give it a thought.
I am having the same issue. Were you able to fix yours?
Yes, we need to change the variable names as mentioned in the answer above.
I am still unable to get the grade.
I am facing the same issue.
Also, I did not run the code on my own machine, as described in the assignment note quoted below.
Is that a mandatory step to pass the assignment? I thought it was not…
To Run This Code On Your Own Machine:
Note that because the original google news word embedding dataset is about 3.64 gigabytes, the workspace is not able to handle the full file set. So we’ve downloaded the full dataset, extracted a sample of the words that we’re going to analyze in this assignment, and saved it in a pickle file called word_embeddings_capitals.p
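(Running locally is optional; the graded notebook loads the sampled file directly.) For reference, loading a pickle file of that kind looks roughly like this. The dict structure (word → vector) is an assumption, and a tiny stand-in file is written first so the snippet is self-contained; in the assignment you would only perform the `pickle.load` on the provided `word_embeddings_capitals.p`:

```python
import os
import pickle
import tempfile

# Hypothetical stand-in for the sampled embeddings (assumed dict: word -> vector).
sample = {"Paris": [0.1, 0.2, 0.3], "France": [0.4, 0.5, 0.6]}

# Write the stand-in to a temp file so this sketch runs anywhere.
path = os.path.join(tempfile.gettempdir(), "word_embeddings_capitals.p")
with open(path, "wb") as f:
    pickle.dump(sample, f)

# This is the part the assignment actually does: unpickle the saved subset.
with open(path, "rb") as f:
    word_embeddings = pickle.load(f)

print(len(word_embeddings))  # -> 2
```

Because only the sampled words are stored, the file stays small enough for the workspace even though the original Google News dataset is ~3.64 GB.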