I have run the notebook and passed all tests except for exercise 5, where I got 4 passed and 2 failed as shown in the attached screenshot. However, after I submitted for grading, the result showed 0 marks for exercises 3 and 4, which is different from what I got in the notebook, where I passed everything. Is there a technical issue with the grading software? I tried putting `#grade-up-to-here` at different points in my code, but it did not change the autograding result, and I still got no marks for my passing exercises 3 and 4. In any case, could someone help me with exercise 5? Perhaps I must pass exercise 5 before the autograder will give marks for exercises 3 and 4.
There is no problem with the grader. This most likely means there are bugs in both your section 3 and section 4 code. Since exercise 3 passes the local tests, your bug there is probably some form of "hard-coding": making assumptions that happen to match the test case in the notebook. In the case of exercise 4, you even fail the tests in the notebook, so the bug there should be more obvious.
@paulinpaloalto Received with thanks. I passed sections 3 and 4 locally, and I believe I have not hard-coded anything in either of them. All tests pass for sections 3 and 4, as shown in the attached screenshots. However, I still got 0/10 marks for each. If there is no bug in the grader, then please help me understand how I can improve my code for sections 3 and 4, given these results from the system.
Passing the tests in the notebook does not prove your code is perfect. The grader uses different tests.
The most common bug is directly referencing the global variable `word_embeddings` from the local scope of your `get_country` function. That works in the notebook, because that global happens to be the value passed as the `embeddings` argument to the function there. But it is a bug: what if the grader passes a different value?
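To illustrate the pattern in miniature (the names, signature, and data here are simplified illustrations, not the actual assignment code):

```python
# Global data defined by the notebook's setup cells.
word_embeddings = {"Paris": [1.0, 0.0], "France": [0.9, 0.1]}

def lookup_buggy(city, embeddings):
    # BUG: ignores the `embeddings` parameter and reads the global instead.
    # Passes in the notebook only because the notebook happens to pass
    # `word_embeddings` as the argument.
    return word_embeddings[city]

def lookup_fixed(city, embeddings):
    # Correct: uses only the argument, so it works with whatever
    # data the grader supplies.
    return embeddings[city]

# The grader calls your function with its own embeddings:
grader_embeddings = {"Paris": [0.0, 1.0]}
print(lookup_buggy("Paris", grader_embeddings))  # [1.0, 0.0] — stale global data
print(lookup_fixed("Paris", grader_embeddings))  # [0.0, 1.0] — the grader's data
```

The general rule: inside a graded function, use only the parameters and local variables; never reach out to globals defined elsewhere in the notebook.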
Ah, you are right. I revised my code to use the `embeddings` parameter instead of the global `word_embeddings`, and now I pass.