C3 and C4 Lab Grading Issues

While each part of my lab work in 4 returns results that match the expected output and passes the unit tests, the automated grading system on Coursera states that some of the blocks are wrong. What is strange is that many of these blocks just require calling a function with the appropriate parameters. Stranger still, the blocks with more intricate code passed. I've provided an example that the grading system says I failed, but the unit test says I passed.

I'm not an NLP mentor, so I probably can't directly help here. But it is very common for the unit tests to pass and then the grader to fail. The most common way to provoke this is referencing a global variable that happens to get passed as a parameter within the scope of your function, instead of referencing the corresponding formal parameter. It works in the notebook, but then the grader passes a different variable and "Kaboom!"

In other words, the thing to look for is any way in which you have written code that is not general.
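To make the failure mode concrete, here is a minimal sketch (hypothetical function names, not the actual assignment code) of how a function can pass in the notebook yet fail under the grader by reading a notebook global instead of its formal parameter:

```python
# Notebook global (hypothetical stand-in for the assignment's embeddings dict).
en_embeddings_subset = {"hello": [1.0, 0.0]}

def get_vector_buggy(word, embeddings):
    # BUG: ignores the `embeddings` parameter and reads the notebook global.
    # This "works" in the notebook because the global and the argument are
    # the same object there, but the grader passes a different dict.
    return en_embeddings_subset[word]

def get_vector_fixed(word, embeddings):
    # Correct: uses only the formal parameter, so it works on any input.
    return embeddings[word]

# The grader calls the function with its OWN data, not the notebook's:
grader_embeddings = {"hello": [9.0, 9.0]}
print(get_vector_fixed("hello", grader_embeddings))  # grader's data, as expected
print(get_vector_buggy("hello", grader_embeddings))  # silently returns notebook data
```

The buggy version returns the notebook's vector no matter what the grader passes in, which is exactly why its results look "correct" locally but wrong during submission.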


I see what you mean. I was using en_embeddings_subset, which is a global variable.


Great! Glad to hear that clue was useful. Other general things to look out for are hard-coding sizes or shapes of the input objects. Look for any assumptions you are making that don’t generalize …
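As an illustration of the hard-coding pitfall, here is a small hypothetical sketch: a function that bakes in the number of rows seen in the notebook raises an error (or returns garbage) as soon as the grader passes an input of a different shape, while the general version works on any input:

```python
import numpy as np

def row_means_buggy(X):
    # BUG: hard-codes the row count seen in the notebook (say, 300 rows),
    # so it breaks on any input with a different shape.
    return np.array([X[i].mean() for i in range(300)])

def row_means_fixed(X):
    # Correct: derives everything from the input itself.
    return X.mean(axis=1)

X = np.arange(15.0).reshape(5, 3)   # grader-style input: only 5 rows
print(row_means_fixed(X))           # works for any number of rows
# row_means_buggy(X) would raise IndexError: index 5 is out of bounds
```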


Thank you very much, now the code worked and passed submission. THANK YOU!!!


+1 @paulinpaloalto. And the reason global variables break the grader is that it uses the Python exec() function to dynamically load and execute the submitted graded code block. Only the block and the parameters passed into it are in context inside the grader…nothing else from the original notebook from which the submission occurred.

https://docs.python.org/3/library/functions.html?highlight=exec#exec
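A minimal sketch of that mechanism (the variable names are hypothetical, not the grader's actual code): when submitted code is run with `exec()` in a fresh namespace, notebook globals simply do not exist there, so any reference to them raises a `NameError`:

```python
# Simulate a grader: run only the submitted block in an isolated namespace.
submitted_code = """
def lookup(word, embeddings):
    return en_embeddings_subset[word]   # BUG: relies on a notebook global
"""

namespace = {}                       # fresh namespace: no notebook globals here
exec(submitted_code, namespace)      # define the submitted function in isolation

try:
    namespace["lookup"]("hello", {"hello": [1.0]})
except NameError as e:
    # The notebook global en_embeddings_subset is not defined in this namespace.
    print("grader-style failure:", e)
```

This is why code that runs fine in the notebook can still blow up in the grader: the notebook's global state never makes the trip.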
