I finished the assignment (everything works and I get “All tests passed!” etc.), but when I submit it, the grader gives me 0 and its output is this:
[ValidateApp | INFO] Validating ‘/home/jovyan/work/submitted/courseraLearner/W3A1/Planar_data_classification_with_one_hidden_layer.ipynb’
[ValidateApp | INFO] Executing notebook with kernel: python3
[ValidateApp | ERROR] Timeout waiting for execute reply (30s).
[ValidateApp | ERROR] Interrupting kernel
Success! Your notebook passes all the tests.

I tried to submit several times in case this is a random issue (I also tried refreshing the page before submitting, or restarting the kernel and re-running everything before submission, etc.), but this keeps happening.
What should I do?


If you added code that prints a lot of debugging data into the notebook, please remove those statements. This can make the grader unhappy.

There aren’t any print commands in my code. In general, there is no added code other than what was required (i.e. my code in the notebook is always exactly as many lines as the instructions said it should take).

Given that your code works in the notebook and that the grader also says “All tests passed”, but then gives 0 points, the theory would be that something is corrupted about your notebook. E.g. did you work on it with any tools other than the course website?

In any case, we probably need to look at your notebook in order to make progress. Please check your DMs for a message from me about how to proceed with that.


If you have the patience to try another experiment, one thing to try would be to get a fresh copy of the notebook by using this thread. Then carefully “copy/paste” over just your code from the “YOUR CODE HERE” sections and try submitting again.

I resolved it now. It wasn’t anything in my code or anything corrupted in the notebook. The issue was that the optional exercise at the bottom of the notebook (“6 - Tuning hidden layer size (optional/ungraded exercise)”) was taking a while to run, which failed the grader for some reason (it does not involve any code I added; the notebook just contains a cell that is optional to run).
I removed 20 and 50 from the hidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50] and submitted again, and it resolved the issue.

Here’s the relevant cell in case you want to fix this problem: it originally had 20 and 50 in hidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50].
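For context, that optional cell trains one model per entry in hidden_layer_sizes, so the larger sizes dominate the runtime. Here is a minimal, hypothetical sketch of that kind of sweep, using a toy numpy network as a stand-in for the assignment's nn_model (which isn't reproduced here), with per-size timing:

```python
import time
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D binary classification data standing in for the planar dataset.
X = rng.standard_normal((2, 400))
Y = (X[0] * X[1] > 0).astype(float).reshape(1, -1)

def train_one_hidden_layer(X, Y, n_h, num_iterations=500, lr=1.0):
    """Train a tiny tanh/sigmoid one-hidden-layer net; return training accuracy."""
    n_x, m = X.shape
    W1 = rng.standard_normal((n_h, n_x)) * 0.01
    b1 = np.zeros((n_h, 1))
    W2 = rng.standard_normal((1, n_h)) * 0.01
    b2 = np.zeros((1, 1))
    for _ in range(num_iterations):
        # Forward pass
        A1 = np.tanh(W1 @ X + b1)
        A2 = 1 / (1 + np.exp(-(W2 @ A1 + b2)))
        # Backward pass (cross-entropy loss gradients)
        dZ2 = A2 - Y
        dW2 = dZ2 @ A1.T / m
        db2 = dZ2.mean(axis=1, keepdims=True)
        dZ1 = (W2.T @ dZ2) * (1 - A1 ** 2)
        dW1 = dZ1 @ X.T / m
        db1 = dZ1.mean(axis=1, keepdims=True)
        # Gradient-descent update
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    preds = (A2 > 0.5)
    return float((preds == Y).mean())

# Each extra hidden unit adds work to every iteration, so 20 and 50
# take noticeably longer than 1 through 5.
for n_h in [1, 2, 3, 4, 5, 20, 50]:
    t0 = time.perf_counter()
    acc = train_one_hidden_layer(X, Y, n_h)
    print(f"n_h={n_h:2d}  accuracy={acc:.2f}  time={time.perf_counter() - t0:.3f}s")
```

The point is only to show where the time goes; the real notebook's training loop runs far more iterations per size, which is why dropping 20 and 50 brings the cell back under the grader's timeout.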


I don’t think that was the cause of the error. There had to be something else, either in your code or in the grader; the 20 and 50 wouldn’t cause any error on their own.

Removing 20 and 50 was definitely the only difference between the submissions where the grader failed (several times) and the one where it gave me 100. No other differences (same code, same notebook).
The grader output when it failed (see my first message) clearly says that there was a timeout and that the notebook passed all the tests. Removing the two longest iterations prevented the timeout.
To be clear, I am not claiming that the fix is to remove the 20 and 50 as I did; that was just a way to bypass the problem and get graded. The thing to fix is probably the grader: it should not fail you due to a timeout in an ungraded cell when all the tests pass. Either way, the cause is the timeout from the long run in the “optional exercise” cell.
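One workaround along the same lines (my own suggestion, not an official course fix, and not something to apply without staff approval) would be to gate long-running optional cells behind a flag, so that an automated grader executing every cell skips the expensive sweep:

```python
import time

# Hypothetical guard pattern: set to True only when exploring interactively,
# so an automated grader that runs every cell skips the slow sweep.
RUN_OPTIONAL = False

def expensive_sweep():
    """Placeholder for the hidden-layer-size sweep from the optional section."""
    time.sleep(0.01)  # stands in for minutes of actual training
    return "swept"

if RUN_OPTIONAL:
    print(expensive_sweep())
else:
    print("optional sweep skipped")
```

With the flag left False, the cell executes in milliseconds under the grader, while a student can flip it to True to run the full experiment.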

Hi, Tal.

My first reaction to your problem was exactly the same as Saif’s, but then I did some experimentation and was able to reproduce your results. The reason Saif and I doubted this initially is that it is a completely new problem: the Week 3 assignment has looked this way for literally years, and it was never a problem until yesterday. So they must have changed something about the configuration of the grader, e.g. the CPU limits it runs under.

But the really interesting thing is that I added some other cells to make the notebook run longer and I can’t reproduce the effect that way. The other odd thing is that the optional section at the end is not graded, so there is no reason for the grader to run that code.

Of course the real high-level problem here is that the grader is a black box. Even the people at DLAI whom I have asked about how it works seem to have no clue; it’s a Coursera package that they are using. I had assumed that the grader only calls the graded functions (although of course the call graph may include other functions), so there’s no way any of that optional code should get called.

I will raise the alarm with the course staff and other mentors once I try a few more experiments to try to get more of a handle on what the scope of the problem is. That’s the other surprising thing: why aren’t we seeing more alarm on the forums about this? It must be that not that many students are taking DLS these days and the problem must just have happened yesterday for the first time.



To close the loop here: the Course Staff have implemented your fix and removed the 50 value from the test cases in the optional section. Even before they did that, I had not seen any other instances of this issue posted on the forum. Thanks for your careful analysis here!

We got another one in week 4 of this course here.


I’m not able to log in via this link.


Which link are you talking about?

Got it … let me try … thank you

Please do not try anything yourself, or you may mess things up. This is a separate assignment, and it is not recommended to change anything on your own. The course staff will do that for you (or inform you). For now, please wait.