Week 3 exercise 8

[ValidateApp | INFO] Validating ‘/home/jovyan/work/submitted/courseraLearner/W3A1/Planar_data_classification_with_one_hidden_layer.ipynb’
[ValidateApp | INFO] Executing notebook with kernel: python3
[ValidateApp | ERROR] Timeout waiting for execute reply (30s).
[ValidateApp | ERROR] Interrupting kernel
Tests failed on 1 cell(s)! These tests could be hidden. Please check your submission.

The following cell failed:


The error was:

AssertionError                            Traceback (most recent call last)
<ipython-input-20-1243cded3c7d> in <module>
----> 1 nn_model_test(nn_model)

~/work/submitted/courseraLearner/W3A1/public_tests.py in nn_model_test(target)
    292     assert output["b2"].shape == expected_output["b2"].shape, f"Wrong shape...
--> 294     assert np.allclose(output["W1"], expected_output["W1"]), "Wrong values ...
    295     assert np.allclose(output["b1"], expected_output["b1"]), "Wrong values ...
    296     assert np.allclose(output["W2"], expected_output["W2"]), "Wrong values ...

AssertionError: Wrong values for W1

This means that your nn_model function is implemented incorrectly. Since nn_model mostly just calls the other functions you have implemented so far in this assignment, it is likely that one of those functions has an incorrect implementation. Please ensure that you have passed all the test cases for the previous functions, and even if you have, make sure you haven't hard-coded any variables. If you are still unable to find the issue, feel free to DM your notebook to me as an attachment. You can find the instructions to download your notebook here.
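For context, the failing assertion uses np.allclose, which tolerates tiny floating-point noise but fails on genuinely different values. A minimal sketch (with made-up arrays, not the assignment's actual parameters) showing the distinction:

```python
import numpy as np

a = np.array([[0.01, 0.02]])
b = a + 1e-9   # tiny numerical noise: within allclose's default tolerances
c = a + 0.1    # genuinely different values

print(np.allclose(a, b))  # True
print(np.allclose(a, c))  # False
```

So "Wrong values for W1" means the parameters really diverged from the expected ones, not that there is a rounding issue.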


{Moderator’s Edit: Solution Code Removed}

I just went over your entire kernel and couldn't find any issue. I copied the code for all of the first 8 exercises from your kernel and they ran perfectly fine in my Coursera Labs. It looks like you have inadvertently changed something outside the code you were supposed to edit. Can you please try to get a new version of the assignment and run that? You can find the instructions for doing so in this thread. Let me know if this helps.

P.S. - Posting code publicly is against the community guidelines. If a mentor needs to take a look at your code, they will ask you to DM it, as I did. So, please make sure that if any solution code is involved, you only DM it to a mentor and do not post it publicly.


@anon57530071 just took a look at your code and found that in the initialize_parameters function, you have set the random seed to 2, np.random.seed(2), whereas in the original assignment, the random seed is not set when this function is defined. So, when your function is called, np.random.randn produces results as if the seed were 2, whereas in the original assignment, np.random.randn produces results as if the seed were 3.

In simple words, remove np.random.seed(2) from your definition of initialize_parameters function and then try to run the assignment. If you still face an issue, then get a new copy of the assignment, and use that for submission. Let me know if this helps.
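To illustrate the mechanism, here is a minimal reproduction sketch (the function bodies and names are simplified stand-ins, not the assignment's solution code): seeding the generator inside the function overrides whatever seed the caller set, so the random stream no longer matches the grader's.

```python
import numpy as np

def init_with_inner_seed(n_h, n_x):
    np.random.seed(2)                      # stray seed inside the function
    return np.random.randn(n_h, n_x) * 0.01

def init_without_inner_seed(n_h, n_x):
    return np.random.randn(n_h, n_x) * 0.01  # uses whatever seed the caller set

np.random.seed(3)                          # seed the grader expects to be in effect
W1_buggy = init_with_inner_seed(4, 2)

np.random.seed(3)
W1_expected = init_without_inner_seed(4, 2)

# Same shape, different values: exactly the "Wrong values for W1" failure mode.
print(W1_buggy.shape == W1_expected.shape)   # True
print(np.allclose(W1_buggy, W1_expected))    # False
```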


I get the same problem here, and all exercises before exercise 8 are correctly passed!

Notice that your cost values after 1000 and 2000 iterations are quite a bit higher than the expected values. That would suggest there is something wrong with your "update parameters" logic. But if you are sure that all your previous functions passed their tests, then the bug (whatever it is) must be in nn_model. For example, you may be passing the wrong arguments to one of the previous functions when you call it from nn_model.

This seems to be a duplicate of this post.
Actually, your case is totally different from this one. Did you read the last update by Elemento?
It was a problem with the seed for a random number generator. You can check your code in initialize_parameters, but I believe that's not your case.

I am having the same problem with initialize_parameters. I would like to ask if anyone solved this problem.

Thanks in advance

What do you mean by “the same problem”? Please show us the actual error output that you are getting.