I am doing Assignment 1 (Initialization) for Week 1 of the course "Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and…".
While verifying the tests for the function initialize_parameters_he(layers_dims), I found that it gives different results when run a second time, causing the tests to fail.
What is the solution?
I tried getting the latest version from the repo, but that didn't help.
Any help is appreciated!
Maybe try "Kernel → Restart & Clear Output" before running the notebook.
The seed is set in the template code, inside the function itself, so the results should be the same every time regardless of what else is going on in the notebook. Are you just saying that you don't pass the test case there? Note how random seeds work: they start the sequence at a particular point, but you then have to perform the operations in the same order the instructions specify in order to get the answers the grader expects.
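To illustrate that point, here is a small sketch (not from the assignment itself) showing that re-seeding reproduces the same values only if the subsequent calls happen in the same order:

```python
import numpy as np

# Seeding restarts the random sequence at a fixed point, so identical
# calls in identical order reproduce identical values.
np.random.seed(3)
a = np.random.randn(2, 2)

np.random.seed(3)
b = np.random.randn(2, 2)
print(np.array_equal(a, b))  # True: same seed, same call order

# An extra draw in between shifts everything that follows.
np.random.seed(3)
_ = np.random.randn(1)       # different order of operations...
c = np.random.randn(2, 2)
print(np.array_equal(a, c))  # False: ...gives different values
```

This is why an extra or reordered random call inside the function can make the output differ from what the tests expect, even with the seed in place.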
I tried that, but unfortunately the problem is the same.
Maybe we need to look at your code and figure out what is wrong. We can’t do that in public, but there are ways to do that. Please check your DMs for a message from me about how to proceed with that.
Yes, np.random.seed(3) in the template code should generate the same sequence every time.
I copied the code from initialize_parameters_random into initialize_parameters_he, made the changes for He initialization, and it worked. I certainly need to find out why.
But I am relieved that it now works as expected and all tests pass. Otherwise I was wasting time on such a small assignment.
Thanks for looking into it and replying.
To close the loop on the public thread: the current code in the notebook works, passes the tests, and passes the grader. There was a previous, commented-out version that had a couple of mistakes in it, including using the "rand" function instead of "randn" for the random number generation.
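For anyone landing here later, a minimal sketch of He initialization consistent with what the thread describes (seed inside the function, randn rather than rand, weights scaled by sqrt(2 / n_prev)). The exact loop structure and variable names here are assumptions, not the assignment's template verbatim:

```python
import numpy as np

def initialize_parameters_he(layers_dims):
    # Seeding inside the function makes every call reproducible.
    np.random.seed(3)
    parameters = {}
    L = len(layers_dims) - 1  # number of layers (excluding input)

    for l in range(1, L + 1):
        # He initialization: randn (standard normal), NOT rand (uniform),
        # scaled by sqrt(2 / size of the previous layer).
        parameters['W' + str(l)] = (
            np.random.randn(layers_dims[l], layers_dims[l - 1])
            * np.sqrt(2.0 / layers_dims[l - 1])
        )
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))

    return parameters
```

Note that rand draws from a uniform distribution on [0, 1) while randn draws from a standard normal, so swapping them changes both the distribution and the sequence of values consumed from the seeded generator, which is exactly the kind of mistake that makes the tests fail despite the seed.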