Hi,
I have been working on my assignment for quite a while, but I am running into an assertion problem in Exercise 8. I have tried and checked everything but cannot find the cause of the error (some sort of AssertionError).
The AssertionError shows that the shape of A2 (output of the forward propagation function) is not equal to (1, X.shape[1]).
Have you hard-coded any shape, especially in Exercise 2 - layer_sizes? If not, and you passed all the previous tests, then try restarting the kernel: Kernel > Restart, or Restart and Clear Output. Wait for the kernel to restart, then rerun your code and see if the error persists.
I have restarted the kernel and tried again; it doesn’t work.
Then I did Restart and Clear Output, passed all the previous tests again, but it still doesn’t work.
I am getting the same error.
Please check that the arguments passed to initialize_parameters() are correct and in the order defined in the function, as this function returns the W and b values used in the calculation of Z in the forward_propagation() function.
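For reference, here is a minimal sketch of the shapes initialize_parameters() is expected to produce, assuming the assignment's usual (n_x, n_h, n_y) convention (the exact initialization scale in your notebook may differ):

```python
import numpy as np

def initialize_parameters(n_x, n_h, n_y):
    # Sketch of the expected shapes, not the notebook's exact code:
    W1 = np.random.randn(n_h, n_x) * 0.01  # (n_h, n_x)
    b1 = np.zeros((n_h, 1))                # (n_h, 1)
    W2 = np.random.randn(n_y, n_h) * 0.01  # (n_y, n_h)
    b2 = np.zeros((n_y, 1))                # (n_y, 1)
    return {"W1": W1, "b1": b1, "W2": W2, "b2": b2}
```

With n_y = 1, W2 has a single row, which is what forces A2 to come out with shape (1, m).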
Yes, Kin has made the key point I was also going to chime in with: just because the error is thrown in the forward_propagation function, that does not mean that is where the mistake is. A perfectly correct function can still throw errors if you pass it bad parameters. So you have to track backwards to figure out where the mistake really is. It must be somewhere in your nn_model logic.
But to start the debugging, when you have a shape mismatch, the first question to answer is “ok, what shape is it?” Put a print statement in forward_propagation right before the line that “throws”:
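Something like this (a minimal sketch assuming the notebook's usual names; place the prints just above the assert in your own forward_propagation):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def forward_propagation(X, parameters):
    W1, b1 = parameters["W1"], parameters["b1"]
    W2, b2 = parameters["W2"], parameters["b2"]

    Z1 = W1 @ X + b1
    A1 = np.tanh(Z1)
    Z2 = W2 @ A1 + b2
    A2 = sigmoid(Z2)

    # Debugging prints, right before the line that throws:
    print(f"A2 = {A2}")
    print(f"A2.shape = {A2.shape}")
    print(f"X.shape = {X.shape}")
    assert A2.shape == (1, X.shape[1])

    cache = {"Z1": Z1, "A1": A1, "Z2": Z2, "A2": A2}
    return A2, cache
```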
I have used the print statements and got the following results:
```
A2 = [[0.21292656 0.21274673 0.21295976]]
A2.shape = (1, 3)
X.shape = (2, 3)
```
All the tests passed as well. I think the initial parameters are fine, because if they had a problem I would not be able to get the value of A2 correctly.
I have also used the print(f"X.shape = {X.shape}") statement in Exercise 8 and got the following:
```
X.shape = (2, 3)
A2.shape = (4, 3)
X.shape = (2, 3)
```
Now I think the problem is with A2, which is why I am getting the assertion error. But I do not know how it got that way. I have also cross-checked everything else multiple times, but still don’t know…
The point is that the bug is not in forward_propagation, right? The bug is in nn_model somewhere. The test case you show is the one for forward_propagation. Now what do you see when you run the nn_model_test cell? Is that where it shows the A2 shape as (4, 3)? That is clearly wrong, since A2 should have only one row (one output neuron).
Are you sure that your update_parameters function passes its test cases? Maybe that is causing the shapes of W2 or b2 to be wrong, and that causes the issue. One interesting thing to do would be to leave the A2 shape print in forward_propagation and add a print to nn_model that shows the iteration count at the beginning of the “for” loop, as in the sketch below. Does the assertion failure happen on iteration 0 or iteration 1? That would be an interesting clue.
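Here is a hedged sketch of that iteration print, assuming the loop and helper names the notebook uses (these helpers are defined in the earlier exercises, so this runs in that context):

```python
# Inside nn_model's training loop:
for i in range(num_iterations):
    print(f"iteration {i}")  # temporary debugging print; remove when done
    A2, cache = forward_propagation(X, parameters)
    cost = compute_cost(A2, Y)
    grads = backward_propagation(parameters, cache, X, Y)
    parameters = update_parameters(parameters, grads)
```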
I am sure that the bug is not in forward_propagation, nor in update_parameters. I passed all the tests for those earlier.
But I do not understand what you meant by “add a print to nn_model” while I am still defining nn_model.
In nn_model(), if initialize_parameters() is given its arguments in the wrong order, then the sizes returned will cause a problem when the parameters variable is passed to forward_propagation(). Please check that the arguments passed to initialize_parameters() are in the order (n_x, n_h, n_y).
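To make that concrete, here is a hedged sketch of the call in nn_model (how the sizes are retrieved may differ in your notebook). Note that swapping n_h and n_y would give W2 four rows with the test's sizes, which matches the A2.shape = (4, 3) printed above:

```python
n_x, n_h, n_y = layer_sizes(X, Y)  # e.g. n_x = 2, n_h = 4, n_y = 1

# Wrong order: inside the function, n_h becomes 1 and n_y becomes 4,
# so W2 ends up (4, 1) and A2 comes out (4, m) instead of (1, m):
# parameters = initialize_parameters(n_x, n_y, n_h)

# Correct order, matching the function's signature:
parameters = initialize_parameters(n_x, n_h, n_y)
```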
@Kic Thanks for the tip. It was helpful and it worked. But now I am getting very long output. I have restarted the kernel as well, but it’s okay, I am having no issue with that.
Thanks for responding.
Once you get past a given problem, it’s a good idea to remove or comment out the print statements you added for debugging. Having print statements in an “inner” loop can generate a lot of output when you run the training for thousands of iterations and that may cause problems with the grader if the memory image of the notebook gets too large. You can just click “Kernel → Restart and Clear Output” and rerun things after removing the print statements to release all the extra memory.