Course 1, Week 4 Assignment 2 - 2 layer model

While executing the 2 layer model in the 2nd assignment, I end up getting the following error message

I can’t seem to figure out where the error might be. The next part of the assignment (L-layer model) runs fine using very similar functions.

To debug the problem you need to trace it along its execution path. For example, from the error you’re getting it’s clear that W1 is wrong.

So, you need to check how W1 is being calculated; it comes out of the update_parameters function. OK, what are the inputs to that function? How are they calculated? Are they inputs to the two_layer_model function?

For example, grads is calculated from the results of linear_activation_backward, so you need to check the inputs to that function too: are they calculated somewhere, or are they passed in as inputs?

If you follow the execution path you will be able to chase the error.
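To make that chain concrete, the update step that produces W1 is plain gradient descent, roughly like this (a minimal sketch, not the notebook’s exact function body):

```python
import numpy as np

def update_parameters(parameters, grads, learning_rate):
    # Gradient-descent step: each parameter moves against its gradient.
    # If grads["dW1"] is wrong, or learning_rate was hardcoded somewhere,
    # W1 will come out wrong here even if this function itself is correct.
    L = len(parameters) // 2  # number of layers
    for l in range(1, L + 1):
        parameters["W" + str(l)] -= learning_rate * grads["dW" + str(l)]
        parameters["b" + str(l)] -= learning_rate * grads["db" + str(l)]
    return parameters
```

So a wrong W1 means either the parameters coming in, the grads, or the learning_rate are wrong, which is why you have to walk backwards through the callers.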

Before doing all of the above, you could check whether by any chance you have hardcoded any value, such as the learning_rate. If you are confident you haven’t, then follow the execution flow by adding prints for what’s being calculated.
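Those prints could look something like this (a hypothetical sketch; the stand-in values below just make it runnable, use your notebook’s real `parameters` and `grads` dicts):

```python
import numpy as np

# Hypothetical stand-ins for the values your notebook computes.
np.random.seed(1)
parameters = {"W1": np.random.randn(4, 10) * 0.01}
grads = {"dW1": np.zeros((4, 10))}
learning_rate = 0.0075

# Print shapes and a few entries at each stage of the loop to see
# where the numbers first diverge from the expected output.
print("W1 shape:", parameters["W1"].shape)
print("W1[0, :3]:", parameters["W1"][0, :3])
print("dW1[0, :3]:", grads["dW1"][0, :3])
print("learning_rate:", learning_rate)  # check it isn't hardcoded
```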

Check your output against what is expected according to the public tests; for example, W1 should be:

'W1': np.array([[ 0.01624965, -0.00610741, -0.00528734, -0.01072836,  0.008664  ,
                 -0.02301103,  0.01745639, -0.00760949,  0.0031934 , -0.00248971],
                [ 0.01462848, -0.02057904, -0.00326745, -0.00383625,  0.01138176,
                 -0.01097596, -0.00171974, -0.00877601,  0.00043022,  0.00584423],
                [-0.01098272,  0.01148209,  0.00902102,  0.00500958,  0.00900571,
                 -0.00683188, -0.00123491, -0.00937164, -0.00267157,  0.00532808],
                [-0.00693465, -0.00400047, -0.00684685, -0.00844447, -0.00670397,
                 -0.00014731, -0.01113977,  0.00238846,  0.0165895 ,  0.00738212]])

Hi, I’m using np.random.seed(1), but my W1 array is different from the expected:

'W1': array([[ 0.01624345, -0.00611756, -0.00528172, -0.01072969,  0.00865408,
               -0.02301539,  0.01744812, -0.00761207,  0.00319039, -0.0024937 ],
             [ 0.01462108, -0.02060141, -0.00322417, -0.00384054,  0.01133769,
               -0.01099891, -0.00172428, -0.00877858,  0.00042214,  0.00582815],
             [-0.01100619,  0.01144724,  0.00901591,  0.00502494,  0.00900856,
               -0.00683728, -0.0012289 , -0.00935769, -0.00267888,  0.00530355],
             [-0.00691661, -0.00396754, -0.00687173, -0.00845206, -0.00671246,
               -0.00012665, -0.0111731 ,  0.00234416,  0.01659802,  0.00742044]])
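A quick way to compare your W1 against the expected array without eyeballing every entry (a sketch using just the first rows of the two arrays above; in practice paste in the full arrays):

```python
import numpy as np

# First rows of the expected vs. obtained W1, for illustration.
expected_row0 = np.array([ 0.01624965, -0.00610741, -0.00528734, -0.01072836,  0.008664  ,
                          -0.02301103,  0.01745639, -0.00760949,  0.0031934 , -0.00248971])
my_row0       = np.array([ 0.01624345, -0.00611756, -0.00528172, -0.01072969,  0.00865408,
                          -0.02301539,  0.01744812, -0.00761207,  0.00319039, -0.0024937 ])

# allclose with a tight tolerance tells you whether a mismatch is mere
# rounding noise or a genuinely different computation.
print(np.allclose(expected_row0, my_row0, atol=1e-7))
print(np.max(np.abs(expected_row0 - my_row0)))  # size of the largest gap
```

Here the gaps are around 1e-5, far above display rounding, so the two runs really did compute different numbers rather than printing the same ones differently.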

Hi Valery,
I stumbled upon a similar problem. I could solve it by using the provided initialize_parameters() function instead of filling the parameters dict myself (e.g. with np.random.randn()).

Hope that helps!

Hello Guys!

For those who are getting this error, here is my experience:

I was using the parameter-initialization function from the previous assignment (initialize_parameters_deep(layer_dims)). The thing is that you actually have to use the implementation they provide to you here (initialize_parameters(n_x, n_h, n_y)).

Now everything is working as expected! =)

Yes, that’s what they tell us to do in the instructions. The ironic thing is that the “deep” version of the initialization that they give us here is a more sophisticated algorithm and it actually generates better results in the 2 layer case. But the grader is checking that the results match what it expects, so better is not considered a good thing in that context. :laughing:
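For the curious, the difference between the two initializers is roughly this (a sketch from memory of the course formulas; check your notebook for the exact scaling factors):

```python
import numpy as np

def initialize_parameters(n_x, n_h, n_y, seed=1):
    # Two-layer version used in this exercise: small fixed scaling of 0.01.
    np.random.seed(seed)
    return {"W1": np.random.randn(n_h, n_x) * 0.01,
            "b1": np.zeros((n_h, 1)),
            "W2": np.random.randn(n_y, n_h) * 0.01,
            "b2": np.zeros((n_y, 1))}

def initialize_parameters_deep(layer_dims, seed=1):
    # L-layer version: scales each W by 1/sqrt(fan-in), which keeps the
    # variance of the activations roughly constant across layers. More
    # principled, but it produces different numbers than the grader's
    # expected output for the two-layer test.
    np.random.seed(seed)
    params = {}
    for l in range(1, len(layer_dims)):
        params["W" + str(l)] = (np.random.randn(layer_dims[l], layer_dims[l - 1])
                                / np.sqrt(layer_dims[l - 1]))
        params["b" + str(l)] = np.zeros((layer_dims[l], 1))
    return params

# Same seed, same underlying random draw, two different scales --
# so every entry of W1 differs, and the grader's exact-value checks fail.
p1 = initialize_parameters(10, 4, 1)
p2 = initialize_parameters_deep([10, 4, 1])
print(p1["W1"][0, 0], p2["W1"][0, 0])
```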