Course 1 Week 3 Forward Propagation error

Hi,
I’m stuck on this exercise. I’ve been checking, debugging, and reviewing the theory, and I don’t see where the error is. I’m getting this message:

A2 = [[0.5002307 0.49985831 0.50023963]]
Error: Wrong output for variable 0.
Error: Wrong output for variable Z1.
Error: Wrong output for variable A1.
Error: Wrong output for variable Z2.
Error: Wrong output for variable A2.
2 Tests passed
1 Tests failed

But to me the formulas I use are correct:

{moderator edit - solution code removed}

I’ve been debugging a little more to isolate the problem: I’m getting
Z1 = [[-0.00616586 0.00206261 0.0034962 ]
[-0.05229879 0.02726335 -0.02646868]
[-0.0200999 0.00368691 0.02884556]
[ 0.02153008 -0.01385323 0.02600471]]

and in the test code I see the expected Z1 is:
expected_Z1 = np.array([[ 1.7386459 , 1.74687437, 1.74830797],
[-0.81350569, -0.73394355, -0.78767559],
[ 0.29893918, 0.32272601, 0.34788465],
[-0.2278403 , -0.2632236 , -0.22336567]])

So my problem starts right at the beginning. How can the expected Z1 be so big if W1 is initialized with random values * 0.01 and b1 is initialized to 0?
Z1 = np.dot(W1, X) + b1 should be small, right? What am I not understanding?

Hi @Edu4rd, if the problem is there from the beginning, can you check whether the initialization of the parameters is correct? It might be that you hard-coded a value, or that the initialization through the parameters function has an error or typo in it.


Note that in the test case for forward propagation, the inputs are just provided for you as part of the test case. There is no guarantee that they came from the “standard” initialization routine. Your code looks correct, so if even Z1 is already wrong I would suspect you are not using the actual parameter values that are being passed to your function. Worth a look anyway …

Also note that the X and b1 values are not small. I added some print statements and here’s what I see:

X = [[ 1.62434536 -0.61175641 -0.52817175]
 [-1.07296862  0.86540763 -2.3015387 ]]
b1 = [[ 1.74481176]
 [-0.7612069 ]
 [ 0.3190391 ]
 [-0.24937038]]
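
Since W1 is scaled by 0.01, the np.dot(W1, X) term is tiny and Z1 ends up dominated by b1 broadcast across the columns. Here’s a quick sketch using the X and b1 printed above with an arbitrary small W1 (the seed and weights here are made up, not the test case’s):

```python
import numpy as np

np.random.seed(0)                    # arbitrary seed, just for illustration
W1 = np.random.randn(4, 2) * 0.01    # small weights, as in the exercise
X = np.array([[ 1.62434536, -0.61175641, -0.52817175],
              [-1.07296862,  0.86540763, -2.3015387 ]])
b1 = np.array([[ 1.74481176], [-0.7612069 ], [ 0.3190391 ], [-0.24937038]])

Z1 = np.dot(W1, X) + b1
print(Z1)   # every row stays close to the corresponding b1 entry
```

That’s also why each row of the expected Z1 in the test case sits so close to the corresponding b1 entry.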

Are you maybe calling the “init” routine from inside your forward_propagation code? If so, that would be a mistake and might explain what is going on …


You got it! I was calling the init function inside forward prop, and that was the error. Now it works fine. The rest was correct.
Thank you!

Eduard

Great to hear you solved it!


I am seeing an unusual error with forward propagation.
I’m using the np.dot function to multiply W1 and X, and the error seems to indicate they are the wrong data type(?)
Here’s the error message:

(screenshot of a TypeError mentioning dtype ‘<U32’)
In case you need my lab id: dtypwdeg

Hi @sdidde, the TypeError means your data are strings (U32) instead of numbers (floats). So the best first step is to check and print out W1 and X; that might help you find out where the problem is. Hope this helps you find the solution.
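
If it helps, here is a hypothetical reproduction of that situation (the arrays here are invented; only the mechanism matters):

```python
import numpy as np

W1 = np.random.randn(4, 2) * 0.01
X = np.random.randn(2, 3)

# One common way to end up with dtype '<U32': the floats were turned
# into strings somewhere along the way (e.g. astype(str), or values
# read in as text).
W1_bad = W1.astype(str)
print(W1_bad.dtype)              # <U32

# np.dot(W1_bad, X) raises a TypeError because NumPy cannot multiply
# string arrays; restoring a numeric dtype fixes it:
W1_ok = W1_bad.astype(float)
print(np.dot(W1_ok, X).shape)    # (4, 3)
```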

Thanks sjfischer. I got it to work!

Ok great @sdidde , good to hear you managed to solve the problem. Enjoy the rest of the course!

I got this error, but could you please explain more clearly what you mean about calling init inside forward prop? As far as I can see, the exercise says nothing about init or anything else.

In one of the previous discussions on this thread, the mistake that the person had made was calling the initialization routine from inside the forward propagation routine. That is a mistake: the initialization is called before you call forward propagation. Of course there are lots of other mistakes that are possible here. If the above doesn’t shed any light, then please show us the actual error output that you are getting.
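
As a sketch of the intended call order (the names follow the assignment’s conventions; the exact signatures may differ):

```python
# Initialization is done once, before forward propagation is called:
parameters = initialize_parameters(n_x, n_h, n_y)

# forward_propagation then works only with the parameters it is given.
# Calling initialize_parameters again inside it would silently replace
# the test case's values with fresh random ones.
A2, cache = forward_propagation(X, parameters)
```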

I do follow the steps: take the params from parameters and use the formulas to find A2, but my result is
(screenshot of an incorrect A2 value)
not the expected one.

That all sounds right. There aren’t that many moving parts here. Are you sure you included the bias terms for the computations of Z1 and Z2? And that you used tanh for layer 1 and sigmoid for layer 2? Other than that, there shouldn’t be much to go wrong.
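
For reference, those pieces fit together like this (a generic sketch of the standard formulas with assumed key names, not the graded solution):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def forward_pass(X, parameters):
    W1, b1 = parameters["W1"], parameters["b1"]
    W2, b2 = parameters["W2"], parameters["b2"]
    Z1 = np.dot(W1, X) + b1    # bias term included
    A1 = np.tanh(Z1)           # tanh for the hidden layer
    Z2 = np.dot(W2, A1) + b2   # bias term included
    A2 = sigmoid(Z2)           # sigmoid for the output layer
    return Z1, A1, Z2, A2
```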

I added print statements to show the intermediate values of Z1, A1 and Z2. Here’s what I get for that test case:

Z1 = [[ 1.7386459   1.74687437  1.74830797]
 [-0.81350569 -0.73394355 -0.78767559]
 [ 0.29893918  0.32272601  0.34788465]
 [-0.2278403  -0.2632236  -0.22336567]]
A1 = [[ 0.9400694   0.94101876  0.94118266]
 [-0.67151964 -0.62547205 -0.65709025]
 [ 0.29034152  0.31196971  0.33449821]
 [-0.22397799 -0.25730819 -0.2197236 ]]
Z2 = [[-1.30737426 -1.30844761 -1.30717618]]
A2 = [[0.21292656 0.21274673 0.21295976]]
Z1 = [[ 1.7386459   1.74687437  1.74830797]
 [-0.81350569 -0.73394355 -0.78767559]
 [ 0.29893918  0.32272601  0.34788465]
 [-0.2278403  -0.2632236  -0.22336567]]
A1 = [[ 0.9400694   0.94101876  0.94118266]
 [-0.67151964 -0.62547205 -0.65709025]
 [ 0.29034152  0.31196971  0.33449821]
 [-0.22397799 -0.25730819 -0.2197236 ]]
Z2 = [[-1.30737426 -1.30844761 -1.30717618]]
All tests passed!
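
Note that those numbers are internally consistent: applying the activations to the first entries of Z1 and Z2 reproduces the printed A1 and A2.

```python
import numpy as np

print(np.tanh(1.7386459))            # ~0.9400694, matches A1[0, 0]
print(1 / (1 + np.exp(1.30737426)))  # ~0.21292656, matches A2[0, 0]
```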

Do your Z1 values agree? Where do things go off the rails?

I do add the bias to Z1 and Z2 after np.dot, and I use tanh then sigmoid.
But my values of Z1 are too small:
(screenshot of Z1 values that are much too small)

Yes, those are radically different values. Are you sure you aren’t calling initialize_parameters within your forward_propagation code? Or maybe you are not referencing the actual parameter value passed in for X, and are picking up some global variable instead.

Thank you very much for your support. I figured it out: the variable names are nearly the same as in the previous run, and my function was using the values from the previous test instead of the current one.

It’s great news that you found the solution. Yes, they do the same test twice: once directly in that cell and then again when they call the test function from public_tests.py. I’m not sure why they duplicate the same test; it would have been better SQA methodology to have more diversity in the testing.

Yes, but the variable names between the test cases are really confusing!