# DLS Course 4 [Week 1] Exercise 3 - conv_forward - wrong Z mean

I get a wrong Z mean and I have been looking a solution for a while. I would appreciate your support.
I have checked the formula for slicing on the input volume with a loop over the output volume in the forum and seems ok.
I have also the right dimensions after slicing: (3, 3 ,4).

I get the following error:

```
Z's mean =
-0.010416666666666666
Z[0,2,1] =
[-2 8 0 3 0 0 0 0]
cache_conv[0][1][2][3] =
[-1.1191154 1.9560789 -0.3264995 -1.34267579]
First Test: Z's mean is incorrect. Expected: 0.5511276474566768

First Test: Z[0,2,1] is incorrect. Expected: [-2.17796037, 8.07171329, -0.5772704, 3.36286738, 4.48113645, -2.89198428, 10.99288867, 3.03171932]
Your output: [-2 8 0 3 0 0 0 0]
```

```
AssertionError                            Traceback (most recent call last)
in
     15
     16 conv_forward_test_1(z_mean, z_0_2_1, cache_0_1_2_3)
---> 17 conv_forward_test_2(conv_forward)

~/work/release/W1A1/public_tests.py in conv_forward_test_2(target)
    117     [-0.47552486, -0.16577702, -0.64971742, 1.63138295]])
    118
--> 119     assert np.isclose(Z_means, expected_Z), f"Wrong Z mean. Expected: {expected_Z} got: {Z_means}"
    120     assert np.allclose(cache_conv[0][1, 2], expected_conv), f"Values in Z are wrong"
    121

AssertionError: Wrong Z mean. Expected: -0.5384027772160062 got: -0.06314102564102564
```

Please click my name and message your notebook to me as an attachment.

The most common mistake is not handling the stride correctly. The stride applies in the input space and is used to calculate `vert_start` and `horiz_start`.

But it’s also odd that you get integer values for `Z[0,2,1]` instead of floating point values. I’ve never seen that error before and it seems like an important clue about the nature of what is wrong. E.g. are you using the index values directly instead of the actual array values produced from those index values in your computations?
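As a side note, here is a minimal demonstration (not from the assignment; the `Z_int`/`Z_float` names are my own) of how an integer-dtype array silently truncates floating point values on assignment, which would produce exactly that kind of output:

```python
import numpy as np

# Hypothetical value similar to the ones in the test output
value = -2.17796037

Z_int = np.zeros((2, 2), dtype=int)
Z_int[0, 0] = value          # assignment into an int array truncates toward zero
print(Z_int[0, 0])           # -2

Z_float = np.zeros((2, 2))   # np.zeros defaults to float64
Z_float[0, 0] = value
print(Z_float[0, 0])         # -2.17796037
```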


@paulinpaloalto
Reading your remark, I found and fixed one mistake: the dtype was "int" when initializing Z:
`Z = np.zeros((m, n_H, n_W, n_C), dtype=float)`

I still have an error:

```
Z's mean =
0.009946778332892073
Z[0,2,1] =
[-2.17796037  8.07171329 -0.5772704   3.36286738  0.          0.
  0.          0.        ]
cache_conv[0][1][2][3] =
[-1.1191154 1.9560789 -0.3264995 -1.34267579]
First Test: Z's mean is incorrect. Expected: 0.5511276474566768

First Test: Z[0,2,1] is incorrect. Expected: [-2.17796037, 8.07171329, -0.5772704, 3.36286738, 4.48113645, -2.89198428, 10.99288867, 3.03171932]
Your output: [-2.17796037  8.07171329 -0.5772704   3.36286738  0.          0.
  0.          0.        ]
```

```
AssertionError                            Traceback (most recent call last)
in
     15
     16 conv_forward_test_1(z_mean, z_0_2_1, cache_0_1_2_3)
---> 17 conv_forward_test_2(conv_forward)

~/work/release/W1A1/public_tests.py in conv_forward_test_2(target)
    117     [-0.47552486, -0.16577702, -0.64971742, 1.63138295]])
    118
--> 119     assert np.isclose(Z_means, expected_Z), f"Wrong Z mean. Expected: {expected_Z} got: {Z_means}"
    120     assert np.allclose(cache_conv[0][1, 2], expected_conv), f"Values in Z are wrong"
    121

AssertionError: Wrong Z mean. Expected: -0.5384027772160062 got: -0.07609363820649982
```

The loop range for the innermost loop over channels is incorrect.

This docstring comment should help you pick the number of channels correctly:

```
A_prev -- output activations of the previous layer,
          numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)
```

Remember that the number of output channels equals the number of filters, which is learned by the conv layer.
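To make that concrete, here is a small sketch (shapes chosen to match the test case in this thread; `n_C_out` and `n_C_in` are illustrative names, not from the notebook):

```python
import numpy as np

m, n_H_prev, n_W_prev, n_C_prev = 2, 5, 7, 4   # A_prev shape from the first test
f, n_C = 3, 8                                  # filter size and number of filters

A_prev = np.zeros((m, n_H_prev, n_W_prev, n_C_prev))
W = np.zeros((f, f, n_C_prev, n_C))

# The output channel count is the number of filters, read from W's last axis...
n_C_out = W.shape[3]      # 8 -- correct bound for the innermost loop
# ...NOT the input channel count from A_prev's last axis.
n_C_in = A_prev.shape[3]  # 4 -- looping over this leaves half of Z untouched (zeros)

print(n_C_out, n_C_in)    # 8 4
```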

Thank you @balaji.ambresh,
I understand my mistake now: I did not pick the right number of channels. It works now!
Thanks a lot!

I think I’ve accounted for the stride in my code. But still I’m getting the assertion error saying “wrong mean”. I may be doing it wrong. Can someone help me?

Notice that your Z[0,2,1] values are correct up to the point where they all start being zero. So that probably means that you are not managing the “channel” index values correctly. Here are my outputs on that first test with some added print statements:

```
stride 2 pad 1
New dimensions = 3 by 4
Shape Z = (2, 3, 4, 8)
Shape A_prev_pad = (2, 7, 9, 4)
Z[0,0,0,0] = -2.651123629553914
Z[1,2,3,7] = 0.4427056509973153
Z's mean =
0.5511276474566768
Z[0,2,1] =
[-2.17796037  8.07171329 -0.5772704   3.36286738  4.48113645 -2.89198428
 10.99288867  3.03171932]
cache_conv[0][1][2][3] =
[-1.1191154   1.9560789  -0.3264995  -1.34267579]
First Test: All tests passed!
```

Notice that 4 of your output values are correct and then 4 are zero. Also notice that the input has 4 channels, but the output has 8 channels. So you may have used the input channel dimension as the output channel dimension.

Here’s a post that explains in words how all the loops work in `conv_forward`. If my theory above doesn’t play out, then please have a look at that thread and see if it sheds any additional light.

Thanks. Choosing the output channel dimension solves it for me.

I also get a "Z mean error", but with a different value of Z mean:

```
shape A_prev_pad = (2, 7, 9, 4)
shape Z = (2, 3, 4, 8)
Z0000 = 0.7820091833544316
Z1237 = 1.5001809877403456
Z's mean =
0.18450010825895893
Z[0,2,1] =
[-1.28877104  7.13775675 -5.93162373  4.90241386  1.0448365   0.35461314
 -0.26936564 -9.44878922]
cache_conv[0][1][2][3] =
[-1.1191154 1.9560789 -0.3264995 -1.34267579]
First Test: Z's mean is incorrect. Expected: 0.5511276474566768
Your output: 0.18450010825895893 . Make sure you include stride in your calculation

First Test: Z[0,2,1] is incorrect. Expected: [-2.17796037, 8.07171329, -0.5772704, 3.36286738, 4.48113645, -2.89198428, 10.99288867, 3.03171932]
Your output: [-1.28877104  7.13775675 -5.93162373  4.90241386  1.0448365   0.35461314
 -0.26936564 -9.44878922] Make sure you include stride in your calculation
```

I have included the stride in the formulas to calculate n_H and n_W, and in the for-loops to calculate vert_start and horiz_start.

Further, I initialized Z this way: `Z = np.zeros((m, n_H, n_W, n_C), dtype=float)`.

What do I do wrong?

If you mean that the range of the loops includes the stride, that is your mistake. The striding happens in the input space, but the loops are over the output space (the dimensions of Z) and you must touch every point in the output space. The skipping caused by the stride happens only in the input space.

Did you follow the link I gave earlier in this thread? If my explanation above is not enough to get you to a correct solution, please read this post.
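To illustrate where the stride belongs, here is a minimal sketch under my own assumptions (not the official notebook solution; `conv_forward_sketch` assumes the input is already padded and that `b` has shape `(1, 1, 1, n_C)`):

```python
import numpy as np

def conv_forward_sketch(A_prev_pad, W, b, stride):
    """Forward convolution sketch; A_prev_pad is assumed already padded."""
    m, H_pad, W_pad, _ = A_prev_pad.shape
    f, _, _, n_C = W.shape                     # output channels come from W
    n_H = (H_pad - f) // stride + 1
    n_W = (W_pad - f) // stride + 1
    Z = np.zeros((m, n_H, n_W, n_C))           # float64 by default
    for i in range(m):                         # the loops cover the OUTPUT space
        for h in range(n_H):
            vert_start = stride * h            # stride appears only in INPUT indices
            for w in range(n_W):
                horiz_start = stride * w
                a_slice = A_prev_pad[i, vert_start:vert_start + f,
                                     horiz_start:horiz_start + f, :]
                for c in range(n_C):
                    Z[i, h, w, c] = np.sum(a_slice * W[..., c]) + b[0, 0, 0, c]
    return Z
```

With the first test's shapes (padded input `(2, 7, 9, 4)`, `W` of shape `(3, 3, 4, 8)`, stride 2), this produces `Z` of shape `(2, 3, 4, 8)`, matching the expected dimensions.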

I use the stride to calculate vert_start and horiz_start (and indirectly vert_end and horiz_end), as the post you mention says:

- multiply the index h by the stride for vert_start
- add the filter height f for vert_end

I don't use the stride to determine the range. I am looping over m, n_H, n_W and n_C, so I think I am touching every point in the output space.

Are you sure your notebook is in a consistent state? Try:

1. Kernel → Restart and Clear Output
2. Save
3. Cell → Run All

Then check the test results for the `conv_forward` cell. Are they still incorrect? If so, it’s also worth checking the shapes of everything. Here’s my output with some added print statements:

```
stride 2 pad 1
New dimensions = 3 by 4
Shape Z = (2, 3, 4, 8)
Shape A_prev = (2, 5, 7, 4)
Shape A_prev_pad = (2, 7, 9, 4)
Z[0,0,0,0] = -2.651123629553914
Z[1,2,3,7] = 0.4427056509973153
Z's mean =
0.5511276474566768
Z[0,2,1] =
[-2.17796037  8.07171329 -0.5772704   3.36286738  4.48113645 -2.89198428
 10.99288867  3.03171932]
cache_conv[0][1][2][3] =
[-1.1191154   1.9560789  -0.3264995  -1.34267579]
First Test: All tests passed!
New dimensions = 9 by 11
Shape Z = (2, 9, 11, 8)
Shape A_prev = (2, 5, 7, 4)
Shape A_prev_pad = (2, 11, 13, 4)
Z[0,0,0,0] = 1.4306973717089302
Z[1,8,10,7] = -0.6695027738712113
New dimensions = 2 by 3
Shape Z = (2, 2, 3, 8)
Shape A_prev = (2, 5, 7, 4)
Shape A_prev_pad = (2, 5, 7, 4)
Z[0,0,0,0] = 8.430161780192094
Z[1,1,2,7] = -0.2674960203423288
New dimensions = 13 by 15
Shape Z = (2, 13, 15, 8)
Shape A_prev = (2, 5, 7, 4)
Shape A_prev_pad = (2, 17, 19, 4)
Z[0,0,0,0] = 0.5619706599772282
Z[1,12,14,7] = -1.622674822605305
Second Test: All tests passed!
```

Thanks, I didn’t select the right shape for a_prev_pad. I fixed it and it works.


That’s great news that you found the solution. There is plenty of interesting material ahead. Onward!