Exercise 5 - conv_backward

I got this error

dA_mean = 1.4524377775388075
dW_mean = -0.30435376426377925
db_mean = 7.839232564616838

AssertionError Traceback (most recent call last)
23 assert db.shape == (1, 1, 1, 8), f"Wrong shape for db {db.shape} != (1, 1, 1, 8)"
24 assert np.isclose(np.mean(dA), 1.4524377), "Wrong values for dA"
---> 25 assert np.isclose(np.mean(dW), 1.7269914), "Wrong values for dW"
26 assert np.isclose(np.mean(db), 7.8392325), "Wrong values for db"

AssertionError: Wrong values for dW

I don't understand what \mathrel{+} means. Can anyone explain it to me, please? I think the error comes from this, because I wrote dW[:,:,:,c] += a_slice * dZ[i, h, w, c].
When I wrote \mathrel{+} instead, I got an error.

I am confused. Why did you use "\mathrel{+}" if you don't know what it does?

It is written in the example explanation, so I used it and got another error.

But the error I mentioned is not actually connected with \mathrel.

"\mathrel{+}" is LaTeX markup, not a programming-language statement. There was probably a problem with the page rendering in the instructions.
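For context, a small LaTeX sketch of what probably happened (assuming the course page renders its formulas from LaTeX source, which is typical):

```latex
% \mathrel{...} only tells the typesetter to space its argument like a
% relation symbol (such as = or <). The instructions likely contained
% something along these lines for the update rule:
$dW[:,:,:,c] \mathrel{+}= a\_slice \cdot dZ[i, h, w, c]$
% When the page fails to render, the raw markup \mathrel{+} leaks into
% the text. In Python you just write the += operator directly.
```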

The assertion tells you your dW values are incorrect.


Thank you for your response. Here is my code for calculating dA:
dA_prev = np.zeros((m, n_H_prev, n_W_prev, n_C_prev))

A_prev_pad = zero_pad(A_prev, pad)
dA_prev_pad = zero_pad(dA_prev, pad)

a_slice = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]

dA_prev[i, :, :, :] = da_prev_pad[pad:-pad, pad:-pad, :]

I could not find the wrong part.

Hey Lilith!
What you wrote looks correct. You probably made a mistake somewhere else, e.g. in the loops (indentation or something similar) or in the slicing.
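To help spot a loop or indentation slip, here is a minimal sketch of the whole backward pass, assuming scalar stride and pad and the usual shape conventions (A_prev is (m, n_H_prev, n_W_prev, n_C_prev), W is (f, f, n_C_prev, n_C), dZ is (m, n_H, n_W, n_C)). The zero_pad here is a stand-in for the course helper, and this is only an illustration of the loop structure, not the official solution:

```python
import numpy as np

def zero_pad(X, pad):
    # Pad only the height and width dimensions of a (m, n_H, n_W, n_C) batch.
    return np.pad(X, ((0, 0), (pad, pad), (pad, pad), (0, 0)))

def conv_backward(dZ, A_prev, W, b, stride, pad):
    m, n_H, n_W, n_C = dZ.shape
    f = W.shape[0]

    dA_prev = np.zeros_like(A_prev)
    dW = np.zeros_like(W)
    db = np.zeros_like(b)

    A_prev_pad = zero_pad(A_prev, pad)
    dA_prev_pad = zero_pad(dA_prev, pad)

    for i in range(m):                        # loop over training examples
        a_prev_pad = A_prev_pad[i]
        da_prev_pad = dA_prev_pad[i]          # view: += below updates dA_prev_pad
        for h in range(n_H):                  # vertical output positions
            vert_start = h * stride
            vert_end = vert_start + f
            for w in range(n_W):              # horizontal output positions
                horiz_start = w * stride
                horiz_end = horiz_start + f
                for c in range(n_C):          # output channels
                    a_slice = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
                    da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += \
                        W[:, :, :, c] * dZ[i, h, w, c]
                    dW[:, :, :, c] += a_slice * dZ[i, h, w, c]
                    db[:, :, :, c] += dZ[i, h, w, c]
        # Strip the padding back off for this example's gradient.
        dA_prev[i] = da_prev_pad[pad:-pad, pad:-pad, :] if pad > 0 else da_prev_pad

    return dA_prev, dW, db
```

Note that vert_start/vert_end (built from h) always come first in the slice, matching the [i, h, w, c] indexing of dZ.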


I ran into this same issue! I found out it was due to the dZ vertical/horizontal order not matching the matrix it is multiplied by. Because dZ is indexed [i, h, w, c], the vertical dimension has to come first when slicing the padded activation.

dZ is also used to calculate dA, so any mismatch there could cause issues later on too. The conv_backward() test uses a (10, 4, 4, 3) activation as input and checks the result with np.mean(), which means transposing the matrix by mixing up the axes will not by itself cause the mean-based test to fail.
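The point about np.mean() can be checked directly. A quick sketch (the shape matches the test activation mentioned above, but the data here is random, not the grader's):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 4, 4, 3))       # same shape as the test activation
# Swap the height and width axes, as a bug mixing up h and w would do.
A_swapped = np.transpose(A, (0, 2, 1, 3))

# The mean is over all elements, so it is identical for both arrays:
# a np.mean()-based assertion cannot catch an axis swap on its own.
assert np.isclose(np.mean(A), np.mean(A_swapped))
```

With a square 4x4 spatial grid the swapped array even keeps the same shape, so the shape assertions pass too.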