Week 3 Backward_propagation_test fails

I am getting this output from the backward_propagation_test:

My Output

dW1 = [[ 0.00301023 -0.00747267]
 [ 0.00257967 -0.00641287]
 [-0.00156892  0.003893  ]
 [-0.00652037  0.01618243]]
db1 = [[ 0.00176201]
 [ 0.00150994]
 [-0.00091736]
 [-0.00381422]]
dW2 = [[ 0.00078841  0.01765429 -0.00084166 -0.01022527]]
db2 = [[-0.16655712]]
Error: The function should return a numpy array.
Error: Wrong shape
Error: Wrong output
 0  Tests passed
 3  Tests failed

Expected output

dW1 = [[ 0.00301023 -0.00747267]
 [ 0.00257968 -0.00641288]
 [-0.00156892  0.003893  ]
 [-0.00652037  0.01618243]]
db1 = [[ 0.00176201]
 [ 0.00150995]
 [-0.00091736]
 [-0.00381422]]
dW2 = [[ 0.00078841  0.01765429 -0.00084166 -0.01022527]]
db2 = [[-0.16655712]]

My output almost matches the expected output, but I don’t understand the errors.
Any help, please?

Starting with the first error, you should check the type of your outputs, i.e. print the type of grads["dW1"], grads["db1"], etc.

Sorry, I don’t get it.
grads is type <class 'dict'>, and grads["dW1"], grads["db1"], grads["dW2"], grads["db2"] are all <class 'numpy.ndarray'> in my output too. I think that’s OK.
Even the shapes look OK: dW1 (4, 2), db1 (4, 1), dW2 (1, 4), db2 (1, 1), the same as the expected output.
The only differences I can see between my output and the expected output are the values of dW1[1][0], dW1[1][1] and db1[1], and those differences are less than 0.00000001.
I can’t see what’s wrong.
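For anyone debugging the same failure, here is a minimal sketch of the type/shape checks suggested above. The values are copied from the output pasted in this thread; the grads dict itself is a stand-in, not the grader's actual test harness.

```python
import numpy as np

# Hypothetical gradients dict, filled with the values posted in this thread.
grads = {
    "dW1": np.array([[ 0.00301023, -0.00747267],
                     [ 0.00257967, -0.00641287],
                     [-0.00156892,  0.003893  ],
                     [-0.00652037,  0.01618243]]),
    "db1": np.array([[ 0.00176201],
                     [ 0.00150994],
                     [-0.00091736],
                     [-0.00381422]]),
    "dW2": np.array([[ 0.00078841,  0.01765429, -0.00084166, -0.01022527]]),
    "db2": np.array([[-0.16655712]]),
}

# Print the type and shape of each gradient to rule out type/shape issues.
for name, g in grads.items():
    print(name, type(g), g.shape)
```

If every entry prints as <class 'numpy.ndarray'> with the expected shape, the remaining "Wrong output" error points at the values themselves, i.e. at the equations used to compute them.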

Check that you have written the correct equations. I also encountered this error because I used the wrong variable in an equation.


To compute dZ1 you’ll need to compute g^[1]'(Z^[1]). Since g^[1] is the tanh activation function, if a = g^[1](z) then g^[1]'(z) = 1 - a^2. So you can compute g^[1]'(Z^[1]) using (1 - np.power(A1, 2)).
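The identity above is easy to check numerically. This sketch (with made-up Z1 values, not the assignment's data) compares 1 - np.power(A1, 2) against a central-difference estimate of the derivative of tanh:

```python
import numpy as np

# Example pre-activation values (made up for illustration).
Z1 = np.linspace(-2.0, 2.0, 9)
A1 = np.tanh(Z1)  # a = g(z), the tanh activation

# Derivative expressed in terms of the activation, as in the hint above.
analytic = 1 - np.power(A1, 2)

# Numerical derivative of tanh via central differences, for comparison.
eps = 1e-6
numeric = (np.tanh(Z1 + eps) - np.tanh(Z1 - eps)) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-8))  # → True
```

Note the derivative is written in terms of A1 (the activation), not Z1, precisely because tanh'(z) = 1 - tanh(z)^2 and A1 already holds tanh(Z1).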

I also got that kind of error and solved it by reading this more carefully, but I don’t actually understand it. Why is it np.power(A1, 2) and not np.power(Z1, 2)?
Thanks in advance