Course 1: Week 4, Exercise 9 - L_model_backward

I keep getting this error when I try to test out my L_model_backward(AL, Y, caches) function.

IndexError Traceback (most recent call last)
in
1 t_AL, t_Y_assess, t_caches = L_model_backward_test_case()
----> 2 grads = L_model_backward(t_AL, t_Y_assess, t_caches)
3
4 print("dA0 = " + str(grads['dA0']))
5 print("dA1 = " + str(grads['dA1']))

in L_model_backward(AL, Y, caches)
59 # YOUR CODE STARTS HERE
60 curren_cache = caches[l]
--> 61 dA_prev_temp, dW_temp, db_temp = linear_activation_backward(dA_prev_temp, current_cache, activation="relu")
62 grads["dA" + str(l)] = dA_prev_temp
63 grads["dW" + str(l + 1)] = dW_temp

in linear_activation_backward(dA, cache, activation)
22 # dA_prev, dW, db = …
23 # YOUR CODE STARTS HERE
--> 24 dZ = relu_backward(dA, activation_cache)
25 dA_prev, dW, db = linear_backward(dZ, linear_cache)
26 # YOUR CODE ENDS HERE

~/work/release/W4A1/dnn_utils.py in relu_backward(dA, cache)
54
55 # When z <= 0, you should set dz to 0 as well.
--> 56 dZ[Z <= 0] = 0
57
58 assert (dZ.shape == Z.shape)

IndexError: boolean index did not match indexed array along dimension 0; dimension is 3 but corresponding boolean dimension is 1
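The message itself pins down the mismatch: `relu_backward` copies `dA` into `dZ` and then masks it with `Z <= 0`, so the `dA` passed in and the `Z` stored in that layer's cache must have identical shapes. A tiny NumPy reproduction (with hypothetical shapes, not the assignment's actual data) shows the same failure:

```python
import numpy as np

# Hypothetical shapes that reproduce the same failure: the dA handed to
# relu_backward has 3 rows, but the cached Z for that layer has only 1.
Z = np.zeros((1, 4))          # cached pre-activation: shape (1, 4)
dA = np.ones((3, 4))          # mismatched gradient:   shape (3, 4)

dZ = np.array(dA, copy=True)  # what relu_backward does internally
try:
    dZ[Z <= 0] = 0            # boolean mask has 1 row, dZ has 3 -> IndexError
except IndexError as err:
    print(err)
```

In other words, the gradient being passed down the chain does not correspond to the layer whose cache is being used, which points at the first argument of the `linear_activation_backward` call.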

I think the problem lies in the loop. I'm just not sure whether the arguments in
dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l + 1)], current_cache, activation="relu") are correct. Please help!

Hi @Veeeee, and welcome! First, you seem to have a typo on line 60 of the traceback. Also, check the arguments in your call to linear_activation_backward (line 61 of the traceback): the first argument should come from the grads dictionary, indexed as a function of the loop variable l (which indexes the layer as the routine moves backward through the network). That should get you started with the debugging. This assignment is well worth the time and effort; it is foundational to a solid understanding of deep learning.
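For anyone landing here later, the hidden-layer loop the mentor describes reads the grads dictionary indexed by l + 1. Below is a minimal, self-contained sketch of that structure. It is not the official solution code: the helper functions are simplified stand-ins that assume the assignment's cache convention (cache = (linear_cache, Z), with linear_cache = (A_prev, W, b)).

```python
import numpy as np

def relu_backward(dA, Z):
    dZ = np.array(dA, copy=True)
    dZ[Z <= 0] = 0                      # shapes must agree, hence the IndexError
    return dZ

def sigmoid_backward(dA, Z):
    s = 1.0 / (1.0 + np.exp(-Z))
    return dA * s * (1 - s)

def linear_backward(dZ, linear_cache):
    A_prev, W, b = linear_cache
    m = A_prev.shape[1]
    dW = dZ @ A_prev.T / m
    db = np.sum(dZ, axis=1, keepdims=True) / m
    dA_prev = W.T @ dZ
    return dA_prev, dW, db

def linear_activation_backward(dA, cache, activation):
    linear_cache, Z = cache
    dZ = relu_backward(dA, Z) if activation == "relu" else sigmoid_backward(dA, Z)
    return linear_backward(dZ, linear_cache)

def L_model_backward(AL, Y, caches):
    grads = {}
    L = len(caches)
    dAL = -(np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))

    # Output (sigmoid) layer seeds grads["dA" + str(L - 1)].
    current_cache = caches[L - 1]
    dA_prev_temp, dW_temp, db_temp = linear_activation_backward(
        dAL, current_cache, activation="sigmoid")
    grads["dA" + str(L - 1)] = dA_prev_temp
    grads["dW" + str(L)] = dW_temp
    grads["db" + str(L)] = db_temp

    # Hidden (relu) layers: the first argument comes from the grads
    # dictionary, indexed by l + 1.
    for l in reversed(range(L - 1)):
        current_cache = caches[l]       # note the spelling: current_cache
        dA_prev_temp, dW_temp, db_temp = linear_activation_backward(
            grads["dA" + str(l + 1)], current_cache, activation="relu")
        grads["dA" + str(l)] = dA_prev_temp
        grads["dW" + str(l + 1)] = dW_temp
        grads["db" + str(l + 1)] = db_temp

    return grads
```

With this structure, each layer's dA is matched to the Z cached for that same layer, so the boolean mask in relu_backward always fits.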

Yup got it. Thank you