Course 1 Week 4 Exercise 9

Hi everyone,

I am quite stuck at this portion.

---> 41 current_cache = linear_activation_backward(dAL, caches, "sigmoid")

Error: TypeError: bad operand type for unary -: 'tuple'

Any help would be much appreciated.

You must not be showing us the entire exception trace. There is no minus sign in that line of code.

TypeError Traceback (most recent call last)
in
1 t_AL, t_Y_assess, t_caches = L_model_backward_test_case()
----> 2 grads = L_model_backward(t_AL, t_Y_assess, t_caches)
3
4 print("dA0 = " + str(grads['dA0']))
5 print("dA1 = " + str(grads['dA1']))

in L_model_backward(AL, Y, caches)
39 # grads["db" + str(L)] = …
40 # YOUR CODE STARTS HERE
---> 41 current_cache = linear_activation_backward(dAL, caches, "sigmoid")
42 dA_prev_temp, dW_temp, db_temp = caches
43 grads["dA" + str(L-1)] = caches[0]

in linear_activation_backward(dA, cache, activation)
32 # dA_prev, dW, db = …
33 # YOUR CODE STARTS HERE
---> 34 dZ = sigmoid_backward(dA, activation_cache)
35 dA_prev, dW, db = linear_backward(dZ, linear_cache)
36 # YOUR CODE ENDS HERE

~/work/release/W4A1/dnn_utils.py in sigmoid_backward(dA, cache)
74 Z = cache
75
---> 76 s = 1/(1+np.exp(-Z))
77 dZ = dA * s * (1-s)
78

TypeError: bad operand type for unary -: 'tuple'

That's the entire error message.

That’s more like it. That means the activation_cache value you are passing down to sigmoid_backward is wrong. It should be just the “activation cache”, which is the single value Z. You must be passing either the whole cache (which is a 2-tuple) or the linear cache (which is a 3-tuple).

It is a general principle of debugging that a perfectly correct subroutine can still throw an error if you pass it bad arguments. You can examine the logic in sigmoid_backward by opening the file dnn_utils.py to understand what is going wrong, and then track backwards up the call stack to figure out where your code passed the wrong thing.
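To make the cache layout concrete, here is a hypothetical sketch (the variable names mirror the assignment's conventions, but the shapes here are invented) showing what each layer stores and why passing the whole 2-tuple to a function that negates Z reproduces exactly this TypeError:

```python
import numpy as np

np.random.seed(1)
A_prev = np.random.randn(3, 2)          # activations from the previous layer
W, b = np.random.randn(1, 3), np.zeros((1, 1))
Z = W @ A_prev + b                      # pre-activation value

linear_cache = (A_prev, W, b)           # 3-tuple used by linear_backward
activation_cache = Z                    # single array used by sigmoid_backward
cache = (linear_cache, activation_cache)  # the 2-tuple stored per layer
caches = [cache]                        # one entry per layer

L = len(caches)
current_cache = caches[L - 1]           # select the LAST layer's cache...
linear_cache, activation_cache = current_cache  # ...then split it in two

# activation_cache is now a plain ndarray, so np.exp(-activation_cache) works.
print(type(activation_cache))           # <class 'numpy.ndarray'>

# Negating the whole 2-tuple instead fails exactly like in the traceback:
try:
    -cache
except TypeError as e:
    print(e)                            # bad operand type for unary -: 'tuple'
```

The key point is that `caches` itself is a list of per-layer 2-tuples, so you have to index into it and unpack before anything reaches sigmoid_backward.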

Thank you for the advice. I have since made changes, but am now facing a shape issue.

Error: Wrong shape for variable dA0.
Error: Wrong shape for variable dW1.
Error: Wrong shape for variable db1.
Error: Wrong output for variable dA1.
Error: Wrong output for variable dW2.
Error: Wrong output for variable db2.
Error: Wrong output for variable dA0.
Error: Wrong output for variable dW1.
Error: Wrong output for variable db1.
1 Tests passed
2 Tests failed

AssertionError Traceback (most recent call last)
in
9 print("db2 = " + str(grads['db2']))
10
---> 11 L_model_backward_test(L_model_backward)

~/work/release/W4A1/public_tests.py in L_model_backward_test(target)
442 ]
443
---> 444 multiple_test(test_cases, target)
445
446 def update_parameters_test(target):

~/work/release/W4A1/test_utils.py in multiple_test(test_cases, target)
140 print('\033[92m', success, " Tests passed")
141 print('\033[91m', len(test_cases) - success, " Tests failed")
---> 142 raise AssertionError("Not all tests were passed for {}. Check your equations and avoid using global variables inside the function.".format(target.__name__))
143

AssertionError: Not all tests were passed for L_model_backward. Check your equations and avoid using global variables inside the function.

When you have a shape mismatch, the best debugging strategy is to work out the “dimensional analysis”. Look at the test case: what are the shapes of the inputs? Based on that, what shape should dA0 be? Now compare that to the shape your code actually produces, and figure out why it came out wrong. In the case of back propagation, it actually makes more sense to start with dA2, or whatever the output layer is: you start at the output and work your way backwards.
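As a concrete illustration of that dimensional analysis (the layer sizes here are invented for a toy 2-layer net, not the ones in the actual test case), the rule is that every gradient has exactly the same shape as the quantity it is the gradient of, so you can write down the expected shape of each entry in grads before running anything:

```python
import numpy as np

# Toy 2-layer network: layer sizes (n_x, n_1, n_y) = (4, 3, 1), m = 2 examples.
n_x, n_1, n_y, m = 4, 3, 1, 2

# Each gradient mirrors the shape of its forward-pass counterpart:
expected = {
    "dA1": (n_1, m),    # same shape as A1
    "dW2": (n_y, n_1),  # same shape as W2
    "db2": (n_y, 1),    # same shape as b2
    "dA0": (n_x, m),    # same shape as A0 (the input X)
    "dW1": (n_1, n_x),  # same shape as W1
    "db1": (n_1, 1),    # same shape as b1
}

# When debugging, compare what your L_model_backward produced against
# this table; stand-in zero arrays play the role of your grads here.
grads = {k: np.zeros(v) for k, v in expected.items()}
for name, shape in expected.items():
    assert grads[name].shape == shape, \
        f"{name}: got {grads[name].shape}, expected {shape}"
print("all gradient shapes consistent")
```

The first entry whose produced shape disagrees with the table tells you which layer's backward step to inspect.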