Issues with linear_activation_backward

I am having issues with the linear_activation_backward function while evaluating dA_prev, dW, and db. I don’t really understand the error; I am unsure whether it comes from my logic or from a coding mistake. Any help would be greatly appreciated! Here is the error I get.

ValueError                                Traceback (most recent call last)
in <module>
      6 print("With sigmoid: db = " + str(t_db))
      7
----> 8 t_dA_prev, t_dW, t_db = linear_activation_backward(t_dAL, t_linear_activation_cache, activation = "relu")
      9 print("With relu: dA_prev = " + str(t_dA_prev))
     10 print("With relu: dW = " + str(t_dW))

in linear_activation_backward(dA, cache, activation)
     23     # YOUR CODE STARTS HERE
     24     dZ= relu_backward(dA, activation_cache)
---> 25     dA_prev, dW, db =linear_backward(dZ, activation_cache)
     26
     27     # YOUR CODE ENDS HERE

in linear_backward(dZ, cache)
     14     db -- Gradient of the cost with respect to b (current layer l), same shape as b
     15     """
---> 16     A_prev, W, b = cache
     17     m = A_prev.shape[1]
     18

ValueError: not enough values to unpack (expected 3, got 1)

You are passing the wrong part of the cache value to linear_backward. They gave you the logic to split the cache into the linear_cache and the activation_cache, right? If you take a look, you’ll find that the activation cache has one entry (Z) and the linear cache has three entries (A, W, b). That should be a clue when you compare it to the error you are getting.
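
To make that concrete, here is a small standalone reproduction with made-up shapes (these are not the notebook’s actual test values), showing why the message says “expected 3, got 1”:

import numpy as np

# Made-up example: Z has a single row, like the output-layer Z in the test.
Z = np.random.randn(1, 2)
activation_cache = Z                 # the activation cache is just Z

A_prev = np.random.randn(3, 2)
W = np.random.randn(1, 3)
b = np.random.randn(1, 1)
linear_cache = (A_prev, W, b)        # the linear cache has three entries

A, Wout, bout = linear_cache         # fine: three entries, three names

# Iterating over a (1, 2) array yields only its single row, so this
# unpacking fails exactly like the traceback above:
A, Wout, bout = activation_cache     # ValueError: not enough values to unpack (expected 3, got 1)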

Wait, I think I am lost. Do you mind expanding a bit more on what you mean in your previous response, please?

Please invest some energy in looking at how the caches work. All the information you need is there in the code, but you have to read it and understand it. They are created during forward propagation and then used during back propagation. What we get at each layer for the full cache entry looks like this:

((A, W, b), Z)

That is a 2-tuple, meaning a tuple with two elements. The first element of the tuple is itself a 3-tuple:

(A, W, b)

The second element of the 2-tuple is just the single value Z.
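
As a rough sketch with made-up shapes (not the course’s actual values), the forward pass builds that per-layer entry like this:

import numpy as np

A = np.random.randn(3, 2)    # activations from the previous layer
W = np.random.randn(1, 3)    # weights of the current layer
b = np.random.randn(1, 1)    # bias of the current layer
Z = W @ A + b                # pre-activation value

linear_cache = (A, W, b)                  # 3-tuple
activation_cache = Z                      # just the single array Z
cache = (linear_cache, activation_cache)  # the 2-tuple ((A, W, b), Z)

print(len(cache))        # 2
print(len(cache[0]))     # 3      -> the (A, W, b) part
print(cache[1].shape)    # (1, 2) -> the Z part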

Now look at the template code for linear_activation_backward. They give you this line:

linear_cache, activation_cache = cache

The input for the cache variable is the complete layer entry that I showed above. So what is in the two variables linear_cache and activation_cache after that statement?
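
Continuing the sketch above, after that line each name simply holds one half of the 2-tuple:

linear_cache, activation_cache = cache

# linear_cache     -> (A, W, b), a 3-tuple
# activation_cache -> Z, a single array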

Now look at the logic in linear_backward: it gets a cache argument. What does it do with that argument? So which of those two entries is the right one to pass to linear_backward? And which did you actually pass?
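
If it helps to see the distinction spelled out, the two calls end up shaped roughly like this (only a sketch using the notebook’s function names, not a complete solution):

linear_cache, activation_cache = cache                # split the per-layer cache first

dZ = relu_backward(dA, activation_cache)              # relu_backward only needs Z
dA_prev, dW, db = linear_backward(dZ, linear_cache)   # linear_backward needs (A_prev, W, b)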

If you want to see the code for relu_backward and do the same kind of analysis, you can find it by clicking “File → Open” and then opening the Python file with the utility functions.