Week 4 Step by Step: linear_activation_backward

This looks like an error in the function test. I'm not setting the contents of the cache myself, but the test fails to extract the required number of values inside linear_backward, even though linear_backward passed its own test when I coded it.

t_dAL, t_linear_activation_cache = linear_activation_backward_test_case()

t_dA_prev, t_dW, t_db = linear_activation_backward(t_dAL, t_linear_activation_cache, activation = "sigmoid")
print("With sigmoid: dA_prev = " + str(t_dA_prev))
print("With sigmoid: dW = " + str(t_dW))
print("With sigmoid: db = " + str(t_db))

t_dA_prev, t_dW, t_db = linear_activation_backward(t_dAL, t_linear_activation_cache, activation = "relu")
print("With relu: dA_prev = " + str(t_dA_prev))
print("With relu: dW = " + str(t_dW))
print("With relu: db = " + str(t_db))


ValueError                                Traceback (most recent call last)
      1 t_dAL, t_linear_activation_cache = linear_activation_backward_test_case()
----> 3 t_dA_prev, t_dW, t_db = linear_activation_backward(t_dAL, t_linear_activation_cache, activation = "sigmoid")
      4 print("With sigmoid: dA_prev = " + str(t_dA_prev))
      5 print("With sigmoid: dW = " + str(t_dW))

in linear_activation_backward(dA, cache, activation)
     36     dZ = sigmoid_backward(dA, activation_cache)
---> 37     dA_prev, dW, db = linear_backward(dZ, cache)

in linear_backward(dZ, cache)
     14     db -- Gradient of the cost with respect to b (current layer l), same shape as b
     15     """
---> 16     A_prev, W, b = cache
     17     m = A_prev.shape[1]

ValueError: not enough values to unpack (expected 3, got 2)

Please use the right cache variable.
cache is a tuple made of linear and activation caches.
Here’s a comment from linear_backward that can help figure things out:
cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer
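To make the cache structure concrete, here is a minimal illustrative sketch. The function names mirror the assignment, but the bodies are simplified stand-ins rather than the course's actual code: the point is only that the outer 2-tuple cache must be split before each part is passed along.

```python
import numpy as np

def sigmoid_backward(dA, activation_cache):
    # Stand-in: activation_cache holds Z from the forward pass
    Z = activation_cache
    s = 1 / (1 + np.exp(-Z))
    return dA * s * (1 - s)

def linear_backward(dZ, cache):
    # cache -- tuple of values (A_prev, W, b) from forward propagation
    A_prev, W, b = cache
    m = A_prev.shape[1]
    dW = dZ @ A_prev.T / m
    db = np.sum(dZ, axis=1, keepdims=True) / m
    dA_prev = W.T @ dZ
    return dA_prev, dW, db

def linear_activation_backward(dA, cache, activation="sigmoid"):
    # cache is a 2-tuple: (linear_cache, activation_cache)
    linear_cache, activation_cache = cache
    if activation == "sigmoid":
        dZ = sigmoid_backward(dA, activation_cache)
    else:  # "relu"
        Z = activation_cache
        dZ = dA * (Z > 0)
    # Pass only the 3-tuple linear cache, not the whole 2-tuple
    return linear_backward(dZ, linear_cache)

# Tiny example with made-up shapes (3 input units, 1 output unit, 2 examples)
rng = np.random.default_rng(0)
A_prev = rng.standard_normal((3, 2))
W = rng.standard_normal((1, 3))
b = np.zeros((1, 1))
Z = W @ A_prev + b
cache = ((A_prev, W, b), Z)
dA = rng.standard_normal((1, 2))
dA_prev, dW, db = linear_activation_backward(dA, cache, activation="sigmoid")
print(dA_prev.shape, dW.shape, db.shape)  # (3, 2) (1, 3) (1, 1)
```

Passing the outer `cache` straight into `linear_backward` makes its 3-way unpack see only 2 items, which is exactly the error in the traceback.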

Thanks for the advice. I’m still working on this per your guidance. I can’t get my debugging print statements to show any output though - can you suggest what I can do? Thanks!

Placing print("something") in the code and re-running the cell should show the output in the notebook cell's output area.
It's possible that you are having difficulty spotting your debug output because the stack trace gets printed along with it.

Your cache usage for sigmoid_backward is correct. The usage for linear_backward still needs to be fixed.

Please go through the notebook again to understand what linear_cache and activation_cache stand for.

I explained this on your other thread about this: just typing new code into a function cell and then calling the function again does nothing. You have to press Shift-Enter in the modified cell first so the interpreter loads your new code.

Can you elaborate on how to fix this error? I didn't edit the given boilerplate code. Why is the function linear_backward using the wrong cache? I also didn't get any errors while testing the linear_backward function on its own.

The bug is not in linear_backward: the bug is that you are calling linear_backward incorrectly. It just uses whatever cache you pass to it, right? So if the cache is wrong, where was that determined?
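The ValueError itself is just standard Python tuple-unpacking behavior. A minimal stand-alone reproduction (with placeholder strings standing in for the actual cache contents):

```python
linear_cache = ("A_prev", "W", "b")        # placeholder for the 3-tuple linear cache
activation_cache = "Z"                     # placeholder for the activation cache
cache = (linear_cache, activation_cache)   # the 2-tuple the forward pass returns

try:
    A_prev, W, b = cache                   # wrong: the outer tuple has only 2 items
except ValueError as e:
    print(e)                               # not enough values to unpack (expected 3, got 2)

linear_cache, activation_cache = cache     # right: split the outer 2-tuple first
A_prev, W, b = linear_cache                # now the 3-way unpack succeeds
```

So the fix belongs at the call site: unpack the outer tuple first, then hand linear_backward only the part it documents as its cache.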

Thank you, I got it sorted out.