Week 4 programming assignment 1, exercise 8

When you have `dA_prev, dW, db = …`, I know this is not assigning all 3 variables the same value from the “…”. What is it doing exactly?

It depends on the code you are writing: you might be assigning the same value to different variables, or you might be unpacking the multiple outputs of a function into separate variables.
Hope you got it.

That does help me understand that.
I keep getting `ValueError: not enough values to unpack (expected 3, got 2)`. It points to the `dA_prev, dW, db = …` line.

I think you should read the suggestions in the code comments and make sure you are following them.

The ellipses (the ...) are there as placeholders for code that you are going to fill in. In other words, you will overwrite those with your code; they do not belong in your finished assignment.

In your example, the expression on the right-hand side (typically a function call) should return three values, one for each comma-delimited name on the left-hand side. If the function on the right-hand side returns only two values, then you have made a mistake.
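For example, here is how Python's tuple unpacking behaves (a toy sketch; `returns_three_values` is a made-up name for illustration, not one of the assignment's functions):

```python
import numpy as np

# A made-up stand-in for illustration only (not the assignment's function).
def returns_three_values():
    dA_prev = np.zeros((3, 4))
    dW = np.zeros((2, 3))
    db = np.zeros((2, 1))
    return dA_prev, dW, db  # Python packs these into a 3-tuple

# Tuple unpacking: each name on the left receives one returned value.
dA_prev, dW, db = returns_three_values()

# If the right-hand side produces only 2 values, the same kind of
# assignment raises the error you are seeing:
try:
    dA_prev, dW, db = (np.zeros((3, 4)), np.zeros((2, 3)))  # only 2 values
except ValueError as e:
    print(e)  # not enough values to unpack (expected 3, got 2)
```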

It seems that `linear_backward` should return 3 values, since at the bottom of that function it says `return dA_prev, dW, db`. But I keep getting:
```
ValueError                                Traceback (most recent call last)
in <module>
      1 t_dAL, t_linear_activation_cache = linear_activation_backward_test_case()
      2
----> 3 t_dA_prev, t_dW, t_db = linear_activation_backward(t_dAL, t_linear_activation_cache, activation = "sigmoid")
      4 print("With sigmoid: dA_prev = " + str(t_dA_prev))
      5 print("With sigmoid: dW = " + str(t_dW))

in linear_activation_backward(dA, cache, activation)
     35
     36     dZ = sigmoid_backward(dA, activation_cache)
---> 37     dA_prev, dW, db = linear_backward(dZ, cache)
     38
     39     # YOUR CODE ENDS HERE

in linear_backward(dZ, cache)
     14     db -- Gradient of the cost with respect to b (current layer l), same shape as b
     15     """
---> 16     A_prev, W, b = cache
     17     m = A_prev.shape[1]
     18

ValueError: not enough values to unpack (expected 3, got 2)
```

Have all functions passed their tests prior to Ex. 8?

The traceback indicates that the `linear_backward` function is the proximate cause of the problem. Specifically, there are not enough objects in the cache: there are only 2, and the unpacking expects 3. That leads us back to the `L_model_forward` function, where the forward propagation values are cached. These are needed to evaluate the gradients during backward propagation. That, in turn, points back to the `linear_activation_forward` function, which is called by `L_model_forward`.
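For reference, here is a minimal sketch of how those caches get nested during forward propagation. The function names match the notebook, but the bodies below are simplified stand-ins, and the sketch assumes the assignment's convention that `linear_cache` holds `(A_prev, W, b)` while `activation_cache` holds `Z`:

```python
import numpy as np

def linear_forward(A_prev, W, b):
    # linear_cache holds the 3 objects that linear_backward later unpacks
    Z = W @ A_prev + b
    linear_cache = (A_prev, W, b)
    return Z, linear_cache

def linear_activation_forward(A_prev, W, b, activation):
    Z, linear_cache = linear_forward(A_prev, W, b)
    if activation == "sigmoid":
        A = 1.0 / (1.0 + np.exp(-Z))
    else:  # "relu"
        A = np.maximum(0, Z)
    activation_cache = Z  # what sigmoid_backward / relu_backward need
    # The combined cache is a 2-tuple, which is why unpacking it directly
    # into (A_prev, W, b) fails with "expected 3, got 2".
    cache = (linear_cache, activation_cache)
    return A, cache
```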

So again, I would be keen to know if all of the previous functions have passed their tests.

All of a sudden none of the functions are saying that I passed. But, there is no error message on any of them. It’s like it’s not testing my code at all. That being said, I do remember all of the tests passing before.

Examine the output that your functions are producing versus the “Expected Output.” That will give some clues as to what might be going wrong.

All of the previous functions say the same thing as the expected output followed by “All tests passed.”

Great. To be clear, you are back to your original post with the traceback produced by Exercise 8?

Yes, it's exactly the same traceback as in my original post: `ValueError: not enough values to unpack (expected 3, got 2)`, raised at `A_prev, W, b = cache` inside `linear_backward`.

OK, thanks! Note that `cache`, an argument to the `linear_activation_backward()` function, is unpacked into two separate caches in the function’s first statement: `linear_cache` and `activation_cache`.

Your traceback shows that you are passing the whole `cache` to `linear_backward` rather than the `linear_cache` (where `A_prev`, `W`, and `b` are stored; the `activation_cache` holds `Z`). In backpropagation, derivatives need to be evaluated at the values saved during forward propagation, so each backward helper needs its matching piece of the cache.
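In other words, the fix is to split `cache` first and hand each piece to the function that expects it. A minimal sketch of the intended shape, mirroring the skeleton visible in your traceback and assuming the notebook's helpers `sigmoid_backward`, `relu_backward`, and `linear_backward` are in scope:

```python
def linear_activation_backward(dA, cache, activation):
    # First statement: split the 2-tuple into its two parts
    linear_cache, activation_cache = cache

    if activation == "sigmoid":
        dZ = sigmoid_backward(dA, activation_cache)  # activation_cache holds Z
    elif activation == "relu":
        dZ = relu_backward(dA, activation_cache)

    # Pass linear_cache (A_prev, W, b), not the whole cache, so that
    # "A_prev, W, b = cache" inside linear_backward unpacks 3 objects.
    dA_prev, dW, db = linear_backward(dZ, linear_cache)
    return dA_prev, dW, db
```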

You are almost there! :+1: