C1W4_E4-linear_activation_forward 'not enough values to unpack'

In the lab ‘Building_your_deep_neural_network_step_by_step’, I am attempting Exercise 4, which has you compute a layer's activation based on whether the call is for a sigmoid or a ReLU activation.

I am using np.dot() to multiply W with the input layer A_prev, plus the bias b, but I get this error that I don’t know how to troubleshoot. The unpacking expects 2 values but only gets 1.

 ---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-24-e7497d29958b> in <module>
      1 t_A_prev, t_W, t_b = linear_activation_forward_test_case()
      2 
----> 3 t_A, t_linear_activation_cache = linear_activation_forward(t_A_prev, t_W, t_b, activation = "sigmoid")
      4 print("With sigmoid: A = " + str(t_A))
      5 

<ipython-input-23-22d833a4ad11> in linear_activation_forward(A_prev, W, b, activation)
     22         # A, activation_cache = ...
     23         # YOUR CODE STARTS HERE
---> 24         Z, linear_cache = np.dot(W,A_prev)+b
     25         A, activation_cache = sigmoid(Z)
     26         # YOUR CODE ENDS HERE

ValueError: not enough values to unpack (expected 2, got 1)

Does this link on tuple unpacking help?


The link shared by @balaji.ambresh has a good explanation. Please note that np.dot() does not produce a tuple. Instead, it returns a single NumPy array. This is why Python throws a ValueError: not enough values to unpack (expected 2, got 1).
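To see the mechanism concretely, here is a minimal sketch (with made-up small arrays, not the test-case values) showing that the result of np.dot(W, A_prev) + b is one ndarray, so two-target unpacking iterates over its single row and fails:

```python
import numpy as np

# Toy shapes only for illustration: W is (1, 2), A_prev is (2, 1), b is (1, 1)
W = np.array([[1.0, 2.0]])
A_prev = np.array([[0.5], [0.5]])
b = np.array([[0.1]])

result = np.dot(W, A_prev) + b
print(type(result))  # a single numpy.ndarray, not a (Z, cache) tuple

try:
    # Unpacking a (1, 1) array into two names iterates over its one row
    Z, linear_cache = result
except ValueError as e:
    print(e)  # not enough values to unpack (expected 2, got 1)
```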

Yes, the problem is that you are not supposed to be doing the dot product directly there. You already wrote a function in an earlier exercise that performs the linear step, right? And note that it returns two values: Z and the linear cache.
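The intended pattern looks roughly like the sketch below. The helper names (linear_forward, sigmoid) match the lab's signatures, but the bodies here are my own illustrative assumption, not the lab's solution code:

```python
import numpy as np

def sigmoid(Z):
    # Returns the activation AND a cache of Z, mirroring the lab's
    # helper signature (an assumption about its exact return values)
    return 1 / (1 + np.exp(-Z)), Z

def linear_forward(A_prev, W, b):
    # The earlier exercise's linear step: returns Z plus a cache,
    # which is why it can be unpacked into two names
    Z = np.dot(W, A_prev) + b
    return Z, (A_prev, W, b)

def linear_activation_forward(A_prev, W, b, activation):
    # Call the helper instead of np.dot() directly; it returns two values
    Z, linear_cache = linear_forward(A_prev, W, b)
    if activation == "sigmoid":
        A, activation_cache = sigmoid(Z)
    elif activation == "relu":
        A, activation_cache = np.maximum(0, Z), Z
    return A, (linear_cache, activation_cache)
```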


This is true.

I called the function that was developed in an earlier step and it works, because that one returns two values, not one. Thank you.
