In the lab ‘Building_your_deep_neural_network_step_by_step’, I am attempting Exercise 4, which has you activate a layer based on whether the call is for a linear or a ReLU activation.
I am using np.dot() to multiply W with the input layer A_prev and then add the bias b, but I get this error that I don’t know how to troubleshoot. It says it expected 2 values to unpack but only got 1.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-24-e7497d29958b> in <module>
1 t_A_prev, t_W, t_b = linear_activation_forward_test_case()
2
----> 3 t_A, t_linear_activation_cache = linear_activation_forward(t_A_prev, t_W, t_b, activation = "sigmoid")
4 print("With sigmoid: A = " + str(t_A))
5
<ipython-input-23-22d833a4ad11> in linear_activation_forward(A_prev, W, b, activation)
22 # A, activation_cache = ...
23 # YOUR CODE STARTS HERE
---> 24 Z, linear_cache = np.dot(W,A_prev)+b
25 A, activation_cache = sigmoid(Z)
26 # YOUR CODE ENDS HERE
ValueError: not enough values to unpack (expected 2, got 1)
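A minimal sketch of what I think is going on, with made-up shapes just for illustration: np.dot(W, A_prev) + b produces a single array, so Python cannot unpack it into the two variables Z and linear_cache. The fix I am guessing at (based on how the lab seems to structure caches) is to assign Z alone, and build the cache separately:

```python
import numpy as np

# Illustrative shapes only -- not the actual test-case values from the lab.
W = np.array([[0.5, -0.3]])        # shape (1, 2)
A_prev = np.array([[1.0], [2.0]])  # shape (2, 1)
b = np.array([[0.1]])              # shape (1, 1)

try:
    # The right-hand side is ONE array of shape (1, 1), not a (Z, cache)
    # pair, so unpacking into two names raises ValueError.
    Z, linear_cache = np.dot(W, A_prev) + b
except ValueError as e:
    print(e)  # not enough values to unpack (expected 2, got 1)

# What I believe the exercise intends: compute Z on its own, and keep the
# cache as a separate tuple (or call the lab's linear_forward helper,
# which I understand returns both Z and the cache).
Z = np.dot(W, A_prev) + b
linear_cache = (A_prev, W, b)
print(Z)  # [[0.]] for these illustrative values
```

Is that the right way to read this error, or am I missing something about how the cache is supposed to be produced?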