I am implementing L_model_forward and use two different calls depending on whether the activation is in the hidden layers or at the output layer: ReLU in the hidden layers and sigmoid at the output layer. I retrieve the parameters 'W' and 'b' from the parameters dictionary using the l index from the for loop, except in the sigmoid call, where I use L because it is the last layer. I also use A_prev for the hidden-layer activations but A for the output layer.
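For reference, here is a minimal sketch of the structure I am describing (simplified, not my exact code; it assumes the course's linear_activation_forward helper shown in the traceback below):

def L_model_forward(X, parameters):
    caches = []
    A = X
    L = len(parameters) // 2              # each layer has one W and one b

    # Hidden layers 1 .. L-1 use ReLU; each layer's A feeds the next
    for l in range(1, L):
        A_prev = A
        A, cache = linear_activation_forward(A_prev,
                                             parameters['W' + str(l)],
                                             parameters['b' + str(l)],
                                             activation="relu")
        caches.append(cache)

    # Output layer L applies sigmoid to the last hidden layer's activations
    AL, cache = linear_activation_forward(A,
                                          parameters['W' + str(L)],
                                          parameters['b' + str(L)],
                                          activation="sigmoid")
    caches.append(cache)

    return AL, caches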
I am receiving an error that my shapes do not match the expected output. The shapes are close to what is expected, but I cannot tell where the mismatch is coming from. I suspected I was appending incorrectly, but from other documentation it looks OK.
The error appears to stem from the sigmoid activation part of the loop. Might it be a combination of wrong A, W, b, or L values? Please advise:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-96-10fc901e800a> in <module>
1 t_X, t_parameters = L_model_forward_test_case_2hidden()
----> 2 t_AL, t_caches = L_model_forward(t_X, t_parameters)
3
4 print("AL = " + str(t_AL))
5
<ipython-input-95-b5091027b90c> in L_model_forward(X, parameters)
43 parameters['W' + str(L)],
44 parameters['b' + str(L)],
---> 45 activation = "sigmoid")
46 caches = caches.append(AL)
47
<ipython-input-9-5c560d8b0806> in linear_activation_forward(A_prev, W, b, activation)
23 # YOUR CODE STARTS HERE
24
---> 25 Z, linear_cache = linear_forward(A_prev,W,b)
26 A, activation_cache = sigmoid(Z)
27
<ipython-input-7-ff417d082cca> in linear_forward(A, W, b)
18 # Z = ...
19 # YOUR CODE STARTS HERE
---> 20 Z = np.dot(W,A)+b
21
22 # YOUR CODE ENDS HERE
<__array_function__ internals> in dot(*args, **kwargs)
ValueError: shapes (1,3) and (4,4) not aligned: 3 (dim 1) != 4 (dim 0)
Expected output:
AL = [[0.03921668 0.70498921 0.19734387 0.04728177]]
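For context, here is a tiny numpy snippet (hypothetical arrays, not the course's test data) reproducing the alignment rule behind the error: np.dot(W, A) requires W.shape[1] == A.shape[0], so a (1,3) W can only multiply an A with 3 rows. The (4,4) array reaching the sigmoid call therefore seems to have the shape of an earlier layer's activations rather than the last hidden layer's output.

import numpy as np

W = np.zeros((1, 3))        # last-layer weights: 1 output unit, 3 inputs
A_good = np.zeros((3, 4))   # activations with 3 rows -> shapes align
A_bad = np.zeros((4, 4))    # an earlier layer's activations -> mismatch

print(np.dot(W, A_good).shape)  # (1, 4)
try:
    np.dot(W, A_bad)
except ValueError as e:
    print(e)  # shapes (1,3) and (4,4) not aligned: 3 (dim 1) != 4 (dim 0)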