Hello,
The errors below were observed in Exercise 1.
ASSUMPTION:
Since "activation" was not pre-declared, "sigmoid" is used in each case:
Forward propagation (A1 and A2).
Backward propagation (dA1 and dA2).
Please advise whether this choice of activation is correct.
If needed, I can share the full code; a toy illustration of the assumption and the traceback are included below.
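To make the assumption concrete, here is a toy two-layer forward/backward pass that uses sigmoid at both layers. The variable names and dimensions are made up for illustration only and are not the notebook's code.

```python
import numpy as np

def sigmoid(Z):
    # Element-wise logistic sigmoid.
    return 1 / (1 + np.exp(-Z))

# Toy dimensions (hypothetical, not the assignment's): 4 input features,
# 3 hidden units, 1 output unit, 2 examples.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 2))
W1, b1 = rng.standard_normal((3, 4)), np.zeros((3, 1))
W2, b2 = rng.standard_normal((1, 3)), np.zeros((1, 1))
Y = np.array([[1, 0]])

# Forward propagation with sigmoid for both A1 and A2 (the assumption above).
A1 = sigmoid(W1.dot(X) + b1)    # shape (3, 2)
A2 = sigmoid(W2.dot(A1) + b2)   # shape (1, 2)

# Backward propagation, again using the sigmoid derivative at both layers.
dA2 = -(np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))   # dL/dA2 for cross-entropy
dZ2 = dA2 * A2 * (1 - A2)       # sigmoid backward at layer 2
dA1 = W2.T.dot(dZ2)             # shape (3, 2)
dZ1 = dA1 * A1 * (1 - A1)       # sigmoid backward at layer 1
```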
Sincerely,
A
ValueError                                Traceback (most recent call last)
in <module>
----> 1 parameters, costs = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 2, print_cost=False)
      2
      3 print("Cost after first iteration: " + str(costs[0]))
      4
      5 two_layer_model_test(two_layer_model)

in two_layer_model(X, Y, layers_dims, learning_rate, num_iterations, print_cost)
     47     # linear_activation_forward
     48     A1, cache1 = linear_activation_forward(X, W1, b1, activation = "sigmoid")
---> 49     A2, cache2 = linear_activation_forward(X, W2, b2, activation = "sigmoid")
     50     # YOUR CODE ENDS HERE
     51

~/work/release/W4A2/dnn_app_utils_v3.py in linear_activation_forward(A_prev, W, b, activation)
    201     if activation == "sigmoid":
    202         # Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
--> 203         Z, linear_cache = linear_forward(A_prev, W, b)
    204         A, activation_cache = sigmoid(Z)
    205

~/work/release/W4A2/dnn_app_utils_v3.py in linear_forward(A, W, b)
    176     """
    177
--> 178     Z = W.dot(A) + b
    179
    180     assert(Z.shape == (W.shape[0], A.shape[1]))

ValueError: shapes (1,7) and (12288,209) not aligned: 7 (dim 1) != 12288 (dim 0)
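For reference, the shape mismatch in the ValueError can be reproduced on its own. The (1, 7) and (12288, 209) shapes are taken directly from the error message above; the array contents and the layer-size labels in the comments are placeholders/assumptions.

```python
import numpy as np

# Placeholder arrays with the shapes reported in the ValueError above.
W2 = np.zeros((1, 7))        # presumably (n_y, n_h) = (1, 7)
X = np.zeros((12288, 209))   # presumably (n_x, m) = (12288, 209)

# np.dot needs W2.shape[1] == X.shape[0]; here 7 != 12288, so this raises:
# ValueError: shapes (1,7) and (12288,209) not aligned: 7 (dim 1) != 12288 (dim 0)
Z = W2.dot(X)
```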
Expected output:
Cost after the first iteration should be around 0.69.