# Week 2 - Exercise 8

My code passed all tests in exercises 1-8, including `model_test(model)`, but after that, running the cell:
```python
logistic_regression_model = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations=2000, learning_rate=0.005, print_cost=True)
```

shows a message:

```
ValueError                                Traceback (most recent call last)
in
----> 1 logistic_regression_model = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations=2000, learning_rate=0.005, print_cost=True)

in model(X_train, Y_train, X_test, Y_test, num_iterations, learning_rate, print_cost)
     38 w, b = initialize_with_zeros(X_train.shape[0])
     39
---> 40 params, grads, costs = optimize(w, b, X_train, Y_test, num_iterations, learning_rate, print_cost=False)
     41
     42 w = params["w"]

in optimize(w, b, X, Y, num_iterations, learning_rate, print_cost)
     36 # YOUR CODE STARTS HERE
     37
---> 38 grads, cost = propagate(w, b, X, Y)
     39
     40 # YOUR CODE ENDS HERE

in propagate(w, b, X, Y)
     31
     32 A = 1/(1+np.exp(-(np.dot(w.T,X)+b)))
---> 33 cost = - np.sum(Y * np.log(A) + (1-Y) * np.log(1-A)) / m
     34
     35 # YOUR CODE ENDS HERE

ValueError: operands could not be broadcast together with shapes (1,50) (1,209)
```
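For what it's worth, that broadcast failure is easy to reproduce in isolation: 209 is the number of training examples and 50 the number of test examples, so `A` (computed from `X_train`) and `Y` (here actually `Y_test`) have different lengths. A minimal sketch with random stand-in data (not the course data):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 0.9, size=(1, 209))  # activations computed from X_train (209 examples)
Y = rng.integers(0, 2, size=(1, 50))      # labels taken from Y_test (50 examples)

try:
    # elementwise product requires compatible shapes, but (1,50) vs (1,209)
    # cannot be broadcast together
    cost = -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / 209
except ValueError as e:
    print(e)
```

The fix is not in the cost formula itself but in passing a `Y` that matches the `X` used to compute `A`.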

Hi @jawart, I think the cost function calculation needs to be reviewed. Pay particular attention to the matrix transpositions, so that the matrix products are dimensionally correct. Hope this helps.

Hi,
how is that possible, if it is the same cost function formula that gave the correct output in Exercise 5 and passed the tests there?

Hi again, I have found the reason.
It was `Y_test` instead of `Y_train` as the function argument.


Right, you can see the bug in the code you showed in your original post here. Also note that there's another bug in that call to optimize from model: since `print_cost` is hard-coded to `False`, you will never print the cost values, even if the caller asks for them.
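A tiny illustration of that second bug, with toy stand-in functions (not the notebook's code): hard-coding the flag silently discards whatever the caller asked for.

```python
def optimize_stub(print_cost=False):
    # stand-in for optimize(): just echoes the flag it receives
    return print_cost

def model_hardcoded(print_cost=True):
    return optimize_stub(print_cost=False)       # bug: caller's flag is ignored

def model_forwarded(print_cost=True):
    return optimize_stub(print_cost=print_cost)  # fix: pass the flag through

print(model_hardcoded(print_cost=True))   # False - costs would never print
print(model_forwarded(print_cost=True))   # True
```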

In Exercise 8:

1. where are we reading the w, b, X_test/train and Y_test/train data from?
2. `params, grads, costs = optimize(w, b, X, Y, num_iterations, learning_rate, print_cost=True)` - do the input parameters (w, b, X, Y) change for the train and test cases?
3. `Y_prediction_train = predict(w, b, X_train)` - similarly, which w and b are inputs to the predict function?
4. Is the solution only in 4 lines of code (as shown in the comments)?
Thank you!
1. When you are implementing the model function, the X_train, Y_train, X_test and Y_test values are simply passed to you as parameters; just use them as passed. If you want to know where they came from, they are either synthetic values created for the purposes of testing your code, or the real image data. See the early sections of the notebook for how they are read in and prepared for use by the model function. The w and b values are initialized to zeros by calling the initialization routine that you wrote earlier.
2. When you call optimize from model, you just pass in the current values of all the parameters. It's a mistake to "hard-code" the value of `print_cost` as you show: what if the test case passes `False`?
3. The w and b you use for the predictions are the trained values that you get back from calling optimize, right?
4. Those “number of lines” are just suggestions, so don’t stress if your code takes more lines. The thing to stress about is when the test cases fail.
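Putting those four answers together, here is a self-contained sketch of the overall flow, with minimal stand-in helpers and my own naming where the notebook is not quoted above, so treat it as illustrative rather than the official solution:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def initialize_with_zeros(dim):
    # one weight per feature, scalar bias
    return np.zeros((dim, 1)), 0.0

def propagate(w, b, X, Y):
    m = X.shape[1]
    A = sigmoid(np.dot(w.T, X) + b)
    cost = -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m
    dw = np.dot(X, (A - Y).T) / m
    db = np.sum(A - Y) / m
    return {"dw": dw, "db": db}, cost

def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost=False):
    costs = []
    for i in range(num_iterations):
        grads, cost = propagate(w, b, X, Y)
        w = w - learning_rate * grads["dw"]
        b = b - learning_rate * grads["db"]
        if i % 100 == 0:
            costs.append(cost)
            if print_cost:
                print(f"Cost after iteration {i}: {cost}")
    return {"w": w, "b": b}, grads, costs

def predict(w, b, X):
    A = sigmoid(np.dot(w.T, X) + b)
    return (A > 0.5).astype(float)

def model(X_train, Y_train, X_test, Y_test, num_iterations=2000,
          learning_rate=0.005, print_cost=False):
    # 1. initialize for the number of *features* (rows of X_train)
    w, b = initialize_with_zeros(X_train.shape[0])
    # 2. train once, on the training data, forwarding print_cost
    params, grads, costs = optimize(w, b, X_train, Y_train,
                                    num_iterations, learning_rate, print_cost)
    # 3. predict with the *trained* w and b, on both sets
    w, b = params["w"], params["b"]
    Y_prediction_train = predict(w, b, X_train)
    Y_prediction_test = predict(w, b, X_test)
    return {"costs": costs, "Y_prediction_train": Y_prediction_train,
            "Y_prediction_test": Y_prediction_test, "w": w, "b": b}
```

Note that optimize is called exactly once, with the training data only, and that `print_cost` is forwarded rather than hard-coded.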

Really appreciate your explanations! I initialize w and b, and then call optimize twice (once with (X_train, Y_train) and once with (X_test, Y_test)). I do this because I then call the predict function to get the predicted y, using `Y_prediction_test = predict(w, b, X_test)`.

## I encounter the following error:

```
ValueError                                Traceback (most recent call last)
in
      1 from public_tests import *
      2
----> 3 model_test(model)

~/work/release/W2A2/public_tests.py in model_test(target)
    113 y_test = np.array([0, 1, 0])
    114
--> 115 d = target(X, Y, x_test, y_test, num_iterations=50, learning_rate=0.01)
    116
    117 assert type(d['costs']) == list, f"Wrong type for d['costs']. {type(d['costs'])} != list"

in model(X_train, Y_train, X_test, Y_test, num_iterations, learning_rate, print_cost)
     35 # YOUR CODE STARTS HERE
     36 w, b = initialize_with_zeros(dim)
---> 37 params, grads, costs = optimize(w, b, X_test, Y_test, num_iterations, learning_rate, print_cost)
     38 w = params["w"]
     39 b = params["b"]

in optimize(w, b, X, Y, num_iterations, learning_rate, print_cost)
     35 # grads, cost = ...
     36 # YOUR CODE STARTS HERE
---> 37 grads, cost = propagate(w, b, X, Y)
     38
     39 # YOUR CODE ENDS HERE

in propagate(w, b, X, Y)
     29 # cost = ...
     30 # YOUR CODE STARTS HERE
---> 31 A = sigmoid(np.dot(w.T,X)+b)
     32
     33 cost= -1*(np.dot(Y, np.log(A.T)) + np.dot((1-Y), np.log(1-A.T)))/m

<__array_function__ internals> in dot(*args, **kwargs)

ValueError: shapes (1,2) and (4,3) not aligned: 2 (dim 1) != 4 (dim 0)
```

Could I get some suggestions on how to troubleshoot it? Thank you

It is a mistake to call optimize twice. You don’t need to call it with X_test. The point is that you train on the training data (X_train and Y_train). Then you compute the predictions using the trained model (w and b) on both X_train and X_test to see how the model performs.
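Once you have the two prediction vectors, checking how the model performs is just a comparison against the labels. A sketch of that last step on toy data; the "100 minus mean absolute difference" accuracy formula matches what I recall the notebook printing, so treat it as an assumption:

```python
import numpy as np

# Toy labels and predictions of shape (1, m), values in {0, 1}
Y_train = np.array([[1, 0, 1, 1, 0]])
Y_prediction_train = np.array([[1, 0, 1, 0, 0]])  # one mistake out of five

# Each mismatch contributes |pred - label| = 1, so the mean is the error rate
train_accuracy = 100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100
print(train_accuracy)  # 80.0
```

You would compute the same quantity with `Y_prediction_test` and `Y_test` to compare train vs. test performance.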