W2_A2_TypeError_Model_Test(Target)

Can somebody help me? All the other tests pass, but the final model implementation throws an error.


TypeError                                 Traceback (most recent call last)
in <module>
      1 from public_tests import *
      2
----> 3 model_test(model)

~/work/release/W2A2/public_tests.py in model_test(target)
    123     y_test = np.array([0, 1, 0])
    124
--> 125     d = target(X, Y, x_test, y_test, num_iterations=50, learning_rate=0.01)
    126
    127     assert type(d['costs']) == list, f"Wrong type for d['costs']. {type(d['costs'])} != list"

in model(X_train, Y_train, X_test, Y_test, num_iterations, learning_rate, print_cost)
     34
     35     # YOUR CODE STARTS HERE
---> 36     w, b = initialize_with_zeros(train_set_x[0])
     37     #print('w :', w.shape)
     38     #print('b :', b.shape)

in initialize_with_zeros(dim)
     18     # YOUR CODE STARTS HERE
     19
---> 20     w = np.zeros([dim, 1])
     21     b = 0.0
     22     # YOUR CODE ENDS HERE

TypeError: only integer scalar arrays can be converted to a scalar index

There are two problems with the way you are calling initialize_with_zeros from model:

  1. You are referencing a global variable train_set_x, instead of the actual local parameter that was passed to model.
  2. You need to pass the size of the first dimension of the input X training set, but you are actually passing the contents of the first dimension (see the sketch just below this list).
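A minimal sketch of the intended call, assuming the initialize_with_zeros(dim) definition shown in the traceback; X_train_demo is a hypothetical stand-in for the real training set:

import numpy as np

# Definition as shown in the traceback: dim must be a plain integer.
def initialize_with_zeros(dim):
    w = np.zeros((dim, 1))
    b = 0.0
    return w, b

# Inside model(X_train, Y_train, ...): use the local parameter, not a global,
# and pass the size of the first dimension, not its contents.
X_train_demo = np.random.randn(4, 7)                  # hypothetical stand-in for the real data
w, b = initialize_with_zeros(X_train_demo.shape[0])   # shape[0] == 4, an integer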

Hi there,

Please check the way you are computing w. The TypeError points to the same issue.

Thank you both for your kind replies, but I am not getting it.

The error:


ValueError                                Traceback (most recent call last)
in <module>
      1 from public_tests import *
      2
----> 3 model_test(model)

~/work/release/W2A2/public_tests.py in model_test(target)
    123     y_test = np.array([0, 1, 0])
    124
--> 125     d = target(X, Y, x_test, y_test, num_iterations=50, learning_rate=0.01)
    126
    127     assert type(d['costs']) == list, f"Wrong type for d['costs']. {type(d['costs'])} != list"

in model(X_train, Y_train, X_test, Y_test, num_iterations, learning_rate, print_cost)
     37     print('w :', w.shape)
     38     #print('b :', b.shape)
---> 39     params, grads, costs = optimize(w, b, X, Y, num_iterations=100, learning_rate=0.009, print_cost=False)
     40     #params = {"w": w}
     41     #params = {"b": b}

in optimize(w, b, X, Y, num_iterations, learning_rate, print_cost)
     35     # grads, cost = ...
     36     # YOUR CODE STARTS HERE
---> 37     grads, cost = propagate(w, b, X, Y)
     38
     39     # YOUR CODE ENDS HERE

in propagate(w, b, X, Y)
     30     # cost = ...
     31     # YOUR CODE STARTS HERE
---> 32     A = sigmoid(np.dot(w.T, X) + b)
     33     #print('A :', A.shape)
     34     cost = -1/m * (np.dot(Y, np.log(A).T) + np.dot((1-Y), np.log(1-A).T))

<__array_function__ internals> in dot(*args, **kwargs)

ValueError: shapes (1,4) and (2,3) not aligned: 4 (dim 1) != 2 (dim 0)

What I have done:

w, b = initialize_with_zeros(X_train.shape[0])   # passed the shape
w : (4, 1)                                       # the printed dimension of w
Y_prediction_test = predict(w, b, Y_test)
Y_prediction_train = predict(w, b, Y_train)

I am not understanding it.

Thanks again.

Hi @Usama_Ali2 ,

A couple of observations are noted from the error output you posted:

  1. When calling optimize(), num_iterations and learning_rate are hard-coded. The unit test sets these parameters to 50 and 0.01 respectively. When making the function call, passing the parameters through is the best way to ensure your function works for whatever values are given.
  2. There is a problem with w and X. To get an idea of what is going on, put a print statement at the start of propagate() and find out what values of w and X are passed into the function (see the sketch after this list). Be sure to restart the kernel and rerun the code so that the execution environment is clean.
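As a concrete illustration of point 2, here is a hypothetical set of debug lines you could drop in temporarily at the top of propagate(w, b, X, Y) and remove once the shapes look right; the expected shapes in the comments come from the model test data:

# Temporary debug output inside propagate(w, b, X, Y) -- remove after checking.
print('w shape:', w.shape)   # expected (n_x, 1), e.g. (4, 1) for the model test
print('X shape:', X.shape)   # expected (n_x, m), e.g. (4, 7) for the model test training data
print('Y shape:', Y.shape)   # expected (1, m),   e.g. (1, 7)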

Hi Kic,

Thanks for you reply,

the dimensions are:

w : (2, 1) <-- before propagate
X : (2, 3)

And the output of propagate is:

I can't understand why there are two sets of shapes printed for dw and X. Can you please take a look?
X : (2, 3)
dw : (2, 1)
dw = [[ 0.25071532]
[-0.06604096]]
db = -0.1250040450043965
cost = 0.15900537707692405
X : (3, 4)
dw : (3, 1)
All tests passed!

Then finally in model:

w : (4, 1)
X_train : (4, 7)
X_test : (4, 3)
Y_train : (1, 7)
Y_test : (3,)
X : (2, 3)

Please guide me.

Hello Usama,

Be careful with the inner dimensions you are using. For a matrix product, the inner dimensions must always match (see the sketch below).
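For example, a quick self-contained sketch of the dimension rule with np.dot, using the shapes from the model test (the arrays here are random placeholders, not the course data):

import numpy as np

w = np.zeros((4, 1))          # (n_x, 1)
X = np.random.randn(4, 7)     # (n_x, m), as in the model test

Z = np.dot(w.T, X)            # (1, 4) dot (4, 7): inner dims 4 == 4, result is (1, 7)
# np.dot(w.T, np.random.randn(2, 3)) would raise an error: inner dims 4 != 2,
# which is exactly the "shapes (1,4) and (2,3) not aligned" ValueError above.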

Hi @Usama_Ali2 ,

There are two tests for the propagate() function. In the cell just below the propagate() definition, you can see these two lines of code:
grads, cost = propagate(w, b, X, Y)
propagate_test(propagate)
and for testing the model():
model_test(model)

Both propagate_test() and model_test() can be found in the module public_tests.py. You can open it by clicking the File tab at the top of the menu bar to open the file directory, and you will find public_tests.py there. This file is imported into the namespace of the currently running lab assignment, so the functions in the public_tests module are available in the current namespace.
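In other words, assuming the cells quoted above, the testing pattern in the notebook is simply:

from public_tests import *     # brings propagate_test and model_test into the notebook namespace
propagate_test(propagate)      # unit-tests your propagate implementation
model_test(model)              # unit-tests your model implementation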

Hi,

shapes (1,4) and (2,3) not aligned…

As in the code:
A : (1, 3)
X: (2, 3)

A-X → (ypred-y)
I just don’t know why A becomes (1,4)

Hi @Usama_Ali2 ,

These are the data used for testing model():

# Use 3 samples for training

b, Y, X = 1.5, np.array([1, 0, 0, 1, 0, 0, 1]).reshape(1, 7), np.random.randn(4, 7),

# Use 6 samples for testing
x_test = np.random.randn(4, 3)
y_test = np.array([0, 1, 0])

So how did you manage to get (1, 4) and (2, 3)?

Hi,

I see
dim = 3 --initialize_with_zeros_test_1
dim = 4 --initialize_with_zeros_test_2

w : (4, 1), and w.T gives (1, 4) -- in the sigmoid function
X : (2, 3)

🙁

Hi @Usama_Ali2 ,

The dimension of w, (4, 1), is correct, but how did X get the shape (2, 3)? As you can see from the model test data, X contains randomly generated values with dimensions (4, 7). Are you passing the correct parameters to optimize()? You should only pass the input arguments of model() to optimize() (see the sketch below).
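For reference, a minimal sketch of the kind of call that is meant here, assuming the optimize(w, b, X, Y, num_iterations, learning_rate, print_cost) signature shown in the tracebacks; only the parameters of model itself are used, and nothing is hard-coded:

# Inside model(X_train, Y_train, X_test, Y_test, num_iterations, learning_rate, print_cost):
params, grads, costs = optimize(w, b, X_train, Y_train,
                                num_iterations=num_iterations,
                                learning_rate=learning_rate,
                                print_cost=print_cost)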


Yes, we can see the bug in the most recent exception trace: you are passing X and Y to optimize, but those variables are not defined in the local scope of the function model, so you pick up global values that have nothing to do with the actual input data that was passed in. A small demonstration of this scope pitfall is sketched below.
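To illustrate the pitfall with a self-contained toy example (the names here are made up for illustration and are not from the assignment):

import numpy as np

X = np.random.randn(2, 3)             # a leftover global from an earlier test cell

def train(X_train):
    # BUG: 'X' is not defined locally, so Python silently falls back to the
    # global X above -- not the data that was actually passed in.
    return X.shape

def train_fixed(X_train):
    # Correct: use the parameter that was passed to the function.
    return X_train.shape

X_train_data = np.random.randn(4, 7)
print(train(X_train_data))            # (2, 3) -- the stale global
print(train_fixed(X_train_data))      # (4, 7) -- the actual input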


Thank you so much. The problem was the parameters I was passing to optimize and to predict for Y_prediction_test, inside the model function. I am really no expert with functions, and that is why things sometimes go badly for me.

I have a question:

cost = -1/m * (np.dot(Y, np.log(A).T) + np.dot((1 - Y), np.log(1 - A).T))   <-- the cost function; this works
It is working, but I am missing the np.sum:
cost = -1/m * np.sum(np.dot(Y, np.log(A).T) + np.dot((1 - Y), np.log(1 - A).T))   <-- gives an error with this

Current Error:

AssertionError: Wrong values for d['w'].
[[ 0.14449502]
 [-0.1429235 ]
 [-0.19867517]
 [ 0.21265053]] != [[ 0.08639757]
 [-0.08231268]
 [-0.11798927]
 [ 0.12866053]]

The model's results:

train accuracy: 68.42105263157895 test accuracy: 34.0

Still not getting the plot of costs.

Thank you very much. That was the issue, indeed.

I think those accuracy results are what you get if your predicted values are all 0. And notice that the error has nothing to do with the costs: it is that the training produces the wrong w values.

Are you still setting print_cost = False in the call to optimize? Not that that would affect the w value, of course.
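To add to the point about the cost: the two expressions quoted above compute the same number. With Y and A both of shape (1, m), each np.dot already sums over the m examples, so wrapping the result in np.sum only collapses a (1, 1) array to a plain scalar. A small self-contained check with made-up values (not the grader's data):

import numpy as np

m = 3
Y = np.array([[1, 0, 1]])            # (1, m) labels, invented for illustration
A = np.array([[0.8, 0.3, 0.6]])      # (1, m) hypothetical sigmoid outputs

cost_dot = -1/m * (np.dot(Y, np.log(A).T) + np.dot((1 - Y), np.log(1 - A).T))
cost_sum = -1/m * np.sum(np.dot(Y, np.log(A).T) + np.dot((1 - Y), np.log(1 - A).T))

print(cost_dot.shape)                                # (1, 1): the dot products already summed over m
print(np.allclose(np.squeeze(cost_dot), cost_sum))   # True: np.sum only changes the shape, not the value

So whichever form is used, the cost value is the same; the AssertionError about d['w'] comes from the training itself, as noted above.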

You say that you have a problem with functions in general. They are absolutely central to everything we do here, so you might want to spend a bit of time learning more about them, either by taking a Python course first or at least by working through some online Python tutorials.

Thanks for the advice, I am doing that too. Is there any material you might recommend for reading about Python functions?

It's a little hard to know what to recommend, since it depends a lot on what other background you already have. If this is literally your first experience with any kind of programming, then I'd recommend you actually take a formal class that teaches intro to Python. There are a number of those on Coursera and other places. If you already know other languages like JavaScript or C# or C++ and have some experience actually using them, then just googling "python function call" and "python scope model" and reading a couple of the pages you find would be a good place to start.