C1W2(ex.8) Type Error?

In trying to run the model in Ex. 8, I am getting the following error:

assert type(d['w']) == np.ndarray, f"Wrong type for d['w']. {type(d['w'])} != np.ndarray"

But when I ask the model function to print the types of w and d[‘w’] I get

<class 'numpy.ndarray'>
<class 'numpy.ndarray'>

So what might I be missing here?

Are you sure you get the same type in all the test cases? Notice that there are several different test cases there. I agree that based on what you show I would not expect that particular assertion to fail. So now we have something we need to explain.

It might also help if you could post the entire output that you are getting when you run that test.

Hi Paul,

Here is the full error msg:

~/work/release/W2A2/public_tests.py in model_test(target)
131 assert type(d['w']) == np.ndarray, f"Wrong type for d['w']. {type(d['w'])} != np.ndarray"
132 assert d['w'].shape == (X.shape[0], 1), f"Wrong shape for d['w']. {d['w'].shape} != {(X.shape[0], 1)}"
→ 133 assert np.allclose(d['w'], expected_output['w']), f"Wrong values for d['w']. {d['w']} != {expected_output['w']}"
135 assert np.allclose(d['b'], expected_output['b']), f"Wrong values for d['b']. {d['b']} != {expected_output['b']}"

I’m pretty sure everything is fine through the first 7 exercises, as all tests were passed.

I suspect the problem still has to do with the keyword arguments for the number of iterations and the learning rate. Over the course of the last few months I've learned enough about Python to know about keyword arguments in general. What I have not been able to determine, even after looking through my assortment of Python books, is whether or not a default value needs to be provided in the original definition of the function. I seem to be getting errors regardless of whether I use defaults (num_iterations=2000, learning_rate=0.009, print_cost=False) or not (num_iterations, learning_rate, print_cost=False).

A parameter to a function is either “positional” or “keyword”. If a parameter is positional, it is required in every invocation of the function and no default value is provided in the definition of the function. If it is a keyword parameter, it is optional and it must have a default value declared in the definition of the function. In fact, that's how you can recognize the difference: the keyword parameters have a default value in the function definition, right?

But notice that when you call a function, as opposed to define a function, you do not want to provide default values: you either pass through the value you want (no equal sign) or you omit the parameter altogether to get the default value from the function definition.

In your code here in the model function, when you call optimize, you must provide all the parameters, but not give a default value in the call, right? The reason is that the test case may or may not pass each of the parameters to model. Whatever they pass to model, you want to pass through to optimize.
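To make the define-versus-call distinction concrete, here is a minimal sketch. The functions `outer` and `inner` are hypothetical stand-ins of my own invention (not the assignment's actual `model` and `optimize`), just to show defaults living in the definitions while the call passes values through:

```python
def inner(num_iterations=100, learning_rate=0.009):
    # Defaults live HERE, in the definition.
    return num_iterations, learning_rate

def outer(num_iterations=2000, learning_rate=0.5):
    # Pass through whatever outer received -- no defaults in the call,
    # and explicit keywords make the intent unmistakable.
    return inner(num_iterations=num_iterations, learning_rate=learning_rate)

print(outer())                   # outer's defaults flow down: (2000, 0.5)
print(outer(num_iterations=50))  # the caller's value wins: (50, 0.5)
```

Whatever the caller passes to `outer` (or omits, triggering `outer`'s defaults) is what `inner` actually receives.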

Once you understand the syntax, then you also need to do the thinking to understand the meaning of the syntax when you actually apply it. So both syntax and semantics are critical.

Also notice that the error you are actually showing now has nothing to do with the error you showed in the original post on this thread. So you must have fixed or changed some things between then and now.

OK. I realize I am being extremely dense here.

The keyword parameters only show up in a few places.

In Exercise 6, when optimize is initially defined, we have num_iterations=100, learning_rate=0.009.

In the next cell, we have params, grads, costs = optimize(w, b, X, Y, num_iterations=100, learning_rate=0.009, print_cost=False)

These parameters are not used in Exercise 7 (predict)

In Exercise 8, the model function is defined with num_iterations=2000, learning_rate=0.5

Then, within the code I added, optimize is called with num_iterations, learning_rate.

Based on my putting a print(num_iterations) right after initializing b, the value that got passed was 50, which is a value I never assigned anywhere. So it seems like I am neither using the defaults nor passing through a hardcoded value.

Did you look at the test case function that is actually calling model? The code is all there for you to read, but some of the test functions are in a separate file called public_tests.py. You can open it by clicking “File → Open” from the notebook, which gives you a “File Explorer” view.

Yes, they only show up in the definitions of the functions. We should not be using them anywhere else for our purposes. The code we are writing needs to be general purpose code: that’s the point.

I repeat: “Defining a function is a completely different thing than calling a function.”

If I simply delete num_iterations and learning_rate from the call, I still get an error.

Additionally, that cell in Exercise 6 after the definition of optimize was not written by me, and in it, params, grads, costs = optimize(w, b, X, Y, num_iterations=100, learning_rate=0.009, print_cost=False) looks like the keyword arguments are being used after the definition.

But the point is that is the top level of the call stack and the purpose of that cell is to test optimize, so it makes sense to pass the values.

But when you call optimize from model, you are not the top of the call stack, right? So you need to obey what is being passed to you from above. Think about what it means if you don’t pass learning rate when you call optimize from model: what learning rate will actually be used? The default that was declared in the definition of optimize. Does that match what the test case is passing?

So what is the value of the learning rate locally within the scope of the model function? It is the variable learning_rate, right? A value for that is defined either by what the caller of model passed or by the default from the declaration of model. So then when we call optimize, we want to pass that value. There are two equivalent ways to do that:

<return values> = optimize(... , learning_rate = learning_rate, ...)

Or simply

<return values> = optimize(... , learning_rate, ...)

Notice that I’m trying hard here not to violate the rules by actually writing out the full solution code for you. I’m using “…” as placeholders for the other code you need to write there.

Of course the same thinking applies to the other keyword arguments to optimize: number of iterations and the print flag.
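A quick illustration of why the two call styles above are equivalent. The stub below is hypothetical (my own simplified signature mirroring optimize's keyword parameters), not the assignment's function:

```python
# Stub with keyword parameters like optimize's, to show that the
# positional and keyword pass-through styles forward the same values.
def optimize_stub(w, b, X, Y, num_iterations=100, learning_rate=0.009,
                  print_cost=False):
    return num_iterations, learning_rate

num_iterations, learning_rate = 2000, 0.5

# Positional pass-through (order must match the definition):
pos = optimize_stub(None, None, None, None, num_iterations, learning_rate)

# Explicit keyword pass-through (order no longer matters):
kw = optimize_stub(None, None, None, None,
                   learning_rate=learning_rate,
                   num_iterations=num_iterations)

print(pos == kw)   # True -- both forward the local values
```

The keyword form is harder to get wrong, since a positional call silently misassigns values if the argument order drifts from the definition.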

I do think I understand what you are getting at in terms of local and global (or last used vs. default).

For the definition of the model function, because the default value assigned to num_iterations is 2000 and the default value assigned to learning_rate is 0.5, those are the values that will be used by default by the call to the optimize function were I to write optimize(w, b, X, Y).

And if I assign values to those two parameters in the call to optimize, I am hardcoding and, as you say, that will not end well.

So that leaves simply calling optimize by naming all the parameters without assigning values to num_iterations and learning_rate; for the test, this will pass the values as specified in public_tests.py.

So when I use those same values in the call to optimize within my cell that defines model, why would model_test(model) throw the same error I have been getting pretty much all along:

AssertionError Traceback (most recent call last)
1 from public_tests import *
----> 3 model_test(model)

~/work/release/W2A2/public_tests.py in model_test(target)
131 assert type(d['w']) == np.ndarray, f"Wrong type for d['w']. {type(d['w'])} != np.ndarray"
132 assert d['w'].shape == (X.shape[0], 1), f"Wrong shape for d['w']. {d['w'].shape} != {(X.shape[0], 1)}"
→ 133 assert np.allclose(d['w'], expected_output['w']), f"Wrong values for d['w']. {d['w']} != {expected_output['w']}"
135 assert np.allclose(d['b'], expected_output['b']), f"Wrong values for d['b']. {d['b']} != {expected_output['b']}"

AssertionError: Wrong values for d['w']. [[ 0.00156082]
[ 0.00229691]] != [[ 0.08639757]
[ 0.12866053]]

I really do apologize for my inability to grasp this. I know it must be very frustrating for you in your role as a helpful tutor (having taught and tutored math).

No, that is not correct. If you call optimize that way from model, the default values used are NOT the ones declared in the definition of model. They are the ones declared in the definition of optimize, which are 100 and 0.009. You can experimentally prove this to yourself. Print the values down in optimize and watch what happens.
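Here is that experiment written out with hypothetical stubs (again my own stand-ins, not the assignment's functions), showing exactly whose defaults apply when the inner call omits the keyword arguments:

```python
def optimize_stub(num_iterations=100, learning_rate=0.009):
    # These defaults apply whenever the CALLER omits the arguments.
    return num_iterations, learning_rate

def model_stub(num_iterations=2000, learning_rate=0.5):
    # Calling with no arguments means optimize_stub's OWN defaults
    # apply -- model_stub's 2000 and 0.5 are never forwarded.
    return optimize_stub()

print(model_stub())   # (100, 0.009), not (2000, 0.5)
```

The defaults declared in `model_stub`'s definition only bind the local variables inside `model_stub`; they have no effect on `optimize_stub` unless they are explicitly passed along in the call.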

But the point is we don’t want to be using any default values here: we want to be using the values that were passed into model by the test case. Whatever those are. I demonstrated the syntax for how to do that.

Also note that using X and Y as the arguments to optimize in that instance will not end well. Those are not the local variables that contain the training data, right? They are X_train and Y_train.

Hi Paul,

I am uploading the notebook. It is not identical to what I submitted yesterday to get my 83% (all tests through Exercise 7 passed, errors on Exercise 8), because I was mucking around some more based on our correspondence today. Thanks for your help.


{moderator edit - solution code removed}

There are a number of problems with this code:

{moderator edit - solution code removed}

For starters, why not call initialize_with_zeros instead of rewriting the code here? What’s the point of writing the function if you don’t call it? But that code is just more work than you need to do, not actually incorrect.

But then you hard-code the number of iterations and the learning rate when you call optimize. I literally showed you how to do that without hard-coding in my earlier post on this thread.

Then you proceed to duplicate the “update parameters” logic from optimize. Why do that here? The whole point is that you already did that in optimize, right?


OK, I fixed the initialize-with-zeros call and commented out the lines for the derivatives and updated weights. I am no longer passing hardcoded values to optimize. Now I am getting more errors.

{moderator edit - solution code removed}

This code is still wrong:

{moderator edit - solution code removed}

Look at the definition of initialize_with_zeros. What does it return? Print the type of w and you’ll find it’s a tuple, which is why it throws that error. It’s supposed to be a numpy array, right?
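A sketch of that failure mode, using a hypothetical stub with the same shape of return value (two values, w and b): if you bind a two-value return to a single name, that name is a tuple, which is exactly what trips the type assertion.

```python
import numpy as np

def initialize_stub(dim):
    # Hypothetical stand-in returning two values, like (w, b).
    return np.zeros((dim, 1)), 0.0

w = initialize_stub(2)      # binds BOTH return values to one name
print(type(w))              # <class 'tuple'> -- fails a np.ndarray check

w, b = initialize_stub(2)   # unpack each return value to its own name
print(type(w))              # <class 'numpy.ndarray'>
```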

Once you fix that, you’ll find that this code is also wrong:

{moderator edit - solution code removed}

What are the values of w and b after you call optimize? You don’t modify them, right? So they will still be zero.
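A minimal sketch of that point, assuming (as in the Exercise 6 cell quoted earlier) that optimize returns a params dict keyed "w" and "b". The stub and its values are hypothetical; the takeaway is that the locals you passed in are not modified, so the trained values must be retrieved from the returned dict:

```python
def optimize_stub(w, b):
    # Pretend training happened; return updated values in a dict.
    return {"w": w + 1.0, "b": b + 1.0}

w, b = 0.0, 0.0
params = optimize_stub(w, b)

print(w, b)       # still 0.0 0.0 -- the locals are unchanged
w = params["w"]   # retrieve the trained values, don't recompute them
b = params["b"]
print(w, b)       # 1.0 1.0
```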

Finally, success. Took care of the initialization and then read the instructions more carefully and realized that all of the hard work was done in optimize and I didn’t need to recalculate w and b, just retrieve them.

So all tests were finally passed.

Unfortunately, my cat was predicted to be a non-cat. I don’t know how to break the news to her.

Thanks again for all your help.

That’s great to hear that your solution is complete!

Portia is a very beautiful cat and as unambiguously catlike as one could imagine, but our model here is not really that generalizable. Logistic Regression is not as powerful as the Neural Nets we will learn about next, but even there we still won’t get a model that works very well on arbitrary images. Our training set (209 images) is ludicrously small for a task this complex.

Given the smallness of the training set, it’s actually kind of surprising it works as well as it does. Remind me of this point when you get to the end of Week 4 and I’ll point you to some further discussion on the performance of the models here.