Having some trouble on week 2 lab


I’m having some trouble with the week 2 lab.

sigmoid
initialize_with_zeros
propagate
optimize
predict

are all working. All tests passed, expected output matched.

In particular, for “optimize” I’m getting this output (this is not code I wrote; it is the output of the test for my code).

I’m now working on the function “model”. This is supposed to put it all together: not only run gradient descent (which optimize already did) but also use the train/test split and evaluate the predicted values on the training and test data.

As far as I can tell, the main difference in the gradient descent aspect is the number of iterations and the learning rate, which I do not get to choose myself. In both cases, for optimize (which is working) and for model (which is not), gradient descent is run on a one-layer neural network with a logistic activation function.

I have model implemented, but I’m puzzled by the error I’m getting. It says that the value of my cost function is wrong. The only place I use a cost in my code is the output from optimize (and I don’t know of another place I should be using it within the model function). I don’t know the number of iterations used in the test of the model function pictured below, but the test of optimize used 100 iterations, so the issue is probably not something wrong with the way I’m accumulating dw or db in the derivatives, or mismatched matrix dimensions in the optimization algorithm.

It is not surprising, I guess, that the values I obtain from the optimize and model algorithms are basically the same (the .159 value), because the initial state of w and b is the same (both are set to zero) and both operate on the same population of X data. However, some difference should be expected, because within the model function it operates on X_train rather than all of X, and that makes it a bit surprising that the two costs are numerically identical to every digit. Presumably it is correct, then, that the cost should be something else in the model function.

I’ve double-checked the other issues in the error pictured. Going back up to optimize, w seems to be an array before it is put into the dictionary, and I can’t really see how it would be anything other than an array when it is taken back out of the dictionary.

I can see that costs has the correct type, length, and format from the output of the optimize function above. So it is probably a numerical-value issue, as the error actually thrown says.

I can’t think of any way I should need to modify the cost function after optimization is done. At that point, what remains should be prediction.

Could the issue be the deep copy of w in the model function? Although a data structure is an object, a deep copy is more meaningful in the context of a data structure containing a data structure, or a custom object. When w is deep copied, does it update w within the dictionary? Or is it necessary to deep copy the dictionary instead?
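(A quick way to check the deep-copy question in isolation, with made-up shapes rather than the assignment’s code: deep-copying w gives an independent array, it does not update anything stored in a dictionary, and copying the dictionary’s contents would require deep-copying the dictionary itself.)

import copy
import numpy as np

w = np.zeros((2, 1))
params = {"w": w}

w_copy = copy.deepcopy(w)   # an independent array ...
w_copy += 1.0               # ... so modifying it does not touch w or params["w"]
print(params["w"].ravel())  # [0. 0.]  (unchanged)

# To get an independent copy of what is inside the dictionary,
# the dictionary itself would have to be deep-copied:
params_copy = copy.deepcopy(params)
params_copy["w"] += 1.0
print(params["w"].ravel())  # [0. 0.]  (the original is still unchanged)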

Thanks,
Steven

Your model code should work for any number of iterations, not just 100. One common mistake in this exercise is to hard-code the number of iterations and the learning rate when calling the optimize function inside the model function. You should avoid that.

Hi @s-dorsher,

Since your optimize function is working correctly, the issue is likely within model or in how costs are accumulated.

Make sure you’re not modifying the cost function outside of optimize. Also, keep in mind that different train-test splits and weight initializations can lead to different results.

Hope this helps! Let me know if you need further clarification.

I’m confused.

Specification we are asked to fulfil:

def model(X_train, Y_train, X_test, Y_test, num_iterations=2000, learning_rate=0.5, print_cost=False):

Within it, I call optimize. As you say, I did not and do not hard-code the number of iterations. The following is what I did.

num_iterations=num_iterations

Is that wrong?

In theory it should bind the value of the variable num_iterations in the model function to the parameter num_iterations of the optimize function, but I’m not 100% sure I understand the internal workings of Python well enough to be certain that is in fact what it does.
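(A tiny standalone example of what that keyword argument does, using toy stand-ins rather than the assignment’s functions:)

# Toy stand-ins, only to show the mechanics of num_iterations=num_iterations
# when one function calls another.
def optimize(num_iterations=100, learning_rate=0.009):
    return num_iterations, learning_rate

def model(num_iterations=2000, learning_rate=0.5):
    # right-hand side: model's local variable; left-hand side: optimize's parameter
    return optimize(num_iterations=num_iterations, learning_rate=learning_rate)

print(model())                   # (2000, 0.5)  model's values flow through
print(model(num_iterations=50))  # (50, 0.5)    whatever the caller passes flows through

So whatever value the test passes to model is the value optimize receives; nothing is hard-coded.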

If not that, what should I be doing instead to pass the check at the end?

Thank you. I’m not modifying the cost function outside of optimize.

I agree, they should lead to different results. The irony is that the error I’m getting from the assert says that I get the same result when I input X_test into optimize as when the test code was run on optimize before.

[array(0.15900538)]

This number was completely identical, to every digit, to the number I got from the optimize test output before. So unless the optimize test ran with the same parameters and data, something is artificially holding the Jupyter notebook’s memory fixed.

To be totally clear, whatever is causing it, I’m not getting the desired output, which the assert says is

[array(0.69314718)]

I’m not sure why the desired output is supposed to be so different from the optimize test case, but presumably it relates to the different parameters or data used.
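(For what it’s worth, 0.69314718 is ln 2, which is exactly what the logistic cost evaluates to while w and b are still zero: the sigmoid then outputs 0.5 for every example, so each term of the cost is −log 0.5 = log 2, regardless of the labels or the data. A quick check, independent of the assignment’s data:)

import numpy as np

y_hat = 0.5                    # sigmoid(0): the prediction while w = 0 and b = 0
y = np.array([0, 1, 1, 0, 1])  # any labels give the same answer
cost = -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
print(cost, np.log(2))         # 0.6931471805599453 0.6931471805599453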

Given the exact same parameters and the exact same data, it is possible I could get the exact same answer in the two functions. But I didn’t input the data or parameters! The test case did! So unless the test case was set up to do that and the assert is wrong, I think there’s something wrong with the Jupyter notebook’s memory management, and I suggest maybe that’s the deep copy? Maybe it doesn’t belong there, or maybe it’s used incorrectly? It’s in part of the code that I’m not allowed to modify, so I can’t try to fix it.

Here’s my rough analysis:

When you pass w to optimize, I think the deep copy makes a pointer that ties the variable passed as the argument to the local variable inside the function. Because the local variable is stored in a dict at the end, the argument to the function ultimately points to the resulting dict after one iteration; in other words, to a constant.

w is also a local variable inside the model function. Once it is passed to the optimize function, it becomes tied to that pointer, and then it takes on the value it points to through the dict, which can only be the value optimize calculated the first time it ran.

Because w and b are fixed, the cost function can also only be the same.

I think that’s what’s going on. I’m not sure.
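(This can be checked in isolation too. In the toy below, with made-up names rather than the assignment’s code, rebinding the parameter inside the function does not affect the caller’s w; the updated value only comes back through the returned dictionary.)

import numpy as np

def optimize_toy(w):
    w = w + 1.0             # creates a new array and rebinds the local name only
    return {"w": w}

w = np.zeros((2, 1))
params = optimize_toy(w)

print(w.ravel())            # [0. 0.]  (the caller's w is untouched)
print(params["w"].ravel())  # [1. 1.]  (the updated value lives in the returned dict)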

Steven

Yes, this is the correct way to do it when you call the optimize function inside the model function. Same for the learning rate.

Make sure you are using the initialize_with_zeros function to initialize the parameters with zeros. You don’t need to copy or deep-copy w and b. Just retrieve w and b from the dictionary “params”. For example, you can retrieve x from the dictionary “dict” like dict[“x”].
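(A minimal illustration with made-up values, not the assignment’s real params:)

params = {"w": [[0.0], [0.0]], "b": 0.0}

w = params["w"]      # indexing with the key ...
b = params.get("b")  # ... or .get() both retrieve the stored value
print(w, b)          # [[0.0], [0.0]] 0.0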

We use the optimize function to update the parameters, so they are not fixed.

I am completely aware that is how it would work if it were working correctly. It is basically solving a differential equation, though with some added complexity. However, it is obviously not working correctly, and therefore they are not updating.

Hello, @s-dorsher,

Given that you were getting a previous cost value, I am afraid you might just have been assigning that stale value to d['costs']. Check the spelling of the variable names returned by optimize in your model().

d['costs'] takes costs.
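(A toy reproduction of that failure mode, with made-up numbers rather than the assignment’s code: if the unpacking name is misspelled, d["costs"] silently picks up whatever an earlier cell left behind.)

costs = [0.159]              # stale value left over from an earlier cell or run

def optimize_toy():
    return "params", "grads", [0.693]   # stand-ins for params, grads, costs

params, grads, cost = optimize_toy()    # note: unpacked into cost, not costs

d = {"costs": costs}         # silently stores the stale list
print(d["costs"])            # [0.159] instead of [0.693]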

Cheers!
Raymond

I’m not. This is built into optimize in the part I’m not allowed to edit. Can I remove it then? Shown below

Yes, without revealing all of my code, I have

w,b = initialize_with_zeros(shape_that_is_consistent_with_matrix_math)
params, grads, cost=optimize(w,b,...)
w=some_dict.get('w')
b=some_dict.get('b')
two calls to predict that don't depend on or return cost

:grimacing:

What’s going wrong here? I haven’t really changed any of this since the beginning of our conversation, but I don’t see anything wrong with it yet either…

Here:

[screenshot of the line in model that unpacks the return values of optimize]

Should be costs, with an s at the end.


As Raymond mentioned, it should be costs.
params, grads, costs = ...


OMG thank you! It is ALWAYS spelling or misreading the question with me… EVERY SINGLE DANG TIME

Thank you thank you thank you!


That happens most of the time. As Prof. Andrew said, “[T]he first step is to suck at it. If you’ve succeeded at sucking at AI – congratulations, you’re on your way!”

While I know what you’re talking about, and have been totally aware of this phenomenon for over a decade… I’m not sure that step is as helpful as you think…

Well now I’m working on the optional part, and it says the file isn’t in the directory. You might say, colloquially, that I’m scratching my head, but in practice I’m also scratching my lips and sneezing, because who knows why but I’m allergic again. Let’s hope looking beautiful isn’t a job requirement. Anyhow, back to the programming. I’m not sure where this file is, if it’s not in the directory, which it totally is.

[attached image]

Umm… what’s the problem with this one?

This is a picture of my cat from when I lived in St Louis Park, a suburb of Minneapolis, with my ex-husband Chuck, in 2010ish. It is a cat. He’s mad at both of us; I forget why. I think it’s the only picture we have of him. I’m not in touch with Chuck anymore. I don’t have any other cat pictures. I don’t actually like cats; I’m allergic. Sadly, Cugel is dead. I adored Cugel and took him on walks and fed him human food and brushed him.


Haha. Well, in your first post you did mention that it was the same as a previous value, so your observation and analysis did give you a possible lead, even if it was unfortunately overlooked; I think you are already on the right track. In fact, the reason I want to point this out (here and in this post) is that it might strengthen the impression so that, next time you see a “previous value”, it connects you to checking spelling.

Good luck!


Because it wanted you to put it in the images folder.
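(If it is unclear where the notebook is looking, a generic check like this one, with a made-up file name, shows whether the path it would build under images/ actually exists:)

import os

my_image = "cat.jpg"                      # made-up file name
fname = os.path.join("images", my_image)  # assuming the file is looked up under images/, per the reply above
print(fname, os.path.isfile(fname))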

Let us know if it can make the prediction correctly!! :raised_hands::raised_hands::raised_hands:

Not so adorable picture of cat
