W2_A2_Wrong values for d[w]

I have gone through the notebook. All tests passed for the optimize and predict functions, but when I pull them together in the model function, I get an assertion error telling me the weight vector differs from what is expected.

AssertionError: Wrong values for d['w']. [[ 0.14449502]
 [-0.1429235 ]
 [-0.19867517]
 [ 0.21265053]] != [[ 0.08639757]
 [-0.08231268]
 [-0.11798927]
 [ 0.12866053]]

Hi, @Matt_Gerhold. It would be more helpful if you posted the entire traceback (the error log) from the execution. I suspect a "hard-coding" mistake, where you do not let the helper functions called within your new function inherit the parameters specified in your new function.

Let’s see the traceback! :nerd_face:

Thanks for the reply. I figured it out: I needed to make sure the learning rate and number of iterations were passed through to optimize. It was reverting to the defaults from the function definition.
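For anyone else hitting this, the bug pattern looks roughly like the sketch below. The function names and default values here are assumptions based on this thread, not the actual course code; optimize is reduced to a stub that just reports what it ran with.

```python
# Sketch of the bug: optimize's defaults differ from the values that
# model receives, so forgetting to forward them silently changes results.
# (Names and defaults are assumed, not the real assignment code.)

def optimize(w, num_iterations=100, learning_rate=0.009):
    # Stand-in for the gradient descent loop: record what it ran with.
    return {"num_iterations": num_iterations, "learning_rate": learning_rate}

def model_buggy(num_iterations=2000, learning_rate=0.5):
    # Bug: this call falls back to optimize's defaults (100, 0.009),
    # ignoring whatever was passed to model.
    return optimize(0.0)

def model_fixed(num_iterations=2000, learning_rate=0.5):
    # Fix: forward the parameters that model itself was given.
    return optimize(0.0, num_iterations=num_iterations,
                    learning_rate=learning_rate)

print(model_buggy())  # {'num_iterations': 100, 'learning_rate': 0.009}
print(model_fixed())  # {'num_iterations': 2000, 'learning_rate': 0.5}
```

The buggy version still runs without any error, which is why the mistake only shows up later as wrong values in the test comparison.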

Well done Matt. Onward! :+1:

Thank you. That solved my problem too.

I actually made the mistake of hard-coding the learning rate and number of iterations in the call to the optimize function. You just ended my 5 hours of misery. Thank you!

I am also facing the same issue. If I provide the number of iterations and learning rate as defined in the model function (2000 and 0.5, respectively), I get an error in the d['costs'] value, and if I provide the earlier-mentioned values (100 and 0.009, respectively), I get the error in d['w']. I'm not able to resolve this.

But the point is that you don’t need to supply any explicit values, right? Just pass through the parameter values that were given to model at the top level. That’s the whole point here.

What you are doing is “hard-coding” the values, but the code is supposed to be written to work in every case, not in some particular case, right?

Yes, I understood that. But I also tried not passing any explicit values, and I still get the same assertion error saying the d['w'] values are wrong.

But leaving out the optional parameters is also a form of "hard-coding": it means you get the default values declared in the definition of the optimize function. That was the point Matt G was making in the first several posts of this thread. You do have to pass the learning rate and number of iterations, but pass the variables themselves rather than explicit values.

Also note that the "hard-coding" bugs typically give an error that your dw array has the wrong size. If it's complaining about the values of dw rather than the size, then maybe you've got a different kind of bug. Please show us the actual error output you are getting if my statement above doesn't help.

Understood. Thank you very much!

It’s good to hear that it helped. Note that this is a pretty fundamental issue in how function calls work in python. If you are new to python, you might want to google “python named parameters” or “python keyword parameters” and spend some time reading a few tutorials about how the keyword parameters work.
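To make the keyword-parameter point concrete, here is a minimal, self-contained sketch (the function and its parameters are made up for illustration, not from the assignment):

```python
# Keyword (named) parameters in Python: parameters with defaults can be
# omitted, overridden positionally, or overridden by name in any order.

def greet(name, greeting="Hello", punctuation="!"):
    return f"{greeting}, {name}{punctuation}"

print(greet("Ada"))                   # defaults fill in: Hello, Ada!
print(greet("Ada", greeting="Hi"))    # override one by name: Hi, Ada!
print(greet("Ada", punctuation="?"))  # skip a middle default: Hello, Ada?
```

The key behavior for this thread is the first call: any parameter you do not pass silently takes the default from the function definition, which is exactly how optimize ends up running with the wrong learning rate.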

Sure, will do that. Thanks!

Thanks, it worked. I had been struggling with my code since yesterday.

Hi Matt, I also have the same error.

Can you suggest the value of the learning rate in decimals?

You don't need to specify the learning rate: it is passed to you as a parameter, and you just need to use that parameter value. If you are writing an explicit decimal value for it anywhere, that is a mistake.

Also note that this thread is more than a year old. It’s likely that Matt is not listening any longer.

There is an error.


AssertionError                            Traceback (most recent call last)
<ipython-input-...> in <module>
      1 from public_tests import *
      2
----> 3 model_test(model)

~/work/release/W2A2/public_tests.py in model_test(target)
    131     assert type(d['w']) == np.ndarray, f"Wrong type for d['w']. {type(d['w'])} != np.ndarray"
    132     assert d['w'].shape == (X.shape[0], 1), f"Wrong shape for d['w']. {d['w'].shape} != {(X.shape[0], 1)}"
--> 133     assert np.allclose(d['w'], expected_output['w']), f"Wrong values for d['w']. {d['w']} != {expected_output['w']}"
    134
    135     assert np.allclose(d['b'], expected_output['b']), f"Wrong values for d['b']. {d['b']} != {expected_output['b']}"

AssertionError: Wrong values for d['w']. [[ 0.14449502]
 [-0.1429235 ]
 [-0.19867517]
 [ 0.21265053]] != [[ 0.08639757]
 [-0.08231268]
 [-0.11798927]
 [ 0.12866053]]

Sir, can you please explain which parameters are passed to me?

You have to pass num_iterations as num_iterations and learning_rate as learning_rate.

Thank you. It is working.