I have gone through the notebook. All tests pass for the optimize and predict functions, but when I pull them together in the model function, I get an assertion error telling me the weight vector differs from what is expected.
Hi, @Matt_Gerhold. It would be more helpful if you posted the entire traceback (the error log) from the execution. I suspect a “hard-coding” mistake, where the helper functions called within your new function are not inheriting the parameters specified for your new function.
Thanks for the reply. I figured it out: I needed to make sure the learning rate and number of iterations were passed through to optimize. It was reverting to the defaults from the function definition.
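In case it helps anyone else, here is a minimal, self-contained sketch of the calling pattern that fixed it for me. The function bodies, signatures, and default numbers here are just placeholders (the real optimize lives in the notebook and may differ); the only point is that model forwards the hyperparameters it receives instead of hard-coding numbers or letting optimize fall back to its own defaults.

```python
import numpy as np

# Toy stand-in for the notebook's optimize, only to show the calling pattern.
# The real implementation and default values in the assignment may differ.
def optimize(w, b, X, Y, num_iterations=100, learning_rate=0.009, print_cost=False):
    costs = []
    for i in range(num_iterations):
        pass  # the real gradient-descent update would go here
    return {"w": w, "b": b}, {"dw": np.zeros_like(w), "db": 0.0}, costs

def model(X_train, Y_train, num_iterations=2000, learning_rate=0.5, print_cost=False):
    w, b = np.zeros((X_train.shape[0], 1)), 0.0

    # Wrong:   optimize(w, b, X_train, Y_train)             -> silently uses optimize's own defaults
    # Wrong:   optimize(w, b, X_train, Y_train, 2000, 0.5)  -> hard-codes one particular test case
    # Correct: forward whatever values model itself was given:
    params, grads, costs = optimize(w, b, X_train, Y_train,
                                    num_iterations=num_iterations,
                                    learning_rate=learning_rate,
                                    print_cost=print_cost)
    return params, grads, costs

# Quick check with dummy data: the loop inside the toy optimize runs 50 times,
# not the default 100, because the value is forwarded.
model(np.random.randn(4, 10), np.random.randint(0, 2, (1, 10)),
      num_iterations=50, learning_rate=0.01)
```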
I had made the same mistake of not passing the learning rate and number of iterations through to the optimize function. You just ended my 5 hours of misery. Thank you!
I am also facing the same issue. If I provide the number of iterations and learning rate as defined in the model function (2000 and 0.5, respectively), I get an error in the d["costs"] value, and if I provide the earlier-mentioned values (100 and 0.09, respectively), I get the error in d["w"]. I’m not able to resolve this.
But the point is that you don’t need to supply any explicit values, right? Just pass through the parameter values that were given to model at the top level. That’s the whole point here.
What you are doing is “hard-coding” the values, but the code is supposed to be written to work in every case, not in some particular case, right?
Yes, I understood that. But I also tried leaving out the explicit values, and I still get the same assertion error saying the dw values are wrong.
But leaving out the optional parameters is also a form of “hard-coding”: it means you get the default values that are declared in the definition of the optimize function. That was the point that Matt G was making in the first several posts of this thread. You have to pass the learning rate and number of iterations, but just pass the variables through without explicitly assigning them values.
Also note that the “hard-coding” bugs give the error that your dw array has the wrong size. If it’s complaining about the values of dw as opposed to the size, then maybe you’ve got a different kind of bug. Please show us the actual error output you are getting, if my statement above doesn’t help.
It’s good to hear that it helped. Note that this is a pretty fundamental issue in how function calls work in Python. If you are new to Python, you might want to google “python named parameters” or “python keyword parameters” and spend some time reading a few tutorials about how keyword parameters work.
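If it helps, here is a tiny standalone illustration of how Python default and keyword parameters behave (nothing assignment-specific, the names are made up):

```python
def greet(name, punctuation="!"):
    # punctuation has a default value, so callers are allowed to omit it
    return "Hello, " + name + punctuation

print(greet("Ada"))                   # Hello, Ada!   (omitted -> the default "!" is used)
print(greet("Ada", punctuation="?"))  # Hello, Ada?   (keyword argument overrides the default)

# The same rule is what bites people in the assignment: if model does not pass
# learning_rate / num_iterations along, the callee quietly uses its own defaults.
```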
You don’t need to specify the learning rate: it is passed to you as a parameter. You just need to use the parameter value that is passed to you. If you are setting it to an explicit decimal value anywhere, that is a mistake.
Also note that this thread is more than a year old. It’s likely that Matt is not listening any longer.