@rmwkwok Dear Raymond,
Yes, you are correct: they are all trained at the same time, as one model.
Like I said, your idea was interesting to think about, but it is not what I was asking about.
For one thing, I feel it is super important to be clear: when I say ‘β’ (as in ‘beta’), I am definitely not talking about ‘b’ as the intercept term (i.e., the b in Y = mX + b); I am talking about a regression coefficient.
My R skills are, at present, probably stronger than my scikit-learn skills, so in R I’d write:
lm(Y ~ house_price + house_color, data = X)
Assuming, of course, I had a data set X with these two columns.
Or possibly I could just toss everything at it:
lm(Y ~ ., data = X)
And that would probably also work, programmatically; but my point is about the part I am still learning.
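To make that concrete, here is a minimal sketch of what I mean (made-up data; the column names are the hypothetical ones from above):

set.seed(1)
X <- data.frame(
  house_price = runif(20, 100, 500),
  house_color = sample(c("red", "blue"), 20, replace = TRUE),
  Y = rnorm(20)
)
fit <- lm(Y ~ house_price + house_color, data = X)
coef(fit)  # "(Intercept)" is the b; the other entries are the betas, one per predictor term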
Even in the article linked earlier, I know they write out only two ‘beta terms’. But a traditional regression model could have, say, 12, or even hundreds of them: Y = b + β_1 X_1 + β_2 X_2 + … + β_p X_p.
Thanks, everyone, for the help, but if I am totally off the mark, please explain to me just this:
How can y = wx + b possibly equal y = (w_1 x_1) + (w_2 x_2) + b? I mean, there is an addition sign between the multiplication terms…
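And just to pin down what I mean, here is a tiny numeric check in R, assuming the first form is meant with w and x as vectors, so that wx reads as their dot product (made-up numbers):

w <- c(2, 3)    # w_1, w_2
x <- c(10, 20)  # x_1, x_2
b <- 5
sum(w * x) + b               # vector form: wx + b, with wx read as a dot product
w[1]*x[1] + w[2]*x[2] + b    # expanded form: (w_1 x_1) + (w_2 x_2) + b
# both lines print 85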