Why do we still call it linear regression when we add polynomials?

In addition to @TMosh’s excellent reply:

You could also formulate:


y = w_1 \cdot x_1 + w_2 \cdot x_2 + b, which is a linear model y = X \cdot w (the bias b can be absorbed into w by adding a constant feature to X)

  • with your matrix X, consisting of your features
    • x_1 = f_1(x) = x and
    • x_2 = f_2(x) = x^2.
  • your features are well defined, and you learn the weights w_1, w_2, b by fitting the model
  • In general, f(x) could be any suitable nonlinear function for each feature, which lets you encode domain knowledge! This strategy means modelling the nonlinearity of your problem in the features, while the model itself stays linear in the weights — and that is why it is still called linear regression. (Of course, f(x) could also just be a linear function, as in f_1(x).)
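A minimal sketch of this idea in NumPy (toy data and coefficient values are my own illustration, not from the course): we generate data from a quadratic, build the feature matrix X with x_1 = x and x_2 = x^2 plus a constant column for b, and then fit it with ordinary least squares, exactly as for any linear model.

```python
import numpy as np

# Toy data from a known quadratic, plus a little noise (values are illustrative).
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50)
y = 1.5 * x**2 - 2.0 * x + 0.5 + rng.normal(scale=0.1, size=x.shape)

# Feature matrix: x_1 = f_1(x) = x, x_2 = f_2(x) = x^2,
# and a column of ones so the bias b is just another weight.
X = np.column_stack([x, x**2, np.ones_like(x)])

# The model y = X·w is linear in w, so ordinary least squares applies directly,
# even though y is a nonlinear function of the original input x.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
w_1, w_2, b = w
print(w_1, w_2, b)  # should be close to -2.0, 1.5, 0.5
```

The key point: the nonlinearity lives entirely in how X was constructed; the fitting step never changes.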


Hope that helps!

Happy learning and best regards