Practice quiz: Gradient descent in practice Q5

True/False? With polynomial regression, the predicted values f_w,b(x) do not necessarily have to be a straight-line (or linear) function of the input feature x.

This question sounds misleading to me.
The regression only works if the feature vectors are first projected from a linear to a non-linear form.
But then the predicted values still follow a straight line with respect to those input features.
It is just that the input features have been engineered up front.

Hello @Google_Google,

I can see your point, and at first I also wondered why that was the answer. Then I had a second look at the question and realized that the x in f_{wb}(x) can just be a scalar instead of a vector. If it is a scalar, then the model can be something like f_{wb}(x) = w_1x^2 + w_2x + b, which is non-linear with respect to x.

To fit perfectly your way of interpretation into the formula, it has to be f_{wb}(\vec{x}) = w_1x_1 +w_2x_2 + b where \vec{x} = \begin{bmatrix} x_1 & x_2 \end{bmatrix}, x_1 = x^2 and x_2 = x.
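The two interpretations give the same fit, which is easy to check numerically. Below is a minimal NumPy sketch (the toy data and true coefficients are made up for illustration): we engineer the features [x^2, x] up front and solve ordinary least squares, so the model is linear in the engineered features but a parabola in the original scalar x.

```python
import numpy as np

# Toy data: y is an exact quadratic in the scalar feature x.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x**2 + 3.0 * x + 1.0          # true w1=2, w2=3, b=1

# Engineer the features up front: each row is [x^2, x].
X = np.column_stack([x**2, x])

# Solve the ordinary least-squares problem for [w1, w2, b].
A = np.column_stack([X, np.ones_like(x)])  # append a column of ones for the bias b
w1, w2, b = np.linalg.lstsq(A, y, rcond=None)[0]

# The fitted model is linear in the engineered features [x^2, x],
# but non-linear (a parabola) in the original scalar x.
print(w1, w2, b)  # ≈ 2.0, 3.0, 1.0
```

Whether you call x^2 a "new feature" x_1 or just a term of a polynomial in x, the solver sees the same design matrix.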

I understand that sometimes we might drop the vector sign, but it might really just be that the intention behind the question is about the scalar feature. So there is only one x, one feature, and consequently the function is expressed as a non-linear one with respect to that x.

I hope my answer won’t be discouraging to you. I think above the test itself, it is more important that we understand the concept, and I think you absolutely do because otherwise you wouldn’t have said it was misleading. It could only be misleading because you knew there were two possible interpretations.

Thank you for posting your question :slight_smile:


Sure it’s a matter of definition.
And from the text of the question it cannot be inferred whether by "input feature x" the original data is meant, or the transformed, engineered feature x^2. That's all I'm saying.

Hi @Google_Google
True. With polynomial regression, the predicted values f_w,b(x) are a polynomial function of the input feature x, which in general is not a straight line. The polynomial regression model fits a polynomial equation to the input-output data and makes predictions; the degree of the polynomial can be increased to make the model more flexible, and for any degree above one the predicted values are a curved, non-linear function of x.

It's possible to project the feature vectors from a linear to a non-linear form before applying regression, but this is a preprocessing (feature engineering) step, not a step in the regression itself. The goal of this step is to transform the input features in a way that captures the underlying non-linearity of the problem; the predicted values are then a linear function of the engineered features, but still a non-linear function of the original input feature x.

Muhammad John Abbas

Awesome, I get it now haha. I am always getting up early to do the course so sometimes my thinking is a bit slow :smiley:

So the conclusion is that it is possible to fit a polynomial regression model without pre-processing the data. The model is of the form f(x) = w_n x^n + w_{n-1} x^{n-1} + … + w_1 x + b.
In this case, for obvious reasons, we can end up with a non-linear f.

However, it is also possible to pre-process the data (transform x into polynomial features) and then feed it into the standard linear regression model f = wx + b.
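The two routes described above land on the same coefficients, which can be sketched as follows (a minimal NumPy check on made-up cubic toy data; `np.polyfit` stands in for the "direct polynomial fit" view):

```python
import numpy as np

# Toy data: y is an exact cubic in x.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
y = 0.5 * x**3 - x + 4.0

# View 1: fit a degree-3 polynomial directly.
coeffs_direct = np.polyfit(x, y, deg=3)

# View 2: pre-process x into polynomial features, then run plain
# linear regression f = w·features + b via least squares.
A = np.column_stack([x**3, x**2, x, np.ones_like(x)])
coeffs_linear = np.linalg.lstsq(A, y, rcond=None)[0]

# Both routes recover the same coefficients [0.5, 0, -1, 4].
print(np.allclose(coeffs_direct, coeffs_linear))  # True
```

Either way the model stays linear in the parameters w and b, which is why ordinary linear-regression machinery (normal equations, gradient descent) still applies.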

I think the question is an outlier, as it requires reflecting beyond just what was taught in the labs/lectures. I like that, but the other questions kind of conditioned me towards just repeating what was said/done in the course.

Thanks to both of you for your explanations! :slight_smile:
