If we feature scale and use polynomial features, won’t linear activation functions perform polynomial regression?
Wouldn’t this invalidate the idea that using linear activation functions in multiple neurons is just equivalent to performing linear regression once?
A linear activation function used in many neurons is equivalent to just doing linear regression once - I understand that
Where I am confused is: what if we feature engineer to use polynomial features? Won’t the linear activation functions perform polynomial regression then? (Please see image 2 from Course 1, Week 2 of the ML Specialization.)
In the lecture we see that linear activation function in multiple layers is as good as using linear regression.
So my question is: if we use polynomial features x, x^2, x^3, will having multiple layers with a linear activation function still be just as good as linear regression?
As an example, in the NN architecture below, what if we were to use polynomial features as well?
Using linear activation functions in all layers is pointless - that’s correct.
However, if we were to use polynomial features, what’s the effect of using a linear activation function then? Does this introduce non-linearity, since the features themselves are polynomial, so we are okay? Or do we still run into the same outcome, where we should just use polynomial regression rather than a neural network altogether?
This does get close and answers another question I had.
Sorry about not being able to phrase the question well. I’ll give it another try below. Thanks for your patience.
I’m seeking further clarification on a specific aspect discussed during the lecture, particularly related to the use of linear activation functions in neural networks.
The lecture highlighted that employing a linear activation function across multiple layers essentially yields the same effect as conducting linear regression. This leads me to ponder the scenario where polynomial features (such as x, x^2, x^3, etc.) are incorporated into the model. Specifically, my question is:
If we integrate polynomial features within a neural network architecture that employs linear activation functions across its layers, does this approach still equate to performing linear regression? Or does the inclusion of polynomial features introduce a level of non-linearity that makes this configuration more advantageous than mere linear regression?
To illustrate, consider a neural network architecture as follows, but with an added twist of incorporating polynomial features.
The core of my confusion lies in understanding the impact of linear activation functions when used in conjunction with polynomial features:
Is using linear activation functions in all layers still considered unnecessary if we include polynomial features?
Does the use of polynomial features with linear activation functions introduce any non-linearity to the model, thereby justifying the neural network’s architecture over traditional polynomial regression?
I appreciate your insights on this matter, as it’s a point of confusion that I’m eager to resolve.
A: original features + multiple hidden layers with “linear” activation + an output layer
B: original features + an output layer
C: polynomial features + multiple hidden layers with “linear” activation + an output layer
D: polynomial features + an output layer
Here:
A & B are equivalent, and they are both linear regressions of the input features (which are the original features)
C & D are equivalent, and they are both linear regressions of the input features (which are some polynomial features)
In A, B, C, and D, their outputs are linear with respect to their inputs, so they are all linear regressions.
The polynomial features bear some non-linearity with respect to the original features. The features bear the non-linearity, NOT the neural networks.
Therefore, with respect to the original features, C & D carry some non-linearity which is brought NOT by the neural networks BUT by the feature engineering process.
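Roughly, the collapse behind "A & B are equivalent" (and likewise C & D) can be seen in a few lines of NumPy; the shapes and weights below are just made up for illustration:

```python
# Minimal sketch: two layers with "linear" activation collapse into one linear layer.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))          # 5 examples, 3 input features

W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=4)   # hidden layer, linear activation
W2, b2 = rng.normal(size=(4, 1)), rng.normal(size=1)   # output layer

# Forward pass through the two-layer "network" with linear activations
out_network = (X @ W1 + b1) @ W2 + b2

# The same computation as a single linear layer: W = W1 W2, b = b1 W2 + b2
W, b = W1 @ W2, b1 @ W2 + b2
out_single = X @ W + b

print(np.allclose(out_network, out_single))   # True: the stacked linear layers are one linear map
```

Whether X holds the original features or engineered polynomial features, the network’s output is always a linear function of whatever X contains.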
Yes, that neural network is still a linear regression with respect to its input (which is a set of some polynomial features).
First, I think you were implying that, with polynomial features, it is not called linear regression. This is wrong. It is still a linear regression, because our model only ever establishes a linear relationship between the inputs and the output, and it does not care whether the inputs are non-linear with respect to something else.
So, as said, both with the polynomial features and with the original features, the models are linear regressions. All of A, B, C, and D are linear regressions.
However, using polynomial features DOES bring some non-linearity in, compared to the original features, but whether it is beneficial to do so remains to be tested.
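To make it concrete, here is a minimal sketch with made-up data: the non-linearity lives entirely in the feature engineering step, and the model that gets fitted is still ordinary linear regression.

```python
# "Polynomial regression" is just linear regression on engineered polynomial features.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=(100, 1))
y = 1.0 + 2.0 * x - 3.0 * x**2 + 0.5 * x**3 + rng.normal(scale=0.1, size=(100, 1))

# Feature engineering: x -> [x, x^2, x^3]  (the non-linearity is introduced here)
X_poly = PolynomialFeatures(degree=3, include_bias=False).fit_transform(x)

# The model itself is plain linear regression in the engineered features
model = LinearRegression().fit(X_poly, y.ravel())
print(model.intercept_, model.coef_)   # roughly recovers 1.0 and [2.0, -3.0, 0.5]
```

The fitted output is linear in the columns of X_poly, exactly as in cases C and D above, even though it is a cubic curve with respect to the original x.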
@rmwkwok Thank you so much for explaining this. It really clears up my doubts.
Knowing that A & B are equivalent, and so are C & D, really made me understand that using a linear activation function across all layers of a neural network is the same as just performing linear regression.
I see. For some reason I assumed that if we use polynomial features it’s “polynomial regression” and not linear regression.
Understood. Even if the features are polynomial, the model is still linear, because the model is establishing a linear relationship between the inputs and the outputs.
I am understanding that what matters here is that f(x) = w * x + b, and x can be anything, but that doesn’t change the fact that the model is performing linear regression.
True that. I think that will be more related to the problem one is trying to solve!
@TMosh that hits home and clears a lot of my misunderstandings as well.