In the lectures, we typically worked with datasets containing a single feature. However, in real-world applications, datasets usually have multiple features. How can we decide which features to transform into polynomials to improve model performance? Should we apply the polynomial transformation to all features, or is there a way to identify the ones that will contribute most effectively?
Yes, up to a point.
But if you have a large number of features, it's more effective to use a neural network rather than engineering the features yourself.
A neural network includes non-linear activation functions, which automatically have much the same effect as manually creating polynomial features.
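If you do want to engineer polynomial features by hand for only some columns, here is a minimal sketch using scikit-learn's `PolynomialFeatures` inside a `ColumnTransformer` (the toy data and column indices are made up for illustration):

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import PolynomialFeatures

# Toy data: 5 samples, 3 features
X = np.arange(15, dtype=float).reshape(5, 3)

# Expand only columns 0 and 1 to degree-2 polynomials; pass column 2 through.
ct = ColumnTransformer(
    [("poly", PolynomialFeatures(degree=2, include_bias=False), [0, 1])],
    remainder="passthrough",
)
X_poly = ct.fit_transform(X)

# Degree-2 expansion of 2 features -> x1, x2, x1^2, x1*x2, x2^2 (5 columns),
# plus the untouched passthrough column -> 6 columns total.
print(X_poly.shape)  # (5, 6)
```

This way you can try polynomial terms on a subset of features (e.g. ones a domain expert flags) and compare validation error against the plain model.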
Thanks, Tom.
I appreciate that you replied so quickly.
@Hritik007 just to add to what @TMosh says (and I agree): running PCA will give you a general sense of how much each variable contributes. A NN is fine too, but you need a fair amount of data for it to really take off.
If your study is data-poor, more traditional ML methods (SVM, KNN, etc.) might work better.
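To make the PCA suggestion concrete, here is a small sketch (synthetic data, made-up variance scales) showing how the explained-variance ratios and component loadings hint at which raw features dominate. Note this works on unscaled data, so it mostly reflects variance scale; with standardized features the loadings tell you about correlation structure instead:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Toy dataset: 100 samples, 4 features; feature 0 given much larger variance
X = rng.normal(size=(100, 4))
X[:, 0] *= 10

pca = PCA()
pca.fit(X)

# Fraction of total variance captured by each principal component
print(pca.explained_variance_ratio_)

# The loadings (pca.components_) show how strongly each original feature
# contributes to each component; here feature 0 dominates PC1.
print(np.abs(pca.components_[0]).argmax())  # 0
```

Features with large loadings on the leading components are natural first candidates for polynomial terms, though you should still confirm any transformation with cross-validated error.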