I have watched and read a few tutorials on exploratory data analysis (EDA), and they recommend exploring patterns in the data with pandas to select better features for the models.
I wonder: would a correlation matrix alone be enough for EDA, especially for linear regression and logistic regression problems? A relatively high negative or positive correlation would imply a strong relationship between the label and the feature. So why not just use that as a start and then maybe try polynomial features?
p.s. I am working through the famous Kaggle toy datasets these days. Would it be appropriate to post my notebook here and ask for feedback?
Hi Mehmet! I hope that you are doing well.
Generally, we use a correlation matrix to study the correlation among independent variables (among features) and between independent and dependent variables (i.e. between features and our target variable). We do this for the following reasons:
Correlation between features and the target variable: we check which features have a high correlation with the output variable → these are likely to be effective features that play a main part in predicting our output.
Correlation among the features: this is done in order to drop highly correlated features. Why drop them? Say X1 and X2 are highly correlated; then they will have a similar effect on the output variable, and the change in performance between using both X1 and X2 versus using only one of them tends to be negligible. So we can at least reduce the computational cost by dropping one of them.
So as you can see, we decide based on both mathematical ideas and some intuition. We can't simply say that a correlation study alone concludes our EDA; for some cases it might be enough and for others it might not. It entirely depends on the dataset. So, as you said, we can start at any one point, evaluate our performance, and then look at other methods to improve the quality of our data. It's clearly up to you how to proceed, but these common steps (a correlation study, etc.) tend to give you a good start.
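The two checks above can be sketched with pandas like this (toy synthetic data; the column names and the 0.9 threshold are made up for illustration):

```python
import numpy as np
import pandas as pd

# Toy data: X2 is almost a copy of X1, X3 is independent noise
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
df = pd.DataFrame({
    "X1": x1,
    "X2": x1 + rng.normal(scale=0.05, size=200),  # highly correlated with X1
    "X3": rng.normal(size=200),
})
df["target"] = 3 * df["X1"] + rng.normal(scale=0.1, size=200)

# Check 1: correlation of each feature with the target
corr = df.corr()
print(corr["target"].sort_values(ascending=False))

# Check 2: drop one feature from each pair whose mutual |corr| is too high
features = corr.drop(columns="target", index="target").abs()
upper = features.where(np.triu(np.ones(features.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.9).any()]
print(to_drop)  # X2 gets dropped here, since it duplicates X1
```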
Well, this is just my opinion; do share your thoughts too if you have different things in mind.
Correlation and scatter matrices help a lot, as you mentioned, e.g. for linear regression. For example, you could apply feature engineering techniques to describe the non-linearity already present in your features, then check and adapt your feature design based on your visualization. Here is also a relevant thread for you:
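As a quick sketch of the scatter-matrix idea (synthetic data): the x-vs-y panel reveals a parabola, which suggests adding a squared feature, even before computing any correlation numbers.

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # render off-screen
from pandas.plotting import scatter_matrix

rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, size=300)
df = pd.DataFrame({"x": x, "y": x**2 + rng.normal(scale=0.5, size=300)})

# One pairwise scatter panel per feature pair; the x-vs-y panel
# shows a clear parabolic shape
axes = scatter_matrix(df, figsize=(6, 6))
```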
For linear regression and logistic regression, what else can we observe that provides more insight for feature selection than the corr. matrix?
I am asking this because in most of the tutorials, what I understand is that visualization and grouping/sorting only seem to provide a manual analysis to speculate on the existence of a correlation between the independent and dependent variables. So I figured, why not just use the corr. matrix?
Then I thought I could use polynomial features to see if they help lower the cost.
But then @Christian_Simonis's response made me think that maybe we should also consider feature crosses beyond polynomial features. Then maybe visualization, as in Christian's repo, may help figure out which ones to select?
But then… can't I also do that with the correlation matrix again, by randomly adding some feature crosses to the dataframe and checking their correlations?
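Concretely, that idea could look like this sketch (synthetic data, where the target secretly depends on the product x1*x2):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
df = pd.DataFrame(rng.normal(size=(500, 2)), columns=["x1", "x2"])
# Hidden relationship: the target depends on the product x1*x2
df["target"] = df["x1"] * df["x2"] + rng.normal(scale=0.1, size=500)

# Neither raw feature correlates with the target...
print(df.corr()["target"])

# ...but the feature cross, once added as a column, does
df["x1_x2"] = df["x1"] * df["x2"]
print(df.corr().loc["x1_x2", "target"])  # close to 1
```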
After all, visualization also depends on our intuition about which features to visualize, right?
Okay… part of this is about why we should bother so much with learning pandas and seaborn etc., and the other part is genuine curiosity.
p.s. @Nithin_Skantha_M your point on using the corr matrix to analyze the relationships among the independent variables is great. I hadn't thought about it. It seems really useful.
ps2 @Christian_Simonis I will check your repo in more depth. It seems very informative! Thank you for sharing.
What I mean by this: correlation captures only the linear dependency between features, but you can also have highly structured patterns that are not revealed by looking at the Pearson correlation coefficient alone (e.g. when the relationships are highly non-linear):
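A minimal numeric demonstration of this point (synthetic data): y is perfectly determined by x, yet the Pearson coefficient is near zero, and only becomes informative after a non-linear transform.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, size=1000)
y = x**2  # perfectly determined by x, but not linearly

df = pd.DataFrame({"x": x, "y": y})
# Pearson sees "no relationship" between x and y
print(df.corr().loc["x", "y"])
# After the transform x -> x^2, the relationship is linear and obvious
print(df.assign(x_sq=x**2).corr().loc["x_sq", "y"])
```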
I would suggest that iteratively enhancing features (while checking the visualisations) makes sense, systematically bringing more domain knowledge into your features. Feature crosses are one way, but you are not limited to them. Feel free to use any mathematical operations, or physical or other domain models, that make sense to derive good features. My experience is that signal processing techniques (which you learn in mechatronics or system theory) are especially helpful. For example, a Fourier transform can come in handy when you work with oscillating systems and want to use your model for predictive analytics of a steady-state system.
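As a minimal sketch of the Fourier-transform idea (the sampling rate and the 7 Hz signal are made-up values for illustration): the dominant oscillation frequency of a signal can be extracted and used as an engineered feature.

```python
import numpy as np

fs = 100.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)   # 2 seconds of uniformly sampled data
signal = np.sin(2 * np.pi * 7.0 * t) \
    + 0.3 * np.random.default_rng(4).normal(size=t.size)

# Magnitude spectrum of the real-valued signal
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)

# Skip the DC bin and pick the strongest frequency component
dominant_freq = freqs[np.argmax(spectrum[1:]) + 1]
print(dominant_freq)  # ~7 Hz, usable as an engineered feature
```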
Here in the following picture, I understand that the decision boundary is circular (something like x1^2 + x2^2, right?). So visualization helps. In this visualization, you intentionally chose two features to visualize the non-linearity, but what exactly led you to choose those two out of a pool of m features? Moreover, if the correct model were something like x1^4 + x1^2 * x3^4, would we be able to visualize it? Wouldn't we be forced to find a non-visual method for feature engineering?
Wouldn't a better approach be to add polynomials of the features (that already seem important) to the dataframe and see whether the new (linearized) features correlate? I just presume degree 3 would be enough to avoid overfitting.
Maybe that wouldn't solve the problem either, as it doesn't address selecting among the possible feature multiplication options…
As for signal processing techniques for ML applications, I have no idea what they are, but I will google them.
p.s. I will read through the thread tomorrow with a fresh mind to gain more insight into the problem.
When analysing the residuals, you do not want to see any systematic pattern. If the model did a bad job, you would see at least some pattern in the residual data rather than random "white noise".
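A small sketch of that residual check (synthetic data; the true relationship is quadratic, but we deliberately fit a plain linear model): the residuals still correlate strongly with x^2, i.e. they are structured, not white noise, which signals a missing feature.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
x = rng.uniform(-3, 3, size=500)
y = x**2 + rng.normal(scale=0.3, size=500)  # truly quadratic

# A plain linear fit misses the curvature...
lin = LinearRegression().fit(x.reshape(-1, 1), y)
resid = y - lin.predict(x.reshape(-1, 1))

# ...so the residuals still carry systematic structure:
# they correlate strongly with x^2 instead of looking like noise
print(np.corrcoef(resid, x**2)[0, 1])
```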
This can absolutely help in data understanding.
Why do you think it would be “better” to add new features?
I think the transformation, without growing the dimensional space, already makes the data linearly separable in a minimum-dimensional space. To me it seemed quite elegant this way. But many approaches can solve the issue.
In general: the most suitable approach depends on the data as well as the business problem you are solving. Often in practice it is sufficient to find a solution which is just "good enough".
I proposed this approach to explore the existence of a possible non-linear relationship. So maybe x doesn't correlate with the target value, but maybe x^2 does. But then I thought: how can we explore a possible non-linearity such as x1 * x2 with my method? I don't know.
For really big feature sets, I suspect visualization wouldn't tell us much about selecting the features. So, I speculate, a more generalized mathematical method for feature selection is needed.
I also didn't get what you mean by "transformation", but I asked about it in the thread you referred to, since you already have an explanatory post there. I mean, I get the reason: it saves you from non-linearity via feature engineering. But what is the method for transforming?
p.s. Maybe another, safer method could be lasso regularization, to prune all the irrelevant features from the model?
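A minimal sketch of that lasso idea with scikit-learn's LassoCV (synthetic data; only two of ten candidate features are informative): the L1 penalty drives irrelevant coefficients to (near) zero, effectively doing feature selection.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 10))  # 10 candidate features
y = 4 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=300)

# LassoCV picks the regularization strength by cross-validation
# and shrinks uninformative coefficients to zero
model = LassoCV(cv=5).fit(X, y)
print(np.round(model.coef_, 2))

kept = np.flatnonzero(model.coef_ != 0)
print(kept)  # mostly the informative features 0 and 1
```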