Decision boundary for logistic regression with scaled features

Let's say we scale the features in a training set X (with features x1 and x2) using z-score normalization in order to improve gradient descent for logistic regression. This gives us a scaled X_s, and after fitting the regression we get w_s and b_s, both of which are affected by the scaling.

Suppose we want to plot the decision boundary in terms of the original features (X). How do we use the resulting parameters w_s and b_s to plot this boundary?

Hi @Marcos_Santos

Welcome to the community!

After training you get the weights w and b, and the model computes z = w·x + b on the normalized features. If you want to map the normalized features back to the original data, and you scaled X with a normalization function from a library such as sklearn, you can reverse the scaling with the built-in inverse_transform(...) method, like this example:

from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
scaled = scaler.fit_transform(df)            # df is your original (unscaled) data
unscaled = scaler.inverse_transform(scaled)  # recovers the original values
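Since the original question used z-score normalization, the same round trip works with StandardScaler. A minimal sketch with made-up data (the array values here are hypothetical):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical training data with two features
X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])

scaler = StandardScaler()
X_s = scaler.fit_transform(X)            # z-score: (x - mean) / std, per column
X_back = scaler.inverse_transform(X_s)   # maps the scaled features back

print(np.allclose(X, X_back))  # True: the original features are recovered
```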

Cheers,
Abdelrahmman

Hi Abdelrahmman,
Thank you for the response! I understand that we can transform X_s back to X using scikit-learn.
Do you know if we can apply a similar inverse transformation to w_s and b_s (to get back w and b)?

Thanks,
Marcos

Hi @Marcos_Santos

The parameters of a linear model are scaled along with the data: if you scale the features, the parameters end up scaled the opposite way. Please check this post; it explains the transformation in detail.
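To make the "opposite way" concrete: with z-score normalization, x_s = (x − mu) / sigma, so w_s·x_s + b_s = (w_s/sigma)·x + (b_s − w_s·(mu/sigma)). That means w = w_s / sigma and b = b_s − w_s·(mu/sigma) define the same decision boundary (w·x + b = 0) in the original feature space. A sketch with hypothetical values for mu, sigma, w_s, and b_s:

```python
import numpy as np

# Hypothetical values: means/stds from the training set and the
# parameters learned on the scaled features.
mu = np.array([3.0, 50.0])      # per-feature means used for z-score scaling
sigma = np.array([1.5, 10.0])   # per-feature standard deviations
w_s = np.array([2.0, -1.0])     # weights learned on scaled features
b_s = 0.5                       # bias learned on scaled features

# Unscale the parameters:
#   w_s @ x_s + b_s = (w_s / sigma) @ x + (b_s - w_s @ (mu / sigma))
w = w_s / sigma
b = b_s - np.dot(w_s, mu / sigma)

# Sanity check: the linear term agrees for any point expressed in
# both coordinate systems, so the boundary w @ x + b = 0 is the same.
x_s = np.array([0.8, -0.3])     # an arbitrary point in scaled coordinates
x = x_s * sigma + mu            # the same point in original units
print(np.isclose(w_s @ x_s + b_s, w @ x + b))  # True
```

With w and b in hand, you can plot the boundary directly against the original x1 and x2 axes, e.g. x2 = −(w[0] * x1 + b) / w[1].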

Thanks,
Abdelrahman

Hi @AbdElRhaman_Fakhry , the post was exactly what I was looking for, thank you!

Cheers,
Marcos

@Marcos_Santos
I'm glad it helped; it was useful to me as well. If you have any other questions, feel free to ask.

Regards,
Abdelrahman