Let us say that we scale the features in the training set X (with features x1 and x2) using z-score normalization in order to improve gradient descent for logistic regression. This gives the scaled X_s, and after fitting we get w_s and b_s, both affected by the scaling.

Suppose we want to plot the decision boundary in terms of the original features (X). How do we use the resulting parameters w_s and b_s to plot this boundary?

After you get the weights w and b, the model's linear term is z = w*x + b. If you normalized a feature x and want to reconstruct the original data, any normalization utility in a library like sklearn lets you reverse the transformation with the built-in inverse_transform(...) method, like this example:
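A minimal sketch of the idea above, assuming sklearn's `StandardScaler` (the toy data is made up for illustration): `fit_transform` applies z-score normalization, and `inverse_transform` recovers the original feature values.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# hypothetical toy data with two features x1, x2
X = np.array([[1.0, 50.0],
              [2.0, 60.0],
              [3.0, 80.0],
              [4.0, 90.0]])

scaler = StandardScaler()
X_s = scaler.fit_transform(X)           # z-score normalization: (x - mean) / std
X_back = scaler.inverse_transform(X_s)  # recover the original features

print(np.allclose(X, X_back))  # True
```

Note this inverts the *data* transformation; it does not directly un-scale the learned parameters w_s and b_s.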

Hi Abdelrahmman,
Thank you for the response! I understand; so we can transform X_s back to X using scikit-learn.
Do you know if we can do this inverse transformation to w_s and b_s (transform back to w and b)?

The parameters of the model are scaled along with the data: if you scale the features, the learned parameters end up scaled the opposite way. Since z-score normalization gives x_s,j = (x_j - mu_j) / sigma_j, substituting into z = sum_j w_s,j * x_s,j + b_s shows that the parameters in the original feature space are w_j = w_s,j / sigma_j and b = b_s - sum_j w_s,j * mu_j / sigma_j. Please check this post; it's a very useful post.
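The substitution above can be checked numerically. This is a sketch assuming sklearn's `StandardScaler` and `LogisticRegression` with synthetic data (all names and values here are illustrative): un-scaling the fitted parameters with w = w_s / sigma and b = b_s - sum(w_s * mu / sigma) gives the same decision values on the original X, so the boundary w*x + b = 0 can be plotted directly in the original feature space.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# hypothetical toy data: two features on very different scales
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) * [2.0, 30.0] + [5.0, 100.0]
y = (X[:, 0] + 0.05 * X[:, 1] > 10).astype(int)

scaler = StandardScaler()
X_s = scaler.fit_transform(X)

clf = LogisticRegression().fit(X_s, y)
w_s, b_s = clf.coef_[0], clf.intercept_[0]

mu, sigma = scaler.mean_, scaler.scale_  # per-feature mean and std

# undo the scaling: w = w_s / sigma, b = b_s - sum(w_s * mu / sigma)
w = w_s / sigma
b = b_s - np.sum(w_s * mu / sigma)

# both parameterizations produce identical linear terms,
# hence the same decision boundary
z_scaled = X_s @ w_s + b_s  # computed in scaled space
z_orig = X @ w + b          # computed in original space
print(np.allclose(z_scaled, z_orig))  # True
```

With w and b in hand, the boundary in the original (x1, x2) plane is the line x2 = -(w[0] * x1 + b) / w[1].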