Should we scale the training outputs for model training as well? Or just the input features?

Hi, I’m doing the ML Specialization course. In the week 2 optional lab assignments, I noticed that the outputs are not scaled during training, and I’m a bit confused about how the model still fits the data and about the reasoning behind scaling (or not scaling) the outputs.

So my question is: in general, should we scale the training set outputs or not, and why? And how does the model manage to fit the data with scaled inputs but unscaled outputs?

Input features are scaled in some learning algorithms so that the algorithm converges faster, since all the features then lie on a similar scale. It’s not necessary to scale the output, though: the learned weights and bias simply grow to match the scale of the target, so leaving the output unscaled doesn’t hurt convergence the way mismatched input scales do. Also, scaling the output changes the interpretation of the model’s predictions, since every prediction would then have to be transformed back into the original units.
Remember to scale new inputs with the same parameters (e.g., the training mean and standard deviation) before using them for prediction.
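Here is a minimal sketch of the idea with NumPy and made-up house-price numbers (not the lab's actual data): the inputs are z-score normalized, the targets are left in dollars, and plain gradient descent still fits because w and b absorb the targets' scale. Note that the new input is normalized with the same mu and sigma from training:

```python
import numpy as np

# Hypothetical training data: house sizes in sqft (inputs) and prices
# in dollars (targets). The targets are deliberately left unscaled.
X_train = np.array([[1000.0], [1500.0], [2000.0], [2500.0]])
y_train = np.array([200000.0, 290000.0, 410000.0, 500000.0])

# Z-score normalize the inputs only
mu = X_train.mean(axis=0)
sigma = X_train.std(axis=0)
X_norm = (X_train - mu) / sigma

# Plain batch gradient descent for linear regression; the targets'
# large scale is absorbed by w and b, so no output scaling is needed.
w = np.zeros(X_norm.shape[1])
b = 0.0
alpha = 0.1
m = len(y_train)
for _ in range(1000):
    err = X_norm @ w + b - y_train     # shape (m,)
    w -= alpha * (X_norm.T @ err) / m  # gradient w.r.t. w
    b -= alpha * err.mean()            # gradient w.r.t. b

# Predict on a new input: reuse the SAME mu and sigma from training
x_new = (np.array([[1800.0]]) - mu) / sigma
print(x_new @ w + b)                   # prediction in original dollar units
```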


I agree, scaling the outputs is not usually necessary.
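To illustrate the interpretation point made above: if you did scale the targets, the model's raw predictions would come out in scaled units and would have to be mapped back before they mean anything. A small sketch with hypothetical numbers:

```python
import numpy as np

# Hypothetical target values (same dollar prices as above)
y_train = np.array([200000.0, 290000.0, 410000.0, 500000.0])
y_mu, y_sigma = y_train.mean(), y_train.std()
y_scaled = (y_train - y_mu) / y_sigma   # targets now roughly in [-2, 2]

# ...suppose a model trained on y_scaled predicts this for a new house:
y_pred_scaled = 0.3                     # hypothetical output in scaled units

# The raw 0.3 is not a price; it must be mapped back to dollars:
y_pred = y_pred_scaled * y_sigma + y_mu
print(y_pred)                           # now interpretable as a price
```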