Z-score normalization

Good day everyone. In Course 1 of the ML Specialization, Mr Andrew talked about normalizing the feature variables for linear regression. But my target variable has a large spread of values, which in turn slows down gradient descent and results in large values.
Can I also normalize my target variable? Will it affect my predictions? And how do I go about it, please?

Yeah, it's still a good idea to normalize the inputs because it helps the model converge faster and more easily.

For the outputs, you can use a final TF Lambda layer that multiplies the output by a factor around the magnitude of the labels, so the predictions come out close to the actual label values.
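A minimal sketch of that idea, assuming a Keras Sequential model, a made-up input shape, and a made-up scale factor chosen to match the rough magnitude of the labels:

```python
import tensorflow as tf

# Hypothetical scale factor, roughly the magnitude of the labels
# (e.g. house prices in the hundreds of thousands).
OUTPUT_SCALE = 100_000.0

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(8,)),
    tf.keras.layers.Dense(1),
    # Scale the raw output up to the magnitude of the labels.
    tf.keras.layers.Lambda(lambda x: x * OUTPUT_SCALE),
])
model.compile(optimizer='adam', loss='mse')
```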

Alternatively, you could normalize the target variable as well; I guess that's easier than my first suggestion.

Hello @Abdulraqib_Omotosho,

Your targets shouldn’t slow down gradient descent, because the gradient scales with the errors and consequently with your targets. There is no need to normalize the target. I am making this comment because if you think that your training is unreasonably slow, then the problem probably lies elsewhere. For example, the learning rate could be too low; or maybe it is not too slow but reasonable - how do you know it is too slow?

Having said that there is no need to normalize the target, you can always do it. You normalize it just like you normalize a feature, but remember to keep the normalization constants so that you can use them to “de-normalize” the model’s predictions. For example, if you normalize by (y - y_mean) / y_std, then you will need to compute y_pred * y_std + y_mean to de-normalize.
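A small sketch of that in NumPy, assuming a 1-D array of targets (the values here are made up for illustration):

```python
import numpy as np

# Hypothetical raw target values from the training set.
y = np.array([150_000.0, 320_000.0, 275_000.0, 410_000.0])

# Keep the normalization constants so predictions can be de-normalized later.
y_mean = y.mean()
y_std = y.std()
y_norm = (y - y_mean) / y_std   # train the model on y_norm

# After predicting in the normalized space, map back to the original scale.
y_pred_norm = np.array([0.3, -0.8])       # example model outputs
y_pred = y_pred_norm * y_std + y_mean     # de-normalized predictions
```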

Raymond


OK, thanks @rmwkwok. I’ll try it out.