De-normalization of Data

Hi everyone! I have two questions, actually:

  1. First, we train our ANN using normalized input data and the corresponding output data. Once training is done, it can also predict outputs for unknown input values… am I right?
  2. How can we perform de-normalization of the “Z-score” in Python?

Hello @Muhammad_Azhar_Ghaur,

We normalize the training input features, but we do not normalize the training output, and then we train the model. To make predictions on new samples, we normalize their features in the same way we normalized the training data, then make predictions on those normalized samples and get the predicted outputs. There is no “de-normalization” needed for those predicted outputs.
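In code, that workflow looks roughly like this. This is only a minimal sketch; scikit-learn’s LinearRegression and the toy data are just stand-ins for whatever model and dataset you actually use:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# toy data just for illustration
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 3))
y_train = X_train @ np.array([2.0, -1.0, 0.5]) + 3.0
X_new = rng.normal(size=(5, 3))

# z-score statistics computed from the TRAINING features only
mean = X_train.mean(axis=0)
std = X_train.std(axis=0)

X_train_norm = (X_train - mean) / std                   # normalized training inputs
model = LinearRegression().fit(X_train_norm, y_train)   # targets stay as they are

# new samples are normalized with the SAME mean/std, then predicted directly
X_new_norm = (X_new - mean) / std
y_pred = model.predict(X_new_norm)                      # no de-normalization needed
```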

Raymond


By the way, @Muhammad_Azhar_Ghaur, if you have a dataset X and you normalize it by X_norm = (X - mean) / std, then you only need X_norm * std + mean to get X back.
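In Python, that round trip can be done with plain NumPy, for example:

```python
import numpy as np

X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])

mean = X.mean(axis=0)
std = X.std(axis=0)

X_norm = (X - mean) / std      # z-score normalization
X_back = X_norm * std + mean   # de-normalization: recovers the original X

print(np.allclose(X, X_back))  # True
```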

That’s great and really helpful. Please clear up one more doubt for me:
Is it necessary to normalize the target data during the training process?

thanks!

Hello @Muhammad_Azhar_Ghaur,

No, it is not necessary, and it is not needed at all.

Raymond


Hello @Muhammad_Azhar_Ghaur,

Even though we have normalized or scaled the input features, the learning algorithm can still adjust the weights and bias to bring the predictions to the order of magnitude of the target variable, so as to minimize the error between the predicted and target values.
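As a small illustration of that point (scikit-learn is only used here for convenience), the learned weights and bias land in the thousands so that the fit matches an unscaled target:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 5000.0 * X[:, 0] - 3000.0 * X[:, 1] + 10000.0    # targets in the thousands

X_norm = (X - X.mean(axis=0)) / X.std(axis=0)        # features on a ~unit scale
model = LinearRegression().fit(X_norm, y)

print(model.coef_, model.intercept_)   # weights/bias absorb the target's scale
print(model.score(X_norm, y))          # ~1.0: fit is fine without scaling y
```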


Thanks a lot… all my confusions are now cleared up.

Thank you so much, dear… I have one more question.

  1. Do we use the backpropagation algorithm only for the output layer…?

Thanks!

The backpropagation algorithm applies to every single layer of the NN, not just to the output layer.

To update the weights and bias at the output layer, we first find \frac{\partial J}{\partial W} and \frac{\partial J}{\partial b} at the output layer. But instead of stopping there and updating the weights and bias only at the output layer, we next find \frac{\partial J}{\partial W} and \frac{\partial J}{\partial b} at the layer before the output layer, then at the layer before that, and so on, all the way back to the first layer.

In this way, we are able to update the weights and bias at every layer, for every round of the parameter update cycle.
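Here is a minimal NumPy sketch of that idea for a two-layer network (sigmoid hidden layer, linear output, squared-error cost). It is only an illustration of how the gradients flow back through every layer, not the exact implementation from the course:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: 4 samples, 3 features, 1 regression target
X = rng.normal(size=(4, 3))
y = rng.normal(size=(4, 1))

# 2-layer network: 3 -> 5 (sigmoid) -> 1 (linear)
W1, b1 = rng.normal(size=(3, 5)) * 0.1, np.zeros((1, 5))
W2, b2 = rng.normal(size=(5, 1)) * 0.1, np.zeros((1, 1))
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(100):
    # forward pass
    Z1 = X @ W1 + b1
    A1 = sigmoid(Z1)
    Z2 = A1 @ W2 + b2                      # linear output = prediction
    m = X.shape[0]

    # backward pass: gradients at the output layer first ...
    dZ2 = (Z2 - y) / m                     # dJ/dZ2 for the squared-error cost
    dW2 = A1.T @ dZ2                       # dJ/dW at the output layer
    db2 = dZ2.sum(axis=0, keepdims=True)   # dJ/db at the output layer

    # ... then propagate the error back to the hidden layer
    dA1 = dZ2 @ W2.T
    dZ1 = dA1 * A1 * (1 - A1)              # sigmoid derivative
    dW1 = X.T @ dZ1                        # dJ/dW at the hidden layer
    db1 = dZ1.sum(axis=0, keepdims=True)   # dJ/db at the hidden layer

    # gradient-descent update of EVERY layer's weights and bias
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```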

Alright! Thank you so much, dear… I’ll come back to you if I have further queries.