First, we train our ANN using normalized input data and the corresponding output data. Once training is done, it can also predict outputs for unknown inputs… am I right?

And how can we do denormalization of the “Z-score” in Python?

We normalize the training input features, we do not normalize the training outputs, and then we train the model. To make predictions on new samples, we normalize those samples’ features in the same way we normalized the training data (using the mean and standard deviation computed from the training set), run the model on the normalized samples, and get the predicted outputs. No “denormalization” is needed for those predicted outputs, because the targets were never scaled in the first place.
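The workflow above can be sketched in a few lines of NumPy. This is a minimal illustration, not the course's exact code; the data values and variable names are made up for the example:

```python
import numpy as np

# Hypothetical training data: 2 input features and an unscaled target
X_train = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])
y_train = np.array([10.0, 20.0, 30.0])

# Fit the normalization statistics on the TRAINING set only
mean = X_train.mean(axis=0)
std = X_train.std(axis=0)
X_train_norm = (X_train - mean) / std

# ... train the model on (X_train_norm, y_train) ...

# At prediction time, reuse the SAME mean/std on the new samples
X_new = np.array([[1.5, 250.0]])
X_new_norm = (X_new - mean) / std

# The model's predictions are already in the target's units,
# so there is nothing to denormalize on the output side.
```

The key point is that `mean` and `std` come from the training set and are reused as-is at prediction time; recomputing them on the new samples would put those samples on a different scale than the one the model was trained with.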

By the way, @Muhammad_Azhar_Ghaur , if you have a dataset X, and you normalize it by X_norm = (X - mean) / std, then you only need X_norm * std + mean to get X back.
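In Python that inversion is a one-liner with NumPy. A minimal sketch, using made-up example values:

```python
import numpy as np

X = np.array([5.0, 10.0, 15.0, 20.0])

# Z-score normalization
mean = X.mean()
std = X.std()
X_norm = (X - mean) / std

# Denormalization: invert the transform
X_back = X_norm * std + mean

print(np.allclose(X_back, X))  # True: the original values are recovered
```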

Even though we have normalized or scaled the input features, the learning algorithm can still use the weights and biases to bring the activations to the order of magnitude of the target variable, so as to minimize the error between the predicted and target values.

The backpropagation algorithm applies to every single layer of the NN, not just to the output layer.

To update the weights and biases at the output layer, we first find \frac{\partial J}{\partial w} and \frac{\partial J}{\partial b} at the output layer. But instead of stopping there and updating only the output layer’s weights and biases, we next find \frac{\partial J}{\partial w} and \frac{\partial J}{\partial b} at the layer before the output layer, then at the layer before that, and so on, all the way back to the first layer.

In this way, we are able to update the weights and biases at every layer, on every round of the parameter-update cycle.
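The layer-by-layer flow described above can be sketched with a tiny two-layer network in NumPy. This is an illustrative sketch only (the shapes, ReLU activation, and squared-error cost are assumptions, not the course's exact setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny network: 3 inputs -> 5 hidden units (ReLU) -> 1 linear output
X = rng.normal(size=(4, 3))        # 4 samples, 3 features
y = rng.normal(size=(4, 1))

W1 = rng.normal(size=(3, 5)); b1 = np.zeros((1, 5))
W2 = rng.normal(size=(5, 1)); b2 = np.zeros((1, 1))

# Forward pass
Z1 = X @ W1 + b1
A1 = np.maximum(0, Z1)             # ReLU hidden layer
Z2 = A1 @ W2 + b2                  # linear output
J = np.mean((Z2 - y) ** 2) / 2     # squared-error cost

m = X.shape[0]

# Backward pass: gradients at the OUTPUT layer first ...
dZ2 = (Z2 - y) / m
dW2 = A1.T @ dZ2                   # dJ/dW2
db2 = dZ2.sum(axis=0, keepdims=True)

# ... then propagated to the PREVIOUS layer
dA1 = dZ2 @ W2.T
dZ1 = dA1 * (Z1 > 0)               # ReLU derivative
dW1 = X.T @ dZ1                    # dJ/dW1
db1 = dZ1.sum(axis=0, keepdims=True)

# Every layer's parameters are updated in the same cycle
lr = 0.01
W2 -= lr * dW2; b2 -= lr * db2
W1 -= lr * dW1; b1 -= lr * db1
```

Notice that `dW1` cannot be computed without first computing `dZ2` and propagating it through `W2`; that ordering, from the output layer back to the first, is exactly why the algorithm is called backpropagation.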