I am confused about the difference between the MSE loss function and the MSE metric used to evaluate the performance of a model. Given that I use an MSE loss function while fitting my model, what is the exact difference between that and the MSE metric I can use to evaluate my training and validation error after fitting? Is the final MSE loss always the same as the MSE metric? And is the loss the same as an error in this case? If possible it would be great to explain with a medical example!

The MSE loss function and the MSE metric both compute the Mean Squared Error, but they serve different purposes. The MSE loss is used during training to adjust the model's weights by minimizing the difference between predicted and true values. The MSE metric evaluates how well the model performs on training or validation data after fitting.
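Since you asked for a medical example: here is a minimal sketch, with made-up blood-pressure numbers, showing that the loss and the metric use the very same formula; only the context (during training vs. after fitting) differs.

```python
import numpy as np

# Hypothetical data: true systolic blood pressure (mmHg) vs. model predictions
y_true = np.array([120.0, 135.0, 110.0, 150.0])
y_pred = np.array([118.0, 140.0, 112.0, 145.0])

# The same formula serves as the loss (minimized during training)
# and as the metric (reported after fitting)
mse = np.mean((y_true - y_pred) ** 2)
print(mse)  # → 14.5
```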

Hope it helps! Feel free to ask if you need further assistance.

Thank you for your answer! Yes, that helped very much! So is the assumption correct, then, that the value of the "final" MSE loss of the training (last epoch) has the same value as the MSE when I calculate it for the training data after fitting the model?

Not necessarily. The final MSE loss reported during training does not always match the MSE calculated afterwards on the training data. The training loss may include extra components such as regularization (e.g., an L2 penalty / weight decay), and it is typically averaged over mini-batches while the weights are still being updated, whereas the post-fit metric is computed with the final weights. Without regularization or such effects, the two values should be very close.
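A small sketch of the regularization point, with arbitrary weights and a made-up penalty strength: the training objective includes the L2 term, so it is strictly larger than the plain MSE metric computed on the same predictions.

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 1.9, 3.2])
weights = np.array([0.5, -0.3])  # hypothetical model weights

# Plain MSE: this is what the post-fit metric reports
mse = np.mean((y_true - y_pred) ** 2)

# Training loss with an L2 penalty (lambda chosen arbitrarily for illustration)
lam = 0.01
training_loss = mse + lam * np.sum(weights ** 2)

print(mse, training_loss)  # the regularized loss is larger than the bare MSE
```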

The MSE loss function is one of several loss functions, and the MSE metric is one of several metrics.

You can always train your model to minimize a loss function other than the MSE, for example the Huber loss, or the loss function of a regularized model (L1 or L2 regularization), and use the MSE metric to evaluate its performance on the validation and test sets.
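To make the Huber example concrete, here is a sketch with made-up numbers (including one outlier, e.g. a mis-recorded lab value): the Huber loss penalizes the outlier linearly, so it stays much smaller than the MSE metric computed on the same predictions.

```python
import numpy as np

def huber(y_true, y_pred, delta=1.0):
    """Huber loss: quadratic for errors <= delta, linear beyond that."""
    err = y_true - y_pred
    small = np.abs(err) <= delta
    return np.mean(np.where(small,
                            0.5 * err ** 2,
                            delta * (np.abs(err) - 0.5 * delta)))

# Hypothetical data with one large outlier
y_true = np.array([2.0, 3.0, 2.5, 10.0])
y_pred = np.array([2.1, 2.9, 2.6, 3.0])

huber_loss = huber(y_true, y_pred)          # robust training objective
mse_metric = np.mean((y_true - y_pred)**2)  # evaluation metric

print(huber_loss, mse_metric)  # MSE is inflated by the squared outlier
```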

Conversely, you can evaluate the model using the Mean Absolute Error (MAE) metric while training it with the MSE loss.

The reason for this kind of exercise is that the data might be skewed or contain outliers, in which case you may want to use one loss function during training and a different metric for evaluation afterwards.

A marked difference between the MAE and the MSE metrics on the test set, when the model was trained with the MSE loss, is a sign of a problem. So even though you trained with the MSE loss, comparing the MSE and MAE metrics on the test set is a good way of checking for skewness, outliers, or other sorts of non-normality in the training or test sets.
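A quick illustration of that diagnostic, with made-up glucose-level numbers containing one outlier: the RMSE (square root of MSE, so it is on the same scale as MAE) is blown up by the single large squared error, while the MAE barely moves. A large RMSE/MAE gap like this flags heavy-tailed errors.

```python
import numpy as np

# Hypothetical test-set values: mostly small errors, one large outlier
y_true = np.array([100.0, 102.0, 98.0, 101.0, 160.0])
y_pred = np.array([101.0, 101.0, 99.0, 100.0, 105.0])

mae  = np.mean(np.abs(y_true - y_pred))
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))

# RMSE >> MAE indicates the squared errors are dominated by a few outliers
print(mae, rmse)
```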