Hi there
You can see it by evaluating the performance metrics of your model after training. Let's assume a regression model would solve your business problem if a certain KPI is fulfilled (e.g. temperature error < 2 K for all predictions).
After training you can evaluate your model performance, e.g.:
- on all training data
- on all test data
and analyse the residuals. You can then compare the residuals against your defined success criterion (2 K); a rough sketch of that check is shown below.
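A minimal sketch of how such a check could look, assuming a scikit-learn-style regressor and hypothetical arrays `X_train`/`y_train`/`X_test`/`y_test` (names are placeholders, not from your code):

```python
import numpy as np

def check_kpi(model, X, y, threshold_kelvin=2.0):
    """Check whether every prediction is within the 2 K success criterion."""
    residuals = y - model.predict(X)           # residuals in Kelvin
    max_abs_error = np.max(np.abs(residuals))  # worst-case prediction error
    print(f"max |residual| = {max_abs_error:.2f} K")
    return max_abs_error < threshold_kelvin

# e.g.
# check_kpi(model, X_train, y_train)  # on all training data
# check_kpi(model, X_test, y_test)    # on all test data
```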
(Note: while you can evaluate on your training data multiple times, you are only allowed to use the test data for one final test. So we need to make sure no information from the test set leaks into our training or hyperparameter tuning process. This is why we often keep a third set, the validation data.)
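If you want to set up such a three-way split, one common way is to call scikit-learn's `train_test_split` twice (the array names and split ratios here are just example assumptions):

```python
from sklearn.model_selection import train_test_split

# first split off the test set (used only once, for the final evaluation)
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# then split the remainder into training and validation data
# (the validation set is used for model selection / hyperparameter tuning)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.25, random_state=42)
```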
You might of course also want to take a look at the performance metrics during training and compare them. Here you see an example of overfitting, indicated by the test loss being much higher than the training loss, meaning the model does not generalise well:
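In case you want to produce such a training/validation loss comparison yourself, here is a rough sketch. It assumes a compiled Keras `model` and the splits from above; your framework may differ:

```python
import matplotlib.pyplot as plt

# train while monitoring the loss on held-out validation data
history = model.fit(X_train, y_train,
                    validation_data=(X_val, y_val),
                    epochs=100, verbose=0)

plt.plot(history.history["loss"], label="training loss")
plt.plot(history.history["val_loss"], label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()

# a validation loss that stays much higher than the training loss
# (or starts rising while the training loss keeps falling) points to overfitting
```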
This thread might be interesting for you:
Best
Christian