Train error is useless

Hi @empheart

It depends on what you call “useless”. It definitely tells you many things. For one, you can measure how much your model overfits by comparing it against the cross-validation error (if you didn’t have the training loss, how would you know?). The rate at which it goes down is also informative. You can compare it against pure-chance performance to see how much you are actually improving. And there are many other uses.
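To make the overfitting point concrete, here is a minimal toy sketch (my own illustration, not from the thread): a polynomial flexible enough to interpolate its training points has a near-zero train error, and the gap between that and the held-out error is exactly the overfitting signal you could not see without tracking train error.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_sine(x):
    """Toy ground truth: a sine curve with Gaussian noise."""
    return np.sin(x) + rng.normal(0.0, 0.1, size=x.shape)

# 8 training points and 8 held-out test points from the same curve
x_train = np.linspace(0.0, 3.0, 8)
y_train = noisy_sine(x_train)
x_test = np.linspace(0.1, 2.9, 8)
y_test = noisy_sine(x_test)

# A degree-7 polynomial through 8 points can interpolate the
# training data exactly -- the classic overfitting setup.
coeffs = np.polyfit(x_train, y_train, deg=7)

def mse(xs, ys):
    return float(np.mean((np.polyval(coeffs, xs) - ys) ** 2))

train_err = mse(x_train, y_train)
test_err = mse(x_test, y_test)

# Tiny train error plus a much larger test error is the gap you
# only notice because train error was measured at all.
print(f"train MSE: {train_err:.6f}")
print(f"test  MSE: {test_err:.6f}")
```

The absolute value of the train error means little here; it is the comparison against the held-out error that carries the information.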

But if you are purely talking about how much value your model will bring to the end product, then training error is not that important (test error is the one you would be looking at).

I guess what I’m saying is that train error matters (a lot) for your “1.” point.