Train until convergence of loss?

According to the video, you must train until the loss or the parameter values converge. But if you do this using the training set, you will overfit the model. You should stop when the validation loss stops decreasing, right?

The method for avoiding overfitting the training set is regularization. It adds an additional cost based on the magnitudes of the weights.
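As a minimal sketch of that idea (the function and variable names here are illustrative, not from the course): an L2-style penalty adds a term proportional to the squared weight magnitudes to the data loss, so larger weights cost more.

```python
# Sketch of L2 regularization: total cost = data loss + penalty on the
# squared magnitudes of the weights. `lam` (the regularization strength)
# is a hypothetical name for this sketch.

def regularized_cost(data_loss, weights, lam):
    """Return data loss plus (lam / 2) * sum of squared weights."""
    penalty = (lam / 2) * sum(w * w for w in weights)
    return data_loss + penalty

# Larger weights incur a larger penalty, which discourages overfitting:
print(regularized_cost(0.5, [1.0, -2.0], lam=0.1))  # 0.5 + 0.05 * 5 = 0.75
```

Increasing `lam` pushes the weights toward zero; `lam = 0` recovers the unregularized loss.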

It’s covered in the course.

No, this leads to overfitting the validation set. You don’t use the validation set while training.

Training should stop at the epoch marked by the dashed line. At that point the training loss continues to decrease, but the validation loss begins to increase.

That drawing doesn’t accurately depict the real situation during training.

In practice there is a third set of data (the validation set) that is used to tune a regularization parameter. The regularization parameter, not the number of epochs, is what controls overfitting.
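A hedged sketch of that workflow: train one model per candidate regularization strength, then pick the value with the lowest validation loss. `train_and_eval` is a hypothetical stand-in for fitting a model and measuring its validation loss; here it is faked with a toy curve.

```python
# Toy model selection over the regularization strength. In a real setup,
# train_and_eval would fit the model on the training set with the given
# lambda and return the loss measured on the held-out validation set.

def train_and_eval(lam):
    # Fake validation-loss curve: too little regularization overfits,
    # too much underfits; the minimum sits somewhere in between.
    return (lam - 0.1) ** 2 + 0.3

candidates = [0.001, 0.01, 0.1, 1.0, 10.0]
best_lam = min(candidates, key=train_and_eval)
print(best_lam)  # 0.1 minimizes validation loss for this toy curve
```

The test set stays untouched throughout, so it can still give an unbiased estimate of the final model's performance.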

This is from a post I read. Link at the bottom.

The validation set is a subset of the data used to evaluate the model’s performance during training. It is used for model evaluation, hyperparameter tuning, and overfitting detection. We train only as long as the validation accuracy keeps improving, to ensure that the model generalizes well to new, unseen data rather than just memorizing the training data.

By monitoring the validation accuracy, you can detect overfitting. If the training accuracy continues to increase while the validation accuracy plateaus or decreases, it indicates that the model is starting to overfit the training data and perform poorly on unseen data.
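The pattern described above can be illustrated with made-up accuracy numbers (not from any real training run): training accuracy keeps rising while validation accuracy peaks and then falls, and the peak is where an early-stopping rule would halt.

```python
# Illustrative accuracies per epoch (fabricated for this sketch).
# Training accuracy rises monotonically; validation accuracy peaks
# at epoch 3 and then declines, which signals overfitting.
train_acc = [0.60, 0.72, 0.81, 0.88, 0.93, 0.96, 0.98]
val_acc   = [0.58, 0.69, 0.75, 0.78, 0.77, 0.74, 0.70]

# The epoch with the best validation accuracy is where early stopping
# would keep the model checkpoint.
best_epoch = max(range(len(val_acc)), key=lambda e: val_acc[e])
print(best_epoch)  # epoch 3: validation accuracy peaks, then declines
```

As the reply below notes, early stopping is not the mechanism the MLS course teaches; this sketch only shows how the divergence between the two curves is detected.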

“early stopping” is not a method that the MLS course uses. There are better ways to control overfitting.