C3_W2_Collaborative_RecSys_Assignment - training model w/ tensorflow

Reading the code in section 5, I tried to get a better understanding of the technical approach TensorFlow uses to minimize the cost function.
I realized that by running the training for-loop for more than 200 iterations (i.e. the magic number used in the lab), it is possible to reach a slightly better solution.
What would be a more correct stop condition for the training loop? Could I check, for instance, the norm of the gradient?
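To make the question concrete, here is a minimal sketch of the kind of stop condition I have in mind. It uses plain NumPy and a toy quadratic loss instead of the lab's collaborative-filtering cost, and the tolerance and learning rate are made-up values:

```python
import numpy as np

# Toy loss J(w) = ||w - w_star||^2 with a known minimum, standing in
# for the lab's collaborative-filtering cost (hypothetical example).
w_star = np.array([3.0, -1.0, 2.0])

def grad(w):
    # Gradient of the quadratic loss: dJ/dw = 2 (w - w_star)
    return 2.0 * (w - w_star)

w = np.zeros(3)
lr, tol, max_iters = 0.1, 1e-6, 10_000

for it in range(max_iters):
    g = grad(w)
    if np.linalg.norm(g) < tol:  # stop once the gradient norm is tiny
        break
    w -= lr * g  # plain gradient-descent step
```

The loop exits as soon as the gradient norm falls below `tol`, rather than after a fixed iteration count.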

Hello @Alessandro_Simonetti,

Let me just focus on the word “better” in your question.

I think the key is how you define “better”. If you have a set of hold-out data which you can use to evaluate how well your model performs, then you may use TensorFlow’s EarlyStopping callback to monitor the model’s performance on that hold-out dataset, so that training stops as soon as the EarlyStopping criteria are met.
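For instance, a typical EarlyStopping configuration looks something like this (the `min_delta` and `patience` values below are just illustrative choices, not values from the lab):

```python
import tensorflow as tf

# Stop training when the hold-out (validation) loss has not improved
# by at least min_delta for `patience` consecutive epochs.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",          # watch the hold-out set's loss
    min_delta=1e-4,              # smallest change counted as an improvement
    patience=5,                  # epochs to wait before stopping
    restore_best_weights=True,   # roll back to the best weights seen
)
```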


Thanks, Raymond, for your kind reply.
By “better” I meant a solution that minimizes the loss.

Could TensorFlow’s EarlyStopping functionality also be used in the lab’s case, even though the fit method is not used there?


Hello Alessandro,

You are welcome :wink:

First, just a reminder that we discussed the ideas of having a training set and a cv set in the C2 W3 videos on Bias and Variance.

Then, back to your question. If “loss” is how we measure goodness, then the model’s fit method has an argument called “validation_data”, to which we provide either of the following two: (i) the cv set, or (ii) a hold-out set split out from the training set.

There is another argument in the fit method called “callbacks”, which is where we specify the EarlyStopping callback.

Then, during training, you can expect to see how the validation set’s loss changes in addition to the training set’s loss. The training algorithm will stop once the EarlyStopping condition is satisfied.
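Putting the two arguments together, a minimal sketch looks like this. The tiny regression problem, the split sizes, and the model are all made up purely to show the wiring of `validation_data` and `callbacks`; the lab’s collaborative-filtering cost would replace them:

```python
import numpy as np
import tensorflow as tf

# Hypothetical tiny regression problem (illustrative data only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4)).astype("float32")
y = (X @ np.array([1.0, -2.0, 0.5, 3.0], dtype="float32")).reshape(-1, 1)

# Simple hold-out split: last 40 rows serve as the validation set.
X_train, X_val = X[:160], X[160:]
y_train, y_val = y[:160], y[160:]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Stop when the validation loss plateaus; keep the best weights seen.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True)

history = model.fit(
    X_train, y_train,
    validation_data=(X_val, y_val),  # loss on this set drives EarlyStopping
    epochs=200,
    callbacks=[early_stop],
    verbose=0,
)
# Training may stop well before 200 epochs once val_loss stops improving.
```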