Hi,

Once a model and its hyperparameters have been selected, should we retrain it on the entire dataset before deploying it to the real world? I read this in a book and it seemed logical to me, since more data generally means less overfitting. I'd like to hear your insights on the topic.
Here is the quote from the book (Corey Wade, Hands-On Gradient Boosting):
'When testing, it's important not to mix and match training and test sets. After a final model has been selected, however, fitting the model on the entire dataset can be beneficial. Why? Because the goal is to test the model on data that has never been seen, and fitting the model on the entire dataset may lead to additional gains in accuracy.'
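To make the workflow I'm asking about concrete, here is a minimal sketch using scikit-learn. The dataset, estimator, and parameter grid are just placeholders of my own, not from the book: select hyperparameters by cross-validation on the training split, evaluate once on the held-out test split, then refit the chosen configuration on all of the data before deployment.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import GradientBoostingClassifier

# Placeholder synthetic dataset.
X, y = make_classification(n_samples=1000, random_state=42)

# Hold out a test set for the final, unbiased evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Select hyperparameters by cross-validation on the training set only.
search = GridSearchCV(
    GradientBoostingClassifier(random_state=42),
    param_grid={"n_estimators": [100, 300], "max_depth": [2, 3]},
    cv=5,
)
search.fit(X_train, y_train)

# Evaluate once on the untouched test set.
print("test accuracy:", search.score(X_test, y_test))

# The step I'm asking about: refit the selected configuration
# on ALL the data (train + test) before shipping the model.
final_model = GradientBoostingClassifier(
    random_state=42, **search.best_params_
)
final_model.fit(X, y)

The catch, as I understand it, is that after this final fit there is no held-out data left to verify the refit model, so you have to trust the earlier test-set estimate. Is that trade-off generally considered acceptable?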