Hi,
I was working on a problem on Kaggle and wanted to experiment with the learning rate of an XGBoost model.
I was surprised to see that setting the learning rate to a very low value (e.g., 0.0001) resulted in a significantly lower score. I expected longer training time, but not a change in score.
Am I correct there?
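A minimal sketch of the kind of comparison I mean, with a synthetic scikit-learn dataset standing in for the actual Kaggle data (the numbers are only illustrative):

```python
# Only the learning rate changes between the two runs; everything
# else (including n_estimators, default 100) stays the same.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for lr in (0.3, 0.0001):
    model = XGBClassifier(learning_rate=lr, random_state=0)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    # The very low learning rate scores noticeably worse here.
    print(f"learning_rate={lr}: accuracy={acc:.3f}")
```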
Did you also change the number of iterations? Those two hyperparameters interact in a pretty clear way. Although I should be careful here, since I know nothing about XGBoost models. 
Yeah, I also believe it has to do with the number of iterations: with a lower learning rate, the model needs many more iterations (and therefore more training time) to reach a comparable result.
Thanks Paul and Gent! The problem seems to be the “n_estimators” parameter, which is the number of trees in the model.
I am also new to XGBoost, but I would say n_estimators can be thought of as the number of iterations: each boosting round adds one tree.
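A minimal sketch of that interaction, reusing the synthetic setup from above (the exact values are just for illustration): holding n_estimators fixed, a tiny learning rate hurts the score, but giving the model many more boosting rounds moves it back toward the baseline.

```python
# n_estimators acts as the number of boosting iterations, so a tiny
# learning rate needs far more rounds to accumulate the same effect.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for lr, n_trees in [(0.3, 100), (0.0001, 100), (0.0001, 5000)]:
    model = XGBClassifier(learning_rate=lr, n_estimators=n_trees,
                          random_state=0)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"learning_rate={lr}, n_estimators={n_trees}: "
          f"accuracy={acc:.3f}")
```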
Thanks for the help here!