Question about optimizing the learning rate by fitting the NN with LearningRateScheduler as callback

Hello Rodolfo,

The reasoning behind 100 epochs is not based on the learning rate alone. It is a combination of choosing an appropriate learning rate together with the right cost function and getting the best accuracy. So for some models 100 epochs are required, depending on the custom loss, the training setup, and the parameters you mentioned.

A good way to determine a more precise learning rate is to start with a small learning rate close to 0 and check the cost function, loss, and accuracy. Say you chose a learning rate of 0.001 for a model; then try a learning rate of 0.003 and compare the accuracy, cost function, and loss.
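If you want to automate that comparison, the LearningRateScheduler callback from your question can sweep the learning rate for you. Here is a minimal sketch (the data, model, and schedule values are placeholder assumptions, not from your code): it grows the learning rate a little each epoch, so plotting the recorded loss against the per-epoch learning rate shows roughly where training stays stable.

```python
import numpy as np
import tensorflow as tf

# Placeholder data and model, only to make the sketch runnable.
x_train = np.random.rand(256, 20).astype("float32")
y_train = np.random.rand(256, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="mse")

# Grow the learning rate from 1e-4 toward 1e-1 over 60 epochs.
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
    lambda epoch: 1e-4 * 10 ** (epoch / 20)
)
history = model.fit(x_train, y_train, epochs=60,
                    callbacks=[lr_schedule], verbose=0)
```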

Running only a few epochs will not give an accurate picture of model training, and it is not a reliable way to check whether the model is behaving properly: early in training the cost often increases first and then goes down, eventually reaching a minimum or getting close to 0.
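Continuing the sketch above, one way to see that full trajectory is to plot the recorded loss over all epochs rather than judging from the first few (matplotlib is assumed to be available):

```python
import matplotlib.pyplot as plt

# `history` comes from the model.fit call in the earlier sketch.
plt.plot(history.history["loss"])
plt.xlabel("epoch")
plt.ylabel("training loss")
plt.title("Loss over the full run")
plt.show()
```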

The idea is basically to choose a small random learning rate rather than 0. The learning rate is the hyperparameter that controls how fast the model learns: it regulates the amount of the estimated error by which the model's weights are updated each time they are updated, such as at the end of each batch of training examples.
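To make that update rule concrete, here is an illustrative plain-SGD step (Keras optimizers do something like this internally; the function name and values here are just for demonstration):

```python
def sgd_step(weights, gradients, learning_rate=0.001):
    # Each weight moves against its gradient, scaled by the learning rate;
    # a larger learning rate means a bigger correction per batch.
    return [w - learning_rate * g for w, g in zip(weights, gradients)]

# e.g. after one batch of training examples:
new_weights = sgd_step([0.5, -0.2], [0.1, -0.3])
```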

Please refer to this related thread: Question regarding learning rate graph from W2 logistic regression lab - #3 by Deepti_Prasad

Hope this clarifies your doubt; otherwise do let us know. Keep learning!!

Regards
DP