C2_W3_hyperparameter tuning

Hello! In the hyperparameter tuning videos, Prof. Andrew says that if we don't have a lot of computational power, we can train one model and modify its hyperparameters by watching its performance over time. My question is how to do this in practice. Is the idea to save the last learned values of the weights, stop training, change the hyperparameters, and then restart training with the weights initialized to the saved ones? Is that the right process?

Yeah, your idea sounds right to me: you can save a model, load its weights, and then continue training it. The only caveat is that if you change the architecture of the model, those saved weights can no longer be used.
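To make the pause/adjust/resume loop concrete, here is a minimal NumPy sketch (not the course's actual code): a tiny logistic-regression model is trained with one learning rate, its weights are saved to a checkpoint file, and training then resumes from those weights with a different learning rate. The filename `checkpoint.npz` and all hyperparameter values are illustrative assumptions.

```python
import numpy as np

# Toy data: label is 1 when x > 0, so logistic regression can fit it.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
y = (X[:, 0] > 0).astype(float)

def train(w, b, lr, epochs):
    """Plain batch gradient descent on cross-entropy loss.

    Returns the updated parameters and the final loss, so we can
    resume from (w, b) later with different hyperparameters.
    """
    for _ in range(epochs):
        z = X[:, 0] * w + b
        p = 1.0 / (1.0 + np.exp(-z))          # sigmoid activation
        grad_w = np.mean((p - y) * X[:, 0])   # dL/dw for cross-entropy
        grad_b = np.mean(p - y)               # dL/db
        w -= lr * grad_w
        b -= lr * grad_b
    loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    return w, b, loss

# Phase 1: train with an initial learning rate, then "pause" by
# saving the learned parameters (the checkpoint).
w, b, loss1 = train(0.0, 0.0, lr=0.1, epochs=50)
np.savez("checkpoint.npz", w=w, b=b)  # illustrative filename

# Phase 2: reload the saved weights and resume training with a
# changed hyperparameter (here, a larger learning rate).
ckpt = np.load("checkpoint.npz")
w2, b2, loss2 = train(float(ckpt["w"]), float(ckpt["b"]), lr=0.5, epochs=50)

print(loss1, loss2)
```

The same pattern applies with a real framework: in Keras, for example, `model.save_weights(...)` / `model.load_weights(...)` play the role of the `np.savez` / `np.load` calls above, and you recompile or adjust the optimizer between phases.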

So if you want to train and tune hyperparameters at the same time with a single model, you are forced to keep the same architecture; otherwise you have to train multiple models.