Why do we need a validation set for training?

So it is like learning from examples, then testing your learning on practice problems. If there is an error or mismatch, you learn from the training examples again and test your learning on the practice problems until you are confident enough to actually take the exam (the test data). If you pass, it is like you have graduated, and now you need to apply your learning to real-world data (of course it can be wrong; it is not 100% accurate). Am I thinking right @gent.spah?
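A minimal sketch of that analogy in plain Python (no Keras needed; the toy dataset and candidate slopes are my own assumptions, just to illustrate the roles of the three splits): the model "studies" on the training set, the validation set is the practice problems used to pick between candidate models, and the test set is looked at only once, as the final exam.

```python
import random

# Hypothetical toy dataset: pairs (x, y) with y = 2*x
data = [(x, 2 * x) for x in range(100)]
random.seed(0)
random.shuffle(data)

# Split: ~60% train ("examples"), ~20% validation ("practice problems"),
# ~20% test ("the final exam")
train, val, test = data[:60], data[60:80], data[80:]

def mse(w, split):
    # Mean squared error of the model y = w*x on one data split
    return sum((w * x - y) ** 2 for x, y in split) / len(split)

# "Learning": try candidate slopes, keep the one with the best
# validation score -- the validation set guides model selection
candidates = [0.5, 1.0, 1.5, 2.0, 2.5]
best_w = min(candidates, key=lambda w: mse(w, val))

# Only after model selection do we touch the test set, exactly once
final_score = mse(best_w, test)
print(best_w)       # slope chosen by validation error
print(final_score)  # the "exam" result
```

The point is that the test split never influences which model gets picked; if it did, the "exam" score would be an overly optimistic estimate of real-world performance.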

Aren’t hyperparameters set before training, usually at model.compile and keras.Sequential? Then how come they are changed while learning? I know the LR is changed by Adam, but let’s not discuss that.
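A tiny sketch of the distinction being asked about (plain Python rather than Keras; the toy objective is an assumption for illustration): hyperparameters like the learning rate are fixed by you before the training loop, while parameters like the weight `w` are what the loop itself updates.

```python
# Hyperparameter: chosen by us once, before training starts
lr = 0.1

# Parameter: learned during training
w = 0.0

# Toy objective: minimize (w - 3)^2, whose gradient is 2*(w - 3)
for _ in range(100):
    grad = 2 * (w - 3)
    w -= lr * grad  # only w changes each step; lr stays as configured

print(round(w, 3))  # w converges toward 3.0
```

Optimizers such as Adam adapt the effective step size internally, but the hyperparameters you passed in (the base learning rate, beta values, etc.) remain whatever you configured at compile time.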