The loss function tells you how good or bad the guess is. Then how does the optimizer act on that?

The optimizer uses "back propagation" based on the gradients of the cost/loss function to adjust the parameters in the direction of a better (lower cost) solution. This iterative process is called "training". If you are not familiar with back propagation and Gradient Descent, you might want to consider taking the DLS specialization first, before you start on the TensorFlow courses. The TF courses pretty much assume you know how deep learning works: they treat the algorithms themselves as "black boxes" and show you how to assemble them into solutions. In other words, the TF courses show you the "How", but they assume you already know the "Why". If "Why" is your question, this is the wrong place to start. DLS will show you what all this is about and why it works; then TF is a more advanced and efficient way to build those solutions.
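To make the idea concrete, here is a minimal sketch of what "adjust the parameters in the direction of a lower loss" means, using plain Python on a one-parameter toy problem. The parameter name `w`, the learning rate, and the quadratic loss are all illustrative choices, not anything from a specific course or library; in a real network, back propagation computes these gradients automatically for millions of parameters.

```python
def loss(w):
    # Toy loss: how "bad" the guess w is; smallest at w = 3.
    return (w - 3.0) ** 2

def gradient(w):
    # Derivative of the loss with respect to w (what back propagation
    # computes automatically for real networks).
    return 2.0 * (w - 3.0)

w = 0.0             # initial guess
learning_rate = 0.1

# Training loop: each iteration steps "downhill" on the loss surface.
for step in range(50):
    w -= learning_rate * gradient(w)

print(round(w, 3))  # w has moved very close to the optimum at 3.0
```

Each pass through the loop is one training iteration: compute the gradient, then nudge the parameter opposite to it. Gradient Descent is exactly this, scaled up.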