How is data processed by the optimizer after the loss function?

The loss function tells us how good or bad the guess is, right? Then how does the optimizer act on that?

The optimizer uses “back propagation” to compute the gradients of the cost/loss function and then adjusts the parameters in the direction of a better (lower-cost) solution. This iterative process is called “training”.

If you are not familiar with back propagation and Gradient Descent, you might want to consider taking the DLS specialization before you start on the TensorFlow courses. The TF courses pretty much assume you already know how deep learning works: they treat the algorithms themselves as “black boxes” and show you how to assemble them into solutions. In other words, the TF courses show you the “How”, but they assume you already know the “Why”. If “Why” is your question, this is the wrong place to start. DLS will show you what all this is about and why it works; TF is then a more advanced and efficient way to build those solutions.
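To make the loop concrete, here is a minimal sketch of gradient descent on a single-parameter model. The data, learning rate, and analytic gradient are illustrative assumptions; in a real deep network, back propagation computes these gradients automatically across all layers:

```python
def loss(w, xs, ys):
    # Mean squared error: measures how good or bad the current guess w is.
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def grad(w, xs, ys):
    # Gradient of the loss with respect to w, derived by hand here.
    # Back propagation automates exactly this computation for deep networks.
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]   # underlying relationship: y = 2x
w = 0.0                # initial guess for the parameter
lr = 0.1               # learning rate (step size), chosen for illustration

for step in range(100):
    # Each iteration moves w a small step downhill on the loss surface.
    w -= lr * grad(w, xs, ys)

print(round(w, 3))     # w converges toward 2.0
```

Each pass through the loop is one “training” iteration: loss tells you how wrong the current guess is, the gradient tells the optimizer which direction reduces that wrongness, and the update nudges the parameter accordingly.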
