So, why do we need backpropagation?

Do I understand correctly that we need backpropagation only to calculate derivatives for future forward propagation of the network? That is, doing it through derivatives is faster than working from the neurons' formulas directly (which takes longer)?

What are “neurons formulas”? To my knowledge, there is only one way to do back-prop, and that is through derivatives. We could avoid derivatives if we decided to run our model for only one iteration (forward-prop only), but then how would it perform after a single iteration? Poorly.

We find the derivatives, then update the parameters (W and b), and then feed the updated values of W and b into forward-prop in the 2nd, 3rd, 4th… iterations. We find derivatives at each iteration and update the parameters at each iteration. That is the process.
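To make that loop concrete, here is a minimal sketch in Python/NumPy. It is not from the course code; the logistic-regression-style model, the random data, and the learning rate are all made up for illustration. It just shows forward-prop, derivatives via back-prop, and the parameter update repeating each iteration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((2, 100))              # 2 features, 100 examples (hypothetical data)
Y = (X[0:1, :] + X[1:2, :] > 0).astype(float)  # hypothetical labels

W = np.zeros((1, 2))
b = 0.0
learning_rate = 0.1
m = X.shape[1]

for i in range(1000):
    # Forward propagation: compute predictions A from the current W and b
    Z = W @ X + b
    A = 1.0 / (1.0 + np.exp(-Z))               # sigmoid activation

    # Backpropagation: derivatives of the cost with respect to W and b
    dZ = A - Y                                  # sigmoid + cross-entropy simplification
    dW = (dZ @ X.T) / m
    db = np.sum(dZ) / m

    # Update the parameters; the next iteration's forward-prop uses these new values
    W -= learning_rate * dW
    b -= learning_rate * db
```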

Best,
Saif.


Not correct. We use backpropagation to calculate the gradients. The gradients are used to update the weights so that we move toward the minimum of the cost.
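In symbols, that update is the standard gradient descent step (here $J$ is the cost and $\alpha$ is the learning rate, following the usual notation):

$$W := W - \alpha \frac{\partial J}{\partial W}, \qquad b := b - \alpha \frac{\partial J}{\partial b}$$

Backpropagation is simply the way those partial derivatives are computed.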


So, we use it not only to ease future forward propagations, but also to ease the gradients' job, correct?

You use forward propagation to get the predicted outputs.
Then you use backpropagation to compute the gradients.

I don’t know what “easing” has to do with it.
