Cost function added to backpropagation

In the first programming assignment from week 1, there is an RNN with a many-to-many architecture, where Tx = Ty. I fully understand the backprop that is provided in the notebook, but I have no idea how the loss/cost function at each time step t should affect the parameters Waa, Wax, Wya, ba, and by. At the end of each timestep, we calculate da_prev as shown in the figure below:

But shouldn't da_prev also be affected by the cost function of y^(t-1)? If so, how do I calculate the "true" da_prev?
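For context, here is roughly what I understand the cell-level backward step to compute, where da is the gradient of the loss with respect to a&lt;t&gt;. This is only a minimal sketch with illustrative names, assuming the standard tanh cell, not the notebook's exact code:

```python
import numpy as np

def rnn_cell_backward_sketch(da, a, a_prev, x, Waa, Wax):
    # Forward pass was: a = tanh(Waa @ a_prev + Wax @ x + ba)
    dtanh = (1 - a ** 2) * da                    # derivative of tanh times incoming gradient
    dWax = dtanh @ x.T                           # gradient w.r.t. input weights
    dWaa = dtanh @ a_prev.T                      # gradient w.r.t. recurrent weights
    dba = np.sum(dtanh, axis=1, keepdims=True)   # gradient w.r.t. bias
    da_prev = Waa.T @ dtanh                      # gradient handed back to time step t-1
    return da_prev, dWax, dWaa, dba
```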

The loss function doesn’t directly influence the gradients.

It’s the partial derivative of the loss function that gives the gradients that are used in backpropagation.

Yes, you are right. So how does the partial derivative of the loss function affect the gradients?

OK, the answer was in the picture. The total da at timestep t equals the sum of the da from the cost function at t and the da coming back from the future (t+1) timestep. The issue can be closed.
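To spell it out for anyone who finds this later: with a softmax output and cross-entropy loss at each step, the derivative of the loss with respect to the logits at step t is y_hat&lt;t&gt; - y&lt;t&gt;, and the piece that reaches a&lt;t&gt; through the output layer is Wya.T @ (y_hat&lt;t&gt; - y&lt;t&gt;). The total gradient at a&lt;t&gt; is that term plus the gradient flowing back from step t+1. A minimal sketch (illustrative names and assumptions, not the notebook's exact code):

```python
import numpy as np

def rnn_backward_sketch(y_hat, y, x, a, a0, Waa, Wax, Wya):
    n_a, m, T_x = a.shape
    da_next = np.zeros((n_a, m))                 # nothing flows back beyond the last step
    dWaa, dWax = np.zeros_like(Waa), np.zeros_like(Wax)
    dba = np.zeros((n_a, 1))
    for t in reversed(range(T_x)):
        # Gradient reaching a<t> from this step's softmax/cross-entropy loss
        da_from_loss = Wya.T @ (y_hat[:, :, t] - y[:, :, t])
        # Key point: total da<t> = loss contribution at t + gradient from step t+1
        da_t = da_from_loss + da_next
        a_prev = a[:, :, t - 1] if t > 0 else a0
        dtanh = (1 - a[:, :, t] ** 2) * da_t
        dWax += dtanh @ x[:, :, t].T
        dWaa += dtanh @ a_prev.T
        dba += np.sum(dtanh, axis=1, keepdims=True)
        da_next = Waa.T @ dtanh                  # becomes da_prev for step t-1
    return dWaa, dWax, dba
```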