Building_a_Recurrent_Neural_Network_Step_by_Step-Exercise 6 - rnn_backward

Please have a look at this diagram from the notebook and consider the contents of the green rectangle that I highlighted:

The implication is that the da_next argument passed to rnn_cell_backward is the sum of two gradients: the one flowing in from the path out to $\hat{y}^{\langle t \rangle}$, and the one arriving from the previous step of backpropagation (which is of course the next step, $t + 1$, in forward propagation, since we are moving backwards through time here). A sketch of how that sum shows up in the code follows below.
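For concreteness, here is a minimal sketch of the rnn_backward loop, assuming the notebook's conventions: da has shape (n_a, m, T_x) and holds the gradients from the $\hat{y}^{\langle t \rangle}$ path at every time step, caches is the value returned by rnn_forward (a list of per-step cache tuples plus x), and rnn_cell_backward is the function from the earlier exercise, returning a dict of per-step gradients. The exact cache layout and dict keys here are assumptions, not the official solution:

```python
import numpy as np

def rnn_backward(da, caches):
    """Backward pass through the whole RNN (sketch, assumed interfaces).

    da     -- gradients from the y-hat path, shape (n_a, m, T_x)
    caches -- (list of per-step caches, x), as assumed from rnn_forward
    """
    caches_list, x = caches
    a1, a0, x1, parameters = caches_list[0]   # assumed per-step cache layout

    n_a, m, T_x = da.shape
    n_x, m = x1.shape

    # Initialize accumulated gradients with zeros
    dx = np.zeros((n_x, m, T_x))
    dWax = np.zeros((n_a, n_x))
    dWaa = np.zeros((n_a, n_a))
    dba = np.zeros((n_a, 1))
    da_prevt = np.zeros((n_a, m))  # gradient arriving from step t + 1

    for t in reversed(range(T_x)):
        # Key point from the diagram: the da_next fed to the cell is the
        # SUM of the gradient from the y-hat path at step t (da[:, :, t])
        # and the gradient coming back from step t + 1 (da_prevt).
        gradients = rnn_cell_backward(da[:, :, t] + da_prevt, caches_list[t])
        da_prevt = gradients["da_prev"]
        dx[:, :, t] = gradients["dxt"]
        dWax += gradients["dWax"]   # parameter gradients accumulate over time
        dWaa += gradients["dWaa"]
        dba += gradients["dba"]

    da0 = da_prevt  # what is left flowing backwards is the gradient w.r.t. a0

    return {"dx": dx, "da0": da0, "dWax": dWax, "dWaa": dWaa, "dba": dba}
```

The sum da[:, :, t] + da_prevt is exactly the two arrows meeting inside the green rectangle: the hidden state $a^{\langle t \rangle}$ feeds both the output $\hat{y}^{\langle t \rangle}$ and the next cell, so by the chain rule its total gradient is the sum of the gradients along both paths.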