W1, A1, Ex. 6, Vanishing Gradients

Hello! The gradients computed by my rnn_backward are vanishing: the first two (dx and da0) match the expected values, but the remaining ones (dWax, dWaa, dba) all come out as zero.

To compute the gradients, I call rnn_cell_backward(da[:,:,t] + da_prevt, caches[t]) at each timestep, which is what I understood from the instructions and what I have seen in other posts, so the error must be somewhere else.

Thanks!

gradients["dx"][1][2] = [-2.07101689 -0.59255627  0.02466855  0.01483317]
gradients["dx"].shape = (3, 10, 4)
gradients["da0"][2][3] = -0.31494237512664996
gradients["da0"].shape = (5, 10)
gradients["dWax"][3][1] = 0.0
gradients["dWax"].shape = (5, 3)
gradients["dWaa"][1][2] = 0.0
gradients["dWaa"].shape = (5, 5)
gradients["dba"][4] = [0.]
gradients["dba"].shape = (5, 1)

Miquel has been helped: the bug was in the increments of the global derivatives. The per-timestep gradients were not being accumulated correctly into dWax, dWaa, and dba.
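
For anyone hitting the same symptom, here is a minimal sketch of the accumulation pattern, assuming the assignment's rnn_cell_backward returns a dictionary with keys dxt, da_prev, dWax, dWaa, and dba, and that caches is the (caches, x) tuple produced by rnn_forward (both are assumptions based on the usual structure of this assignment, not the poster's actual code):

```python
import numpy as np

def rnn_backward(da, caches):
    # Unpack the forward pass caches (structure assumed from the assignment)
    caches_list, x = caches
    a1, a0, x1, parameters = caches_list[0]

    n_a, m, T_x = da.shape
    n_x, m = x1.shape

    # Initialize the global gradients to zero, ONCE, outside the loop
    dx = np.zeros((n_x, m, T_x))
    dWax = np.zeros((n_a, n_x))
    dWaa = np.zeros((n_a, n_a))
    dba = np.zeros((n_a, 1))
    da_prevt = np.zeros((n_a, m))

    for t in reversed(range(T_x)):
        # Combine the gradient of the loss at step t with the gradient
        # flowing back from step t+1
        gradients = rnn_cell_backward(da[:, :, t] + da_prevt, caches_list[t])
        da_prevt = gradients["da_prev"]
        dx[:, :, t] = gradients["dxt"]
        # The key point: ACCUMULATE (+=) the per-step parameter gradients.
        # Plain assignment (=), or re-initializing inside the loop, throws
        # away the contributions of the other timesteps.
        dWax += gradients["dWax"]
        dWaa += gradients["dWaa"]
        dba += gradients["dba"]

    # The gradient w.r.t. the initial hidden state is whatever flows out of t=0
    da0 = da_prevt
    return {"dx": dx, "da0": da0, "dWax": dWax, "dWaa": dWaa, "dba": dba}
```

The call on line one of the loop matches what the original post describes, so the symptom above points at the three += lines: if those updates overwrite or reset the running totals instead of adding to them, dWax, dWaa, and dba end up wrong even though dx and da0 look fine.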