RNN Week 1, Assignment 2: gradient update

Hi,

Thanks for taking the time to read my post. I have a question about how the gradient is updated during backpropagation in Week 1, Assignment 2. In ‘utils.py’, specifically in the function ‘rnn_backward(X, Y, parameters, cache)’, why do we subtract 1 when updating “dy” during backpropagation? If the activation function is softmax, shouldn’t it be dy = y*(1-y)?

Thank you very much in advance for any help.


Hello there,

They are computing the gradient of the cross-entropy loss applied to the softmax output. The expression y*(1-y) is only the derivative of the softmax itself (the diagonal of its Jacobian); when you chain it through the cross-entropy loss, the gradient with respect to the pre-softmax scores collapses to y_hat - y_true, which for a one-hot target means subtracting 1 at the index of the correct character. For more details, I advise you to check out this post:
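Here is a minimal sketch (not the assignment's exact code; the variable names and the 5-character vocabulary are made up for illustration) that verifies the "subtract 1" gradient against a numerical gradient check:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(z - np.max(z))
    return e / e.sum()

rng = np.random.default_rng(0)
z = rng.standard_normal(5)      # hypothetical pre-softmax scores for 5 characters
true_idx = 2                    # hypothetical index of the correct next character

y_hat = softmax(z)

# Analytic gradient of L = -log(y_hat[true_idx]) with respect to z:
# dy = y_hat - y_true, i.e. copy the probabilities and subtract 1 at the true index
dy = np.copy(y_hat)
dy[true_idx] -= 1

# Numerical gradient check, component by component
eps = 1e-6
num_grad = np.zeros_like(z)
for i in range(len(z)):
    z_plus, z_minus = z.copy(), z.copy()
    z_plus[i] += eps
    z_minus[i] -= eps
    loss_plus = -np.log(softmax(z_plus)[true_idx])
    loss_minus = -np.log(softmax(z_minus)[true_idx])
    num_grad[i] = (loss_plus - loss_minus) / (2 * eps)

print(np.allclose(dy, num_grad, atol=1e-6))   # True: the analytic and numerical gradients match
```

So the subtraction of 1 is not the derivative of softmax alone; it is the combined softmax-plus-cross-entropy gradient, which is exactly what rnn_backward needs at each time step.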


Thank you so much. That’s very helpful.
