Backprop with shared layers

In the Jazz assignment (Course 5, Week 1), the LSTM layer is shared across the time steps. Does this mean that each backward pass updates the weights of this layer multiple times?

Hi,
No — the weights of the shared layer are updated only once per iteration, after the backward pass has completed. During backpropagation through time, the gradients with respect to the shared weights are accumulated (summed) at every time step, and a single weight update is then applied using those accumulated gradients at the end of the iteration. You can see how this works in the Week 1 Programming Assignment: Building your Recurrent Neural Network - Step by Step, Exercises 7 and 8.
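
To make the accumulation concrete, here is a minimal NumPy sketch of the idea (a simplified tanh RNN cell with illustrative names and sizes, ignoring the per-step output gradients — not the assignment's exact code): the gradients for the shared weights are summed with `+=` across time steps, and one update is applied afterwards.

```python
import numpy as np

def rnn_cell_backward_accumulate(x_t, a_prev, a_t, da_t, Wax, Waa, gradients):
    """Accumulate gradients for the SHARED weights at one time step."""
    # Backprop through tanh: a_t = tanh(Wax @ x_t + Waa @ a_prev + b)
    dtanh = (1 - a_t ** 2) * da_t
    # Accumulate into the shared gradient buffers (note the +=)
    gradients["dWax"] += dtanh @ x_t.T
    gradients["dWaa"] += dtanh @ a_prev.T
    gradients["db"]   += np.sum(dtanh, axis=1, keepdims=True)
    # Gradient flowing back to the previous hidden state
    return Waa.T @ dtanh

# Toy setup (hypothetical sizes): n_a hidden units, n_x inputs, m examples, T_x steps
n_a, n_x, m, T_x = 4, 3, 2, 5
Wax, Waa, b = np.random.randn(n_a, n_x), np.random.randn(n_a, n_a), np.zeros((n_a, 1))
xs = np.random.randn(n_x, m, T_x)

# Forward pass, caching the activations needed for backprop
a = np.zeros((n_a, m, T_x + 1))
for t in range(T_x):
    a[:, :, t + 1] = np.tanh(Wax @ xs[:, :, t] + Waa @ a[:, :, t] + b)

# Backward pass: ONE set of gradient buffers for the shared weights
gradients = {"dWax": np.zeros_like(Wax), "dWaa": np.zeros_like(Waa), "db": np.zeros_like(b)}
da_next = np.random.randn(n_a, m)  # stand-in for the gradient arriving from the loss
for t in reversed(range(T_x)):
    da_next = rnn_cell_backward_accumulate(xs[:, :, t], a[:, :, t], a[:, :, t + 1],
                                           da_next, Wax, Waa, gradients)

# A SINGLE update with the accumulated gradients (learning rate is illustrative)
lr = 0.01
Wax -= lr * gradients["dWax"]
Waa -= lr * gradients["dWaa"]
b   -= lr * gradients["db"]
```

So the shared layer is not updated multiple times per backward pass; each time step only contributes a term to the same gradient sum, and the optimizer applies one step per iteration.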