C5 W3A1 the role of RepeatVector

Hello,

I’m trying to better understand what’s happening under the hood. The function one_step_attention() takes in the pre-attention hidden states and the previous state of the post-attention LSTM to decide which hidden states should be focused on. I understand what each of the layers that make up the attention cell is doing, but I’m not clear on the reason for RepeatVector(). I understand that the input s_prev isn’t the correct shape to concatenate with a, but why is the correct approach here just to duplicate the vector Tx times?

We need Tx copies of s_prev so that one copy can be concatenated with each of the Tx pre-attention hidden states a⟨t'⟩. The attention weights are computed per time step, so every a⟨t'⟩ must be paired with the same s_prev. This is shown on the right side of Figure 1 in the notebook.
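
Here is a minimal shape-only sketch (not the notebook’s exact code) that shows what RepeatVector() and Concatenate() do. The sizes Tx=30, n_a=32, n_s=64 are just hypothetical examples:

```python
# Sketch: why s_prev must be repeated Tx times before concatenating with a.
import numpy as np
from tensorflow.keras.layers import RepeatVector, Concatenate

Tx, n_a, n_s = 30, 32, 64                 # example sizes (hypothetical)
a = np.random.randn(1, Tx, 2 * n_a)       # pre-attention Bi-LSTM states: (batch, Tx, 2*n_a)
s_prev = np.random.randn(1, n_s)          # previous post-attention state: (batch, n_s)

s_rep = RepeatVector(Tx)(s_prev)          # (batch, Tx, n_s): one copy per time step
concat = Concatenate(axis=-1)([a, s_rep]) # (batch, Tx, 2*n_a + n_s)
print(s_rep.shape, concat.shape)          # (1, 30, 64) (1, 30, 128)
```

Without the repeat, s_prev has no time dimension, so there is nothing to line up against the Tx time steps of a. Duplicating it is the right move because the same s_prev is the query for every time step; the small Dense + softmax layers that follow then score each (a⟨t'⟩, s_prev) pair to produce the Tx attention weights.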