Hi,
In the W3 HW assignment, the translation model consisted of:
- one pre-attention bidirectional LSTM layer (encoder) and one post-attention LSTM layer (decoder).
With one layer in each, the context C is calculated (inside the decoder loop) at every timestep t from all of the encoder hidden states a<1>...a<Tx> together with the previous decoder hidden state S<t-1> (with S<0> initialized to zero).
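Writing out what I mean for the single-layer case (from memory, so the exact notation may differ slightly from the notebook):

```latex
% context at decoder step t, built from all encoder activations a^{<t'>}
% and the previous post-attention state s^{<t-1>} (with s^{<0>} = 0)
context^{<t>} = \sum_{t'=1}^{T_x} \alpha^{<t,t'>} \, a^{<t'>}, \qquad
\alpha^{<t,t'>} = \frac{\exp(e^{<t,t'>})}{\sum_{t''=1}^{T_x} \exp(e^{<t,t''>})}
```

where each energy e<t,t'> comes from the small dense network applied to the encoder activation a<t'> and the previous decoder state S<t-1>.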
What if I wanted to add an additional layer to the encoder and decoder? I am curious how one would connect, say, an additional LSTM layer in both the encoder and the decoder. Specifically, what inputs should be passed to each new layer? Please refer to the attached picture and the sketch below.
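Here is a rough Keras-style sketch of what I have in mind (not the notebook's exact code: the sizes Tx, Ty, n_a, n_s and the vocab sizes are just placeholders, and one_step_attention is my own simplified re-implementation):

```python
from tensorflow.keras.layers import (Input, LSTM, Bidirectional, Dense,
                                     RepeatVector, Concatenate, Dot,
                                     Softmax, Reshape)
from tensorflow.keras.models import Model

Tx, Ty = 30, 10              # input / output sequence lengths (placeholders)
n_a, n_s = 32, 64            # encoder / decoder hidden sizes (placeholders)
in_vocab, out_vocab = 37, 11 # placeholder vocabulary sizes

# Simplified stand-in for the notebook's one_step_attention (shared layers)
repeator = RepeatVector(Tx)
concatenator = Concatenate(axis=-1)
densor1 = Dense(10, activation="tanh")
densor2 = Dense(1, activation="relu")
attn_softmax = Softmax(axis=1)          # softmax over the Tx axis
dotor = Dot(axes=1)

def one_step_attention(a, s_prev):
    s_prev = repeator(s_prev)           # (batch, Tx, n_s)
    concat = concatenator([a, s_prev])  # (batch, Tx, 2*n_a + n_s)
    energies = densor2(densor1(concat)) # (batch, Tx, 1)
    alphas = attn_softmax(energies)     # attention weights over the Tx steps
    return dotor([alphas, a])           # context, shape (batch, 1, 2*n_a)

# --- encoder: a second Bi-LSTM stacked on the first ---
X = Input(shape=(Tx, in_vocab))
s0 = Input(shape=(n_s,))
c0 = Input(shape=(n_s,))
a1 = Bidirectional(LSTM(n_a, return_sequences=True))(X)   # existing layer
a2 = Bidirectional(LSTM(n_a, return_sequences=True))(a1)  # the extra layer;
# attention would now read a2 instead of a1

# --- decoder: two stacked post-attention LSTM cells ---
post_lstm_1 = LSTM(n_s, return_state=True)   # existing post-attention LSTM
post_lstm_2 = LSTM(n_s, return_state=True)   # the extra layer I want to add
to_timestep = Reshape((1, n_s))              # (batch, n_s) -> (batch, 1, n_s)
output_layer = Dense(out_vocab, activation="softmax")

s1, c1 = s0, c0          # states of the first post-attention LSTM
s2, c2 = s0, c0          # states of the second one
outputs = []
for t in range(Ty):
    context = one_step_attention(a2, s1)     # driven by the layer-1 state
    s1, _, c1 = post_lstm_1(context, initial_state=[s1, c1])
    s2, _, c2 = post_lstm_2(to_timestep(s1), initial_state=[s2, c2])
    outputs.append(output_layer(s2))         # softmax on the top layer only

model = Model(inputs=[X, s0, c0], outputs=outputs)
model.summary()
```

Is this wiring roughly right, i.e. a2 feeding the attention and s1 feeding the second post-attention LSTM, or should the context (or something else) also be passed to the second LSTM?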
Thanks in advance,
-Ali