C4W1: Quick question - Number of LSTM units in the model

Hello!

The general answer is that the number of LSTM units (i.e., the hidden state size) in an LSTM layer does not have to be equal to the word embedding size.

The approach assumed in this week's content is the one demonstrated in the Seq2seq video.

Regarding your question and the way it is depicted above, a possible point of confusion is that:

word embedding size = LSTM input size
but
LSTM input size ≠ number of LSTM units (they are not necessarily equal, at least, which is the answer to your question)
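
For example, here is a minimal TF/Keras sketch (the sizes are made up purely for illustration, not taken from the assignment) showing that the LSTM's input size is fixed by the embedding dimension, while the number of units is a free choice:

```python
import tensorflow as tf

# Hypothetical sizes, chosen only to illustrate the point:
# the LSTM's input size comes from the embedding, its units do not.
vocab_size, embedding_dim, lstm_units = 10000, 256, 512

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embedding_dim),     # (batch, time, 256)
    tf.keras.layers.LSTM(lstm_units, return_sequences=True),  # (batch, time, 512)
])
model.build(input_shape=(None, None))
model.summary()  # the LSTM consumes 256-dim vectors but emits 512-dim hidden states
```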

The choice to set the number of LSTM units = LSTM input size (= word embedding size) can be made for simplicity and compatibility between layers: no additional transformation or reshaping of the data is needed between them.
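
As one hedged illustration of that compatibility point (again with made-up sizes): when the number of units equals the embedding size, you can, for instance, add a residual (skip) connection around the LSTM without any extra projection layer, because the shapes already match:

```python
import tensorflow as tf

vocab_size = 10000  # made-up vocabulary size
d = 256             # shared size: embedding dim == number of LSTM units

tokens = tf.keras.Input(shape=(None,), dtype="int32")
x = tf.keras.layers.Embedding(vocab_size, d)(tokens)    # (batch, time, d)
h = tf.keras.layers.LSTM(d, return_sequences=True)(x)   # (batch, time, d)
out = tf.keras.layers.Add()([x, h])  # works only because the shapes match
model = tf.keras.Model(tokens, out)
```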

You can find some more comments on this issue here (the Trax framework mentioned there has since been replaced by TF, but you may still find the discussion useful):
