NLP C4_W1: Encoder output_dim

There is no mention of the output_dim of the Embedding layer in either the Encoder or the Decoder, so I just used the units parameter available in the constructor of the classes. Although it passes the Encoder unit tests, I don't understand why that magic number works: I don't see any logic behind a 1:1 mapping between the number of LSTM units and the output_dim of the Embedding layer that feeds into the LSTM. Any thoughts on this?
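
For reference, this is roughly the shape of what I did (a simplified sketch, not the exact notebook code; `vocab_size` and `units` are just the constructor arguments):

```python
import tensorflow as tf

class Encoder(tf.keras.layers.Layer):
    def __init__(self, vocab_size, units):
        super().__init__()
        # I reused `units` as the embedding output_dim, even though the
        # embedding size and the LSTM hidden size are independent choices:
        # an LSTM layer accepts inputs of any feature dimension.
        self.embedding = tf.keras.layers.Embedding(
            input_dim=vocab_size, output_dim=units
        )
        self.rnn = tf.keras.layers.LSTM(units, return_sequences=True)

    def call(self, context):
        # context: integer token ids, shape (batch, seq_len)
        x = self.embedding(context)  # (batch, seq_len, units)
        return self.rnn(x)           # (batch, seq_len, units)
```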


The significance is not about matching the output_dim of the Embedding layer to the LSTM units, but about using those layers (via self) inside def call: the sentence to translate is passed through its respective Embedding layer and the result is then passed through the RNN (check the second part of the graded function).
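
In other words, what matters is the data flow in call: a batch of token ids goes through the embedding layer, and that output goes through the RNN. A self-contained sketch with placeholder sizes (12000 and 256 are illustrative, not the assignment's values):

```python
import tensorflow as tf

embedding = tf.keras.layers.Embedding(input_dim=12000, output_dim=256)
rnn = tf.keras.layers.LSTM(256, return_sequences=True)

# Fake batch of 64 sentences, 14 token ids each
context = tf.random.uniform((64, 14), 0, 12000, tf.int32)
x = embedding(context)   # (64, 14, 256): one embedding vector per token
output = rnn(x)          # (64, 14, 256): one hidden state per token
print(output.shape)
```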

By definition, the input_dim of the Embedding layer is the size of the vocabulary and the output_dim is the size of the embedding. This information is missing from the notebook. What should be the value here?
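
In code terms, the question is what to put for the second argument (sizes below are placeholders):

```python
import tensorflow as tf

# input_dim: vocabulary size; output_dim: embedding size (the unspecified value)
embedding = tf.keras.layers.Embedding(input_dim=12000, output_dim=256)

# Each token id in [0, input_dim) is mapped to a vector of length output_dim.
print(embedding(tf.constant([3, 17, 42])).shape)  # (3, 256)
```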

Refer to page 47 of C4_W1.pdf
