In the last step of the assignment (Exercise 5, constructing the Emojify_V2 function), we were supposed to stack two LSTM layers as shown in the figure.
I have coded it correctly and passed all the tests, but I still have some unclear points about the structure.
- To confirm the input shape: the model takes as input an array of sentences of shape (max_len,), defined by input_shape. m should be the batch size, max_len = Tx, and the last dimension should be the embedding dimension. So if we use a 50-dimensional embedding, will the input be (m, Tx, 50)?
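To make my understanding concrete, here is a minimal sketch of what I think happens to the shapes (the vocab size and dimensions are made-up values, not the assignment's actual ones): Input(shape=(max_len,)) describes one example, the batch dimension m is implicit, and the Embedding layer is what produces the (m, Tx, 50) tensor.

```python
import numpy as np
import tensorflow as tf

m, max_len = 4, 10            # batch size and Tx (hypothetical values)
vocab_size, emb_dim = 1000, 50  # made-up vocab size; 50-dim embedding

# Input(shape=(max_len,)) is the per-example shape; m is implicit.
indices = tf.keras.Input(shape=(max_len,), dtype="int32")
# Embedding maps each word index to a 50-dim vector: (m, Tx) -> (m, Tx, 50).
emb = tf.keras.layers.Embedding(vocab_size, emb_dim)(indices)
model = tf.keras.Model(indices, emb)

x = np.random.randint(0, vocab_size, size=(m, max_len))
emb_out = model(x)
print(emb_out.shape)  # (4, 10, 50), i.e. (m, Tx, embedding dim)
```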
- The figure shows Tx LSTM units in the first and second layers, but in the code only a single LSTM layer with 128 units is defined. So my confusion is: does the Keras layer automatically unroll the single LSTM cell Tx times (since the timesteps share the same weights within a layer)? I didn't see any loop like `for t in range(Tx)` over the Tx inputs.
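To illustrate what I mean, here is a small standalone sketch (not the assignment code; the shapes are placeholder values): a single LSTM layer consumes the whole (m, Tx, 50) tensor at once, with return_sequences=True giving an output per timestep to feed the second layer, and no explicit Python loop over time anywhere.

```python
import numpy as np
import tensorflow as tf

m, Tx, emb_dim = 4, 10, 50  # placeholder batch size, seq length, embedding dim
x = np.random.randn(m, Tx, emb_dim).astype("float32")

# One layer object, applied to all Tx timesteps in a single call.
# return_sequences=True -> an output at every timestep: (m, Tx, 128).
lstm1 = tf.keras.layers.LSTM(128, return_sequences=True)
# return_sequences=False (the default) -> only the last timestep: (m, 128).
lstm2 = tf.keras.layers.LSTM(128)

a = lstm1(x)    # (4, 10, 128) - one hidden state per timestep
out = lstm2(a)  # (4, 128)     - last hidden state only
print(a.shape, out.shape)
```

As far as I can tell, the time-stepping happens inside the layer itself, which is why the notebook needs no `for t in range(Tx)` loop.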
Any clarification is appreciated, thanks in advance!