C5W2 Emoji_v3a LSTM Error 'expected ndim=3, found ndim=2'

I’ve been stuck on this for a few hours and feel like I’m missing something obvious.

When I run the unit test for my attempt at implementing Emojify_V2 I get the following error:

ValueError: Input 0 of layer lstm_25 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [4, 2]

If I print(embeddings.shape), it returns (4, 2).
I’m guessing I either shouldn’t be passing embeddings into my LSTM layer, or I need to expand its dimensions. Can anyone suggest which direction to go in, please?

“embeddings” is the data that you pass to the LSTM layer. It should be set off in its own set of parentheses after you’ve created the LSTM layer, i.e. X = LSTM(…)(embeddings).
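For reference, here is a minimal sketch of that pattern (assuming the TensorFlow Keras functional API; max_len and emb_dim are just illustrative values, not the assignment’s):

from tensorflow.keras.layers import Input, LSTM

max_len, emb_dim = 10, 50                          # illustrative shapes only
embeddings = Input(shape=(max_len, emb_dim))       # 3-D tensor: (batch, timesteps, features)
X = LSTM(128, return_sequences=True)(embeddings)   # an LSTM requires ndim=3 input
print(X.shape)                                     # (None, 10, 128)

The “expected ndim=3, found ndim=2” error means the tensor being fed to the LSTM is 2-D, so the problem is in how that tensor was built, not in the LSTM call itself.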

Thanks. Yes, sorry, I think I’m already doing that. I’m not sure how much code I’m allowed to post here, but I have:
X = LSTM(128, return_sequences=True)(embeddings)

Maybe there is an issue in your pretrained_embedding_layer().

Or maybe you don’t have the correct arguments for the Input(…) layer that creates “sentence_indices”.
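Roughly, the layers should chain like this (a sketch only, with hypothetical values; the Embedding(…) line is just a stand-in for whatever pretrained_embedding_layer() returns in the notebook):

from tensorflow.keras.layers import Input, Embedding, LSTM

max_len = 10                                                    # hypothetical sentence length
sentence_indices = Input(shape=(max_len,), dtype='int32')       # 2-D: (batch, max_len)
embedding_layer = Embedding(input_dim=400001, output_dim=50)    # stand-in for the pretrained embedding layer
embeddings = embedding_layer(sentence_indices)                  # 3-D: (batch, max_len, 50)
X = LSTM(128, return_sequences=True)(embeddings)                # now ndim=3, as the LSTM expects

If sentence_indices or the embedding layer isn’t built with shapes along these lines, the embeddings tensor comes out 2-D and the LSTM complains.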

Thanks for your help, I’ve got it working now. In my case, the problem was well before the line that raised the error: I had implemented the following line incorrectly:
sentence_indices = ...
and I guess it takes a few lines before it blows up.