Support with C4W1 assignment - NLP with attention models


Hello, I am experiencing an issue with the Decoder in the first assignment (C4W1).

In particular, the dimensions do not match.

In the self.embedding layer, I passed input_dim=vocab_size and output_dim=units.

The next layer, LSTM(units=units), is where the dimensionality issue appears.

Can somebody provide support to help me understand, please?


Hey there @Dario_Torregrossa

The LSTM layer expects an input with 3 dimensions: (batch_size, timesteps, features). However, it seems you are passing a 4-dimensional tensor: (64, 14, 256, 256).

Also, verify that the embedding layer outputs the correct shape.
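
For reference, here is a minimal standalone sketch (placeholder values, not the assignment's solution code) of the 3-D input an Embedding + LSTM stack expects:

```python
import tensorflow as tf

vocab_size, units = 12000, 256                      # placeholder values, not the notebook's
embedding = tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=units)
lstm = tf.keras.layers.LSTM(units=units, return_sequences=True, return_state=True)

ids = tf.zeros((64, 14), dtype=tf.int32)            # (batch_size, timesteps) of token ids
x = embedding(ids)                                  # (64, 14, 256): 3-D, what the LSTM expects
outputs, state_h, state_c = lstm(x)                 # outputs: (64, 14, 256)

# If the tensor fed to the LSTM were already 4-D, e.g. (64, 14, 256, 256),
# the call above would raise a shape error.
```

So a 4-D tensor like (64, 14, 256, 256) usually means the input to the embedding (or to the LSTM) already had an extra feature dimension.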


Your issue is that initial_state is being passed as state, whereas you are supposed to pass it as None, as per the instructions.
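
For illustration only (the layer and variable names here are hypothetical, not the notebook's), the difference is just which value is handed to the LSTM's initial_state argument:

```python
import tensorflow as tf

units = 256                                         # placeholder value
lstm = tf.keras.layers.LSTM(units, return_sequences=True, return_state=True)

x = tf.random.uniform((64, 14, units))              # embedded decoder input: (batch, timesteps, units)
outputs, h, c = lstm(x, initial_state=None)         # initial_state=None lets Keras create zero states
# outputs, h, c = lstm(x, initial_state=state)      # passing a previously saved `state` instead is
                                                    # what the instructions tell you not to do here
```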