How can a Dense layer accept a varying number of dimensions?

Hello, this is about NLP Course 3, Week 1, Assignment.

Adding words (“nice”) to the sample sentence, as in “Nice nice nice It’s such a nice day, think i’ll be taking Sid to Ramsgate fish and chips for lunch at Peter’s fish factory and then the beach maybe”, or directly giving np.array([583, 583, 583, 44, 313, 428, 349, 2, 2, 794, 3124, 120, 2, 794, 6442, 1625, 336]) to the model does not yield an error, even though the length is more than 15 (the padded length).

I see that the output of the mean layer is batch_size x input_seq_length.
How can the Dense layer accept a varying number of dimensions (input_seq_length)?

By the way, although all tests pass and validation accuracy is 99.25%, the model predicts “NEGATIVE” for the sample sentence below. The outputs are [[-0.27135152 -1.4369497]].

sentence = “It’s such a nice day, think i’ll be taking Sid to Ramsgate fish and chips for lunch at Peter’s fish factory and then the beach maybe”
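If I exponentiate those two outputs (assuming they come from a LogSoftmax layer, so they are log probabilities), I get:

```python
import numpy as np

log_probs = np.array([[-0.27135152, -1.4369497]])  # model output for the sentence
probs = np.exp(log_probs)                          # undo the log of LogSoftmax

print(probs)                              # ~[[0.762 0.238]]
print(probs.sum())                        # ~1.0, consistent with a softmax over 2 classes
print(int(np.argmax(probs, axis=1)[0]))  # 0 -> the class the model chose
```

So the model puts about 76% probability on class 0, which here shows up as the “NEGATIVE” prediction.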

Thank you in advance


Hi onertartan,

Here’s how I understand it.

The model does not yield an error for two reasons. First, the tokenizer (TweetTokenizer from nltk, used in process_tweet in utils.py) maps every word to an ID below vocab_size (unknown words included), and the Embedding layer, whose weights are vocab_size by d_feature, simply looks up one d_feature vector per token, so no matter how many words you add to the input, the lookup always succeeds. Second, the Mean layer averages over the sequence axis, so its output is batch_size x d_feature (not batch_size x input_seq_length), which means the Dense layer always receives a fixed number of features regardless of the input length.
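To make that shape flow concrete, here is a minimal numpy sketch of the Embedding → Mean → Dense pipeline (the sizes are made up for illustration, and the plain-numpy layers only stand in for the trax ones used in the assignment):

```python
import numpy as np

# Made-up sizes for illustration only; the assignment's real values may differ
vocab_size, d_feature = 9088, 256

rng = np.random.default_rng(0)
emb_table = rng.normal(size=(vocab_size, d_feature))  # Embedding weights
W = rng.normal(size=(d_feature, 2))                   # Dense weights, 2 classes
b = np.zeros(2)                                       # Dense bias

def forward(token_ids):
    x = emb_table[token_ids]   # per-token lookup -> (batch, seq_len, d_feature)
    x = x.mean(axis=1)         # Mean over sequence axis -> (batch, d_feature)
    return x @ W + b           # Dense -> (batch, 2); seq_len is gone

short = np.array([[583, 583, 44]])  # seq_len = 3
long_ = np.array([[583] * 17])      # seq_len = 17 (> padded length 15)
print(forward(short).shape)  # (1, 2)
print(forward(long_).shape)  # (1, 2)
```

Both calls produce the same output shape, which is exactly why feeding a 17-token input to a model trained with a padded length of 15 raises no error.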

As for your comment about the sample sentence prediction, I agree this is unexpected. A transformer, as will be discussed in the next course, should handle sentences like that better.