I am getting an error in the tokenizer.

I used
tokenizer = Tokenizer(oov_token=oov_token)
tokenizer.fit_on_texts(train_sentences)

train_sentences should contain strings, not integers. Please share the output of train_sentences[:5]
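
For context, fit_on_texts expects an iterable of strings. A minimal sanity check (a sketch with placeholder data; the real train_sentences comes from the assignment's data file) looks like:

```python
# Placeholder data standing in for the assignment's real sentences.
train_sentences = ["first example sentence", "second example sentence"]

# What the mentor is asking to see:
print(train_sentences[:5])

# fit_on_texts needs strings; integers here indicate a loading bug upstream.
assert all(isinstance(s, str) for s in train_sentences), "expected strings"
```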

How is this picture different from the previous one?

I got the same output using train_sentences[:5]

I asked for the output of print(train_sentences[:5])

Please click my name and message me your notebook (.ipynb format) as an attachment.

In parse_data_from_file, you are appending labels to sentences via sentences.append(labels). This is incorrect.
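
A sketch of how the function could look, assuming a CSV file with a header row and columns in the order (label, sentence) — the exact column layout is an assumption about the data file, not taken from the notebook:

```python
import csv
import tempfile

def parse_data_from_file(filename):
    """Sketch: read a CSV of (label, sentence) rows into two parallel lists."""
    sentences = []
    labels = []
    with open(filename, newline="") as csvfile:
        reader = csv.reader(csvfile)
        next(reader)  # skip the header row (assumed present)
        for row in reader:
            labels.append(row[0])      # first column: the label
            sentences.append(row[1])   # second column: the sentence text, not the label
    return sentences, labels

# Tiny demo with made-up data
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False, newline="") as f:
    f.write("category,text\nsport,match report\ntech,new gadget review\n")
    demo_path = f.name

sentences, labels = parse_data_from_file(demo_path)
print(sentences)  # ['match report', 'new gadget review']
print(labels)     # ['sport', 'tech']
```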

On a related note, you should not hard-code constants in train_val_split to calculate train_size; compute it from the number of rows and training_split instead.
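
That computation can be sketched as follows (the return order is an assumption; match whatever the starter code specifies):

```python
def train_val_split(sentences, labels, training_split):
    """Sketch: derive train_size from the row count, not a hard-coded constant."""
    train_size = int(len(sentences) * training_split)  # e.g. 10 rows * 0.8 -> 8

    train_sentences = sentences[:train_size]
    train_labels = labels[:train_size]
    validation_sentences = sentences[train_size:]
    validation_labels = labels[train_size:]

    return train_sentences, validation_sentences, train_labels, validation_labels

# Demo: 10 rows with a 0.8 split -> 8 training, 2 validation
s = [f"sentence {i}" for i in range(10)]
l = [f"label {i}" for i in range(10)]
train_s, val_s, train_l, val_l = train_val_split(s, l, 0.8)
print(len(train_s), len(val_s))  # 8 2
```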

Can you please explain in detail? I am unable to understand.

Sure. Each row has two features of interest to us: a label and a sentence. You’ve appended the label to the labels list. What should you append to sentences?

Can you tell me what I should append?

Please read the markdown before the exercise and the starter code for the exercise. Also, please become familiar with programming, specifically in Python, before moving forward.