Don't we need padding and truncating parameters for testing_sequences?

Hi,

Why don’t we pass the padding and truncating parameters for testing_sequences like we do for the training one?

# Generate and pad the training sequences
sequences = tokenizer.texts_to_sequences(training_sentences)
padded = pad_sequences(sequences, maxlen=max_length, truncating=trunc_type, padding=padding_type)

# Generate and pad the test sequences
testing_sequences = tokenizer.texts_to_sequences(testing_sentences)
testing_padded = pad_sequences(testing_sequences, maxlen=max_length)

Thanks in advance!

Hey,
The padding and truncating parameters should be exactly the same for both the training and testing sets, if I am not wrong.
Maybe the parameters passed here for padding and truncating are the default ones, and that is why they are skipped in the testing split.
Can you please confirm whether I am right, @balaji.ambresh?
Thanks and Regards,
Mayank Ghogale

You are right, @MayankGhogale. Truncating and padding must be the same for both the training and validation sentences.
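For reference, here is a sketch of what the testing cell would look like with matching settings, assuming max_length, trunc_type, and padding_type are the same variables already defined for the training split:

# Sketch only: pad the test sequences with the same settings as training.
testing_sequences = tokenizer.texts_to_sequences(testing_sentences)
testing_padded = pad_sequences(testing_sequences, maxlen=max_length,
                               truncating=trunc_type, padding=padding_type)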

@wokee. It seems you are referring to Week 2's C3_W2_Lab1_imdb.ipynb. Thanks for pointing this out.
In this assignment, trunc_type='post', while the default value for truncating in pad_sequences is 'pre'.
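To see why the omission matters, here is a minimal sketch (a toy sequence, not the lab's data) showing how the default truncating='pre' differs from truncating='post':

from tensorflow.keras.preprocessing.sequence import pad_sequences

sequences = [[1, 2, 3, 4, 5]]

# Default behaviour: truncating='pre' drops tokens from the front.
print(pad_sequences(sequences, maxlen=3))                     # [[3 4 5]]

# The lab's setting: truncating='post' drops tokens from the end.
print(pad_sequences(sequences, maxlen=3, truncating='post'))  # [[1 2 3]]

So without the explicit arguments, any test sequence longer than max_length would be truncated from the opposite end to the training sequences.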

@MayankGhogale Could you please file a ticket? Thanks.

Hi Balaji, that’s correct. Thanks, everyone, for clearing up my doubts.

I will surely file a ticket, @balaji.ambresh sir.

Hey guys, I noticed this issue too and was about to post the same question on the forums when I came across this post. It looks like it is still an issue at the time of writing this reply. Did a ticket get raised?

@WombleNumber1 A ticket has been filed. The staff should fix this soon.