Now, addressing your code issue: I don't know if you have referred to the ungraded lab, which tells you to use the same tf.keras.layers.TextVectorization; this code was already given at the beginning of the notebook.
Now, coming to the other parameters of this vectorizer: the assignment instructions tell you to use standardize_func, set the vocabulary size with vocab_size, and truncate the output sequences to MAX_LENGTH.
Also, don't forget to use the custom function standardize_func to standardize each sentence in the vectorizer. You can do this by passing the function to the standardize parameter of TextVectorization.
Don't add any parameters to the vectorizer other than those mentioned in the instructions.
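For illustration only, here is a minimal sketch of what the layer construction could look like. VOCAB_SIZE and MAX_LENGTH are placeholder values here, and this standardize_func is just a stand-in for the one already defined in your notebook:

```python
import tensorflow as tf

# Placeholder values -- use the constants defined in your own notebook.
VOCAB_SIZE = 1000
MAX_LENGTH = 120

def standardize_func(sentence):
    # Placeholder body; the assignment provides its own standardization logic.
    return tf.strings.lower(sentence)

# Only the parameters the instructions ask for -- nothing else.
vectorizer = tf.keras.layers.TextVectorization(
    max_tokens=VOCAB_SIZE,              # vocabulary size
    standardize=standardize_func,       # custom standardization function
    output_sequence_length=MAX_LENGTH,  # truncate/pad outputs to MAX_LENGTH
)
```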
Then this vectorizer is used to fit the tokenizer to the training sentences in the next line of code (remember, the training sentences are referred to as train_sentences, as per the arguments given in the GRADED FUNCTION: fit_vectorizer).
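Continuing the sketch above (and assuming train_sentences is the argument name in your graded function), fitting the vectorizer is just a call to adapt:

```python
# Tiny example corpus only to show the call; your train_sentences come
# from the assignment's dataset.
train_sentences = ["I love my dog", "I love my cat", "Do you think my dog is amazing?"]

# Fit the vectorizer to the training sentences (this is what "fitting the
# tokenizer" means for a TextVectorization layer).
vectorizer.adapt(train_sentences)

# Quick check: the vectorizer now maps sentences to padded integer sequences.
print(vectorizer(train_sentences))
```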
An extra hint: referring back to the ungraded labs might show you where you need to pay attention.
I suspect you have included a parameter set to None in the vectorizer, and secondly, the step that fits the tokenizer to train_sentences might need another look.
Let me know if you are still getting any error.
Regards
DP