C3W2 - IOPub issue on fit_label_encoder function

Hello, I have a problem with this C3W2 assignment. The issue is in the graded fit_label_encoder function. First, I combined the two datasets using concatenate. Then I used the StringLookup layer, setting the number of OOV tokens to 0. Finally, I fit the layer to the combined data I created at the beginning. When I ran the next code cell to check, an IOPub error came up. Strangely, when I ran the unit test in the following cell, it passed and there seemed to be no issues.
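For context, the generic pattern I followed looks roughly like this (a minimal sketch with made-up example labels, not the graded solution code):

```python
import tensorflow as tf

# Hypothetical stand-ins for the two label datasets in the assignment.
train_labels = tf.data.Dataset.from_tensor_slices(["pos", "neg", "pos"])
validation_labels = tf.data.Dataset.from_tensor_slices(["neg", "pos"])

# Combine the two datasets, then fit a StringLookup layer with no OOV slots.
combined = train_labels.concatenate(validation_labels)

label_encoder = tf.keras.layers.StringLookup(num_oov_indices=0)
label_encoder.adapt(combined)
```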


P.S. The image above isn't the graded function's code. It's just proof that I got the IOPub error even though I passed the unit test.


hi @xvalnsz

First of all, thank you for following the community guidelines: mentors will ask for code to be shared via DM only if they want to check it for a probable error. Since you are using the Discourse community for the first time, I should also give you a heads-up not to share any graded-cell code here, as that is against the community guidelines.

Now, to your error.

An "IOPub data rate exceeded" error points you to check whether your code correctly calls the previous graded cells, even though you have passed the unit tests.

Kindly check that the dataset code calls the right functions: text_vectorizer for the text and label_encoder for the labels. (At this step, the most common learner mistake is missing the extra tuple when mapping the dataset with a lambda, so make sure the lambda returns a tuple containing the text and labels.)
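As a minimal sketch of that mapping step (the layer names and toy data here are assumptions standing in for what the earlier graded cells built):

```python
import tensorflow as tf

# Hypothetical stand-ins for the layers fitted in earlier graded cells.
text_vectorizer = tf.keras.layers.TextVectorization(max_tokens=100)
label_encoder = tf.keras.layers.StringLookup(num_oov_indices=0)

texts = ["good movie", "bad movie"]
labels = ["pos", "neg"]
text_vectorizer.adapt(texts)
label_encoder.adapt(labels)

dataset = tf.data.Dataset.from_tensor_slices((texts, labels))

# Note the outer parentheses: the lambda must return a (text, label) tuple.
mapped = dataset.batch(2).map(
    lambda text, label: (text_vectorizer(text), label_encoder(label))
)
```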

If that was done as stated, the next checkpoint is to verify that the fit_vectorizer and fit_label_encoder cells were not hard-coded (for example, splitting a single step across multiple cells, or using global variables instead of the local variables or call arguments assigned to that particular graded cell).

If the above was done correctly, then the last checkpoint is the train_val_dataset code. You can find hints in the ungraded labs, but to explain it briefly: first determine the train size and validation size by applying the length function to the data argument, making sure to cast the result with int.

Next, use the train_size and validation_size to determine train_texts, validation_texts, train_labels, and validation_labels.
Then create the training_dataset and validation_dataset by applying tf.data.Dataset.from_tensor_slices to (train_texts, train_labels) and (validation_texts, validation_labels).
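The steps above can be sketched roughly as follows (the toy data and the TRAIN_SPLIT constant are assumptions for illustration, not the assignment's actual values):

```python
import tensorflow as tf

# Hypothetical toy data standing in for the assignment's arguments.
texts = ["a", "b", "c", "d", "e", "f", "g", "h", "i", "j"]
labels = ["x", "y", "x", "y", "x", "y", "x", "y", "x", "y"]
TRAIN_SPLIT = 0.8

# Cast the size to int so it can be used as a slice bound.
train_size = int(len(texts) * TRAIN_SPLIT)

train_texts, validation_texts = texts[:train_size], texts[train_size:]
train_labels, validation_labels = labels[:train_size], labels[train_size:]

# Features (texts) come first in the tuple, then labels.
train_dataset = tf.data.Dataset.from_tensor_slices((train_texts, train_labels))
validation_dataset = tf.data.Dataset.from_tensor_slices(
    (validation_texts, validation_labels)
)
```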

Let me know if this helps you debug.

Regards
DP

Hi @Deepti_Prasad
Thank you, and I really appreciate you answering my question and resolving my issue with a complete and clear explanation. Just so you know, this answer was very helpful to me. I finally found my mistake, which was in the train_val_dataset code: the arguments I passed to tf.data.Dataset.from_tensor_slices were reversed between the train labels and train texts. I realized that the features (texts) should come first in the tuple.
