Improper results with a pretrained model

I used a MobileNetV2 pretrained model and used Keras to preprocess the images with a batch size of 32, as shown below:
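(The code screenshot did not carry over; a minimal sketch of what Keras preprocessing with a batch size of 32 might have looked like, assuming `image_dataset_from_directory` and an illustrative data path:)

```python
import tensorflow as tf

# "data/train" is a hypothetical directory with one subfolder per class.
train_dataset = tf.keras.utils.image_dataset_from_directory(
    "data/train",
    image_size=(160, 160),  # an input size MobileNetV2 supports
    batch_size=32,
    shuffle=True,
)
# Note: labels produced this way are integer class indices, not one-hot vectors.
```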


The results were good in this case and kept improving, but when I preprocessed the data using NumPy and one-hot encoded y_train, I got very bad results that did not improve at all. The accuracy and the loss remained constant even after tuning. What could be the reason behind this? Is it due to the dataset, or is something wrong in the code?

What have you done to check/confirm that the behavior of your NumPy code is equivalent to the TF preprocessing code? The evidence suggests that they are different, so that seems like the thing to investigate. E.g., are you sure you normalized the pixel data to be in the range [-1, 1]?
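For reference, MobileNetV2 expects pixels in [-1, 1]. One quick way to verify a NumPy normalization against the official helper (a sketch with stand-in data, not the poster's code):

```python
import numpy as np
import tensorflow as tf

# Stand-in pixel data in the usual [0, 255] range.
x_raw = np.random.randint(0, 256, size=(4, 160, 160, 3)).astype("float32")

# Manual scaling from [0, 255] to [-1, 1]...
x_manual = x_raw / 127.5 - 1.0

# ...which should agree with the helper for this architecture
# (copy() because preprocess_input can modify float arrays in place).
x_keras = tf.keras.applications.mobilenet_v2.preprocess_input(x_raw.copy())

print(x_manual.min(), x_manual.max())   # should lie within [-1, 1]
print(np.allclose(x_manual, x_keras))   # True if the normalization matches
```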

I checked the dataset I made along with the labels, and they were fine. The only difference I see is that I one-hot encoded y_train, and I guess the Keras preprocessing doesn't do that… so could that be the reason for such results?

I don't know what the Keras preprocessing does to the labels. Why don't you print out some of the values and see? Note that you can use either the "one hot" or the "categorical" (integer index) representation for labels, as long as you are also careful to select the matching version of the cross entropy loss function: CategoricalCrossentropy expects one-hot labels, while SparseCategoricalCrossentropy expects integer class indices. Check the TF docs for both.
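A small illustration of the pairing (toy tensors only):

```python
import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1]])  # raw model outputs for 3 classes

# Integer class index -> SparseCategoricalCrossentropy
sparse_label = tf.constant([0])
sparse_loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
print(sparse_loss(sparse_label, logits).numpy())

# One-hot label for the same class -> CategoricalCrossentropy
one_hot_label = tf.constant([[1.0, 0.0, 0.0]])
cat_loss = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
print(cat_loss(one_hot_label, logits).numpy())  # same value as above
```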


The code I used is given above, in which the inputs are NumPy arrays. Using these arrays gives poor results compared to the dataset created by the Keras preprocessing… Won't they get normalized by the code below?
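(That snippet also did not carry over; presumably it baked the normalization into the model, roughly along these lines, with the head layers purely illustrative:)

```python
import tensorflow as tf

# If preprocess_input is part of the model graph like this, then raw-pixel
# NumPy arrays passed to fit() are normalized on the way in.
inputs = tf.keras.Input(shape=(160, 160, 3))
x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet"
)
base.trainable = False               # freeze the pretrained weights
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1)(x)  # illustrative binary head
model = tf.keras.Model(inputs, outputs)
```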

The fit method can accept both NumPy arrays and tensors, so what might be the problem, sir?
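Following the earlier suggestion to check equivalence, one way to narrow it down is to pull one batch from each pipeline and compare pixel ranges and label shapes directly (a sketch; `train_dataset`, `x_train`, and `y_train` are the hypothetical names from above):

```python
# One batch from the Keras pipeline.
for images, labels in train_dataset.take(1):
    print("keras :", images.numpy().min(), images.numpy().max(), labels.shape)

# The first 32 examples from the NumPy pipeline.
print("numpy :", x_train[:32].min(), x_train[:32].max(), y_train[:32].shape)
# If the ranges or label shapes differ, the two pipelines are not equivalent.
```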