Hi all,
I am training a model on my own dataset with 5 classes, using generators:
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(**augumentation2)
train_generator = train_datagen.flow_from_directory(directory=train_dir,
                                                    batch_size=10,
                                                    class_mode='categorical',
                                                    target_size=(300, 300))

val_datagen = ImageDataGenerator(rescale=1 / 255)
val_generator = val_datagen.flow_from_directory(directory=val_dir,
                                                batch_size=10,
                                                class_mode='categorical',
                                                target_size=(300, 300))

test_datagen = ImageDataGenerator(rescale=1 / 255)
test_generator = test_datagen.flow_from_directory(directory=test_dir,
                                                  batch_size=5,
                                                  class_mode='categorical',
                                                  target_size=(300, 300),
                                                  shuffle=False)
Found 832 images belonging to 5 classes.
Found 220 images belonging to 5 classes.
Found 60 images belonging to 5 classes.
{'para': 0, 'spino': 1, 'stego': 2, 'trex': 3, 'velo': 4}
I am training it on top of several different pretrained models:
import tensorflow as tf

mobile_net_v2 = tf.keras.applications.MobileNetV2(input_shape=(300, 300, 3),
                                                  include_top=False,
                                                  weights='imagenet')
inception_resnet_v2 = tf.keras.applications.InceptionResNetV2(input_shape=(300, 300, 3),
                                                              include_top=False,
                                                              weights='imagenet')
efficient_net_b7 = tf.keras.applications.EfficientNetB7(input_shape=(300, 300, 3),
                                                        include_top=False,
                                                        weights='imagenet')
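The final `model` is built by putting a small classification head on top of whichever base model is selected; a simplified sketch of that step (the exact head layers in my real code may differ) looks like this:

base_model = mobile_net_v2  # or inception_resnet_v2 / efficient_net_b7

inputs = tf.keras.Input(shape=(300, 300, 3))
x = base_model(inputs)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(5)(x)  # 5 classes, no softmax since the loss uses from_logits=True
model = tf.keras.Model(inputs, outputs)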
from datetime import datetime
from tensorflow import keras

model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.0001),
    loss=keras.losses.CategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'],
)

epochs = 10
model.fit(train_generator, epochs=epochs, validation_data=val_generator)
model.save(f'/content/drive/MyDrive/saved_model/{base_model.name}/{datetime.now()}',
           save_format='h5')
Epoch 1/10
84/84 [==============================] - 134s 1s/step - loss: 1.2230 - accuracy: 0.5745 - val_loss: 0.8257 - val_accuracy: 0.8273
Epoch 2/10
84/84 [==============================] - 86s 1s/step - loss: 0.5073 - accuracy: 0.8714 - val_loss: 0.3270 - val_accuracy: 0.9273
Epoch 3/10
84/84 [==============================] - 86s 1s/step - loss: 0.2705 - accuracy: 0.9267 - val_loss: 0.2395 - val_accuracy: 0.9318
Epoch 4/10
84/84 [==============================] - 86s 1s/step - loss: 0.1899 - accuracy: 0.9507 - val_loss: 0.2366 - val_accuracy: 0.9273
Epoch 5/10
84/84 [==============================] - 85s 1s/step - loss: 0.1219 - accuracy: 0.9724 - val_loss: 0.1830 - val_accuracy: 0.9455
Epoch 6/10
84/84 [==============================] - 85s 1s/step - loss: 0.1456 - accuracy: 0.9543 - val_loss: 0.1219 - val_accuracy: 0.9636
Epoch 7/10
84/84 [==============================] - 85s 1s/step - loss: 0.1193 - accuracy: 0.9688 - val_loss: 0.1238 - val_accuracy: 0.9500
Epoch 8/10
84/84 [==============================] - 85s 1s/step - loss: 0.1142 - accuracy: 0.9675 - val_loss: 0.0927 - val_accuracy: 0.9727
Epoch 9/10
84/84 [==============================] - 85s 1s/step - loss: 0.0906 - accuracy: 0.9760 - val_loss: 0.0912 - val_accuracy: 0.9591
Epoch 10/10
84/84 [==============================] - 85s 1s/step - loss: 0.0924 - accuracy: 0.9760 - val_loss: 0.0969 - val_accuracy:
The training process looks fine: the loss is decreasing and the accuracy is OK (I am probably overfitting a bit).

My problem is with predictions from the saved model: on the test set the accuracy is around 40%, but on the validation and training sets it drops to about 20%, even though during training the model reported over 90% accuracy. What am I missing? How can predictions on data the model has already seen be worse than on unseen data? It is the same for all the models. Thanks for your help, Miro
files = test_generator.classes
predictions = model_trained.predict(test_generator)
accuracy 0.4666666666666667

files = val_generator.classes
predictions = model_trained.predict(val_generator)
22/22 [==============================] - 15s 678ms/step
accuracy 0.18636363636363637

files = train_generator.classes
predictions = model_trained.predict(train_generator)
84/84 [==============================] - 70s 827ms/step
accuracy 0.20673076923076922
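The accuracy values above are computed by comparing the argmax of the predictions against the labels reported by the generator, roughly like this (simplified sketch of the evaluation code):

import numpy as np

files = test_generator.classes                       # labels in directory order
predictions = model_trained.predict(test_generator)  # one row of scores per image
predicted_classes = np.argmax(predictions, axis=1)   # pick the highest-scoring class
accuracy = np.mean(predicted_classes == files)
print('accuracy', accuracy)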