Thank you for the tip! I tried it; my model now looks like this:
```
Layer (type)                     Output Shape              Param #
=================================================================
conv2d_11 (Conv2D)               (None, 148, 148, 128)     3584
max_pooling2d_9 (MaxPooling2D)   (None, 74, 74, 128)       0
conv2d_12 (Conv2D)               (None, 72, 72, 256)       295168
max_pooling2d_10 (MaxPooling2D)  (None, 36, 36, 256)       0
conv2d_13 (Conv2D)               (None, 34, 34, 512)       1180160
max_pooling2d_11 (MaxPooling2D)  (None, 17, 17, 512)       0
conv2d_14 (Conv2D)               (None, 15, 15, 512)       2359808
max_pooling2d_12 (MaxPooling2D)  (None, 7, 7, 512)         0
conv2d_15 (Conv2D)               (None, 5, 5, 512)         2359808
flatten_2 (Flatten)              (None, 12800)             0
dense_14 (Dense)                 (None, 1024)              13108224
dense_15 (Dense)                 (None, 1024)              1049600
dense_16 (Dense)                 (None, 1024)              1049600
dense_17 (Dense)                 (None, 1024)              1049600
dense_18 (Dense)                 (None, 1024)              1049600
dense_19 (Dense)                 (None, 1024)              1049600
dense_20 (Dense)                 (None, 1)                 1025
```
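As a sanity check, the parameter counts in that summary line up with the standard Conv2D/Dense formulas (assuming 3×3 kernels and a 150×150×3 input, which is what the 148×148 first-layer output implies):

```python
# Conv2D params = kh * kw * c_in * c_out + c_out (3x3 kernels assumed);
# Dense params  = n_in * n_out + n_out.

def conv_params(c_in, c_out, k=3):
    return k * k * c_in * c_out + c_out

def dense_params(n_in, n_out):
    return n_in * n_out + n_out

assert conv_params(3, 128) == 3584            # conv2d_11 (RGB input)
assert conv_params(128, 256) == 295168        # conv2d_12
assert conv_params(256, 512) == 1180160       # conv2d_13
assert conv_params(512, 512) == 2359808       # conv2d_14 and conv2d_15
assert dense_params(5 * 5 * 512, 1024) == 13108224  # dense_14 after Flatten
assert dense_params(1024, 1024) == 1049600          # dense_15 .. dense_19
assert dense_params(1024, 1) == 1025                # dense_20 (sigmoid output)
```

So the sizes themselves are consistent; the single output unit matches the binary class mode below.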
I also use the following data augmentation:

```python
train_datagen = ImageDataGenerator(rescale=1./255,
                                   rotation_range=20,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   shear_range=0.1,
                                   zoom_range=0.2,
                                   horizontal_flip=True,
                                   fill_mode='nearest')

train_generator = train_datagen.flow_from_directory(directory=TRAINING_DIR,
                                                    batch_size=4,
                                                    class_mode='binary',
                                                    target_size=(150, 150))
```
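(Side note in case it matters: `rescale=1./255` just multiplies every pixel, so uint8 values in [0, 255] land in [0, 1] before reaching the network — a quick illustration:)

```python
# rescale=1./255 is applied per pixel: uint8 [0, 255] -> float [0, 1]
pixels = [0, 127, 255]
scaled = [p * (1.0 / 255) for p in pixels]
print(scaled)  # [0.0, ~0.498, 1.0]
```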
The training results still seem to be lacking… Every epoch takes forever; I ran out of Colab credits yesterday, so I'm trying again today.
```
Epoch 1/15
5625/5625 [==============================] - 286s 50ms/step - loss: 0.6941 - accuracy: 0.4974 - val_loss: 0.6934 - val_accuracy: 0.5000
Epoch 2/15
5625/5625 [==============================] - 282s 50ms/step - loss: 0.6933 - accuracy: 0.4996 - val_loss: 0.6933 - val_accuracy: 0.5000
Epoch 3/15
5625/5625 [==============================] - 285s 51ms/step - loss: 0.6932 - accuracy: 0.5076 - val_loss: 0.6938 - val_accuracy: 0.5000
Epoch 4/15
5625/5625 [==============================] - 289s 51ms/step - loss: 0.6780 - accuracy: 0.5861 - val_loss: 0.6638 - val_accuracy: 0.6140
Epoch 5/15
5625/5625 [==============================] - 286s 51ms/step - loss: 0.6594 - accuracy: 0.6443 - val_loss: 0.6549 - val_accuracy: 0.5564
Epoch 6/15
5625/5625 [==============================] - 278s 49ms/step - loss: 0.6414 - accuracy: 0.6717 - val_loss: 0.5643 - val_accuracy: 0.7296
Epoch 7/15
5625/5625 [==============================] - 284s 50ms/step - loss: 0.6244 - accuracy: 0.6902 - val_loss: 1.0983 - val_accuracy: 0.7240
Epoch 8/15
5625/5625 [==============================] - 284s 51ms/step - loss: 0.6083 - accuracy: 0.7037 - val_loss: 0.5676 - val_accuracy: 0.7384
Epoch 9/15
5625/5625 [==============================] - 282s 50ms/step - loss: 0.5976 - accuracy: 0.7140 - val_loss: 0.5288 - val_accuracy: 0.7504
Epoch 10/15
5625/5625 [==============================] - 282s 50ms/step - loss: 0.5920 - accuracy: 0.7188 - val_loss: 0.5927 - val_accuracy: 0.7552
Epoch 11/15
5625/5625 [==============================] - 281s 50ms/step - loss: 0.5923 - accuracy: 0.7297 - val_loss: 0.5593 - val_accuracy: 0.7548
Epoch 12/15
5625/5625 [==============================] - 286s 51ms/step - loss: 0.5929 - accuracy: 0.7339 - val_loss: 0.5858 - val_accuracy: 0.6976
Epoch 13/15
5625/5625 [==============================] - 296s 53ms/step - loss: 0.5991 - accuracy: 0.7272 - val_loss: 0.5166 - val_accuracy: 0.7588
Epoch 14/15
5625/5625 [==============================] - 278s 49ms/step - loss: 0.6027 - accuracy: 0.7287 - val_loss: 0.6152 - val_accuracy: 0.6896
Epoch 15/15
5625/5625 [==============================] - 278s 49ms/step - loss: 0.6036 - accuracy: 0.7215 - val_loss: 0.5121 - val_accuracy: 0.7628
```
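One thing I noticed reading the log: the loss in the first three epochs sits at ≈0.693, which is exactly ln 2, i.e. chance level for binary cross-entropy — the model is guessing 50/50 until epoch 4. The step and timing numbers also check out (a quick back-of-the-envelope using the ~50 ms/step from the log):

```python
import math

# Binary cross-entropy of a constant 0.5 prediction is ln(2):
chance_loss = math.log(2)
print(round(chance_loss, 4))  # 0.6931 -- matches epochs 1-3 above

# Dataset size implied by the log: 5625 steps/epoch at batch_size=4
print(5625 * 4)  # 22500 images seen per epoch

# Epoch time at ~50 ms/step:
print(5625 * 0.050)  # ~281 s, matching the ~280 s/epoch in the log
```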
So far I can’t see how to get from 50% accuracy to >80% accuracy in fewer than 15 epochs… Given the network structure I have, could you offer some more hints I could try?