Greetings!!
I am trying to build an image sentiment classifier (starting with just two classes, happy vs. sad) using transfer learning, with InceptionResNetV2 as the base model. My code is below; the approach follows the transfer-learning recipe in Deep Learning with Python by François Chollet.
I chose InceptionResNetV2 because it had the best validation accuracy for image classification in the comparison by Marie Stephen Leo (How to Choose the Best Keras Pre-Trained Model for Image Classification | by Marie Stephen Leo | Towards Data Science).
However, the validation accuracy for sentiment classification is stuck around 81%. I have tried several things: adding dropout, training for more epochs, varying the size of the last hidden Dense layer between 64 and 512 units, curating the training/validation dataset, etc.
Please advise how I can solve this. Thank you!
import tensorflow as tf

# Augmentation pipeline (assumed to exist elsewhere in the original script;
# a typical definition is shown here for completeness)
data_augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomFlip('horizontal'),
    tf.keras.layers.RandomRotation(0.1),
])

conv_base = tf.keras.applications.InceptionResNetV2(
    include_top=False,
    weights='imagenet',
    input_shape=(IMAGE_HEIGHT, IMAGE_WIDTH, 3),
)
conv_base.trainable = False  # freeze the pretrained base for feature extraction

inputs = tf.keras.Input(shape=(IMAGE_HEIGHT, IMAGE_WIDTH, 3))
x = data_augmentation(inputs)
x = tf.keras.applications.inception_resnet_v2.preprocess_input(x)
x = conv_base(x)
x = tf.keras.layers.Flatten()(x)
# Note: the original Dense(64) had no activation, making it purely linear;
# a ReLU gives the head actual nonlinear capacity
x = tf.keras.layers.Dense(64, activation='relu')(x)
x = tf.keras.layers.Dropout(0.5)(x)
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(x)  # binary happy/sad output
model = tf.keras.Model(inputs, outputs)
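One thing I have not tried yet, which is the second phase of Chollet's own recipe: after training the head with the base frozen, unfreeze only the top layers of the base and recompile with a much lower learning rate. Below is a minimal sketch of that unfreezing pattern, using a small stand-in Sequential model in place of the real InceptionResNetV2 base (the layer names here are made up for illustration):

```python
import tensorflow as tf

# Stand-in for the pretrained conv_base from the post above
conv_base = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, name='block1_conv'),
    tf.keras.layers.Conv2D(8, 3, name='block2_conv'),
    tf.keras.layers.Conv2D(8, 3, name='block3_conv'),
], name='conv_base')

# Fine-tuning phase: unfreeze the base as a whole, then re-freeze
# everything except the topmost layer(s)
conv_base.trainable = True
for layer in conv_base.layers[:-1]:
    layer.trainable = False

trainable = [l.name for l in conv_base.layers if l.trainable]
print(trainable)  # only the topmost layer remains trainable
```

After changing `trainable` flags the model must be recompiled, typically with a small learning rate (e.g. `tf.keras.optimizers.Adam(1e-5)`) so the pretrained weights are only nudged rather than overwritten.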