Validation and training accuracy just can't reach 80%

First, fix your split_data function; it's buggy. See this thread for a hint on when to compute the split size.
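In case it helps, here is a minimal sketch of a split_data function that follows that hint: the split size is computed only after zero-length files have been filtered out. The signature, argument names, and the copy-based approach are assumptions for illustration; adapt it to whatever your assignment actually expects.

```python
import os
import random
import shutil

def split_data(source_dir, training_dir, validation_dir, split_fraction):
    """Copy files from source_dir into training and validation directories.

    Key point: compute the split size AFTER discarding zero-length files,
    so empty images never skew the split.
    """
    # Keep only non-empty files.
    files = [f for f in os.listdir(source_dir)
             if os.path.getsize(os.path.join(source_dir, f)) > 0]

    random.shuffle(files)

    # Split size comes from the *filtered* list, not the raw directory listing.
    split_index = int(len(files) * split_fraction)
    train_files = files[:split_index]
    val_files = files[split_index:]

    for f in train_files:
        shutil.copyfile(os.path.join(source_dir, f),
                        os.path.join(training_dir, f))
    for f in val_files:
        shutil.copyfile(os.path.join(source_dir, f),
                        os.path.join(validation_dir, f))
```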

When training accuracy is way higher than validation accuracy, overfitting is at play. Here are a few ways to tackle it:

  1. Data augmentation: the transformations you apply to the training data. When validation accuracy is way below training accuracy, it helps to expose the network to a wider distribution of the training data (see the model sketch after this list).
  2. Adding tf.keras.layers.Dropout layers.
  3. Tuning the learning rate (hint: reduce it); see the compile sketch after this list.
  4. Tuning the batch size.
  5. Changing the NN architecture.
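For points 1 and 2, here is a minimal sketch of how augmentation and Dropout can be wired into a Keras model using preprocessing layers. The input size, convolutional stack, and binary sigmoid output are assumptions for illustration, not your actual architecture.

```python
import tensorflow as tf

IMG_SIZE = (150, 150)  # hypothetical input size; adjust to your dataset

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=IMG_SIZE + (3,)),
    # Augmentation layers: expose the network to a wider distribution of the training data.
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    # Dropout regularizes the dense layer and reduces overfitting.
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
```

Note that the augmentation layers are only active during training; at inference time they pass images through unchanged.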
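For points 3 and 4, continuing the sketch above, a lower learning rate and the batch size can be set like this. The directory paths, epoch count, and batch size of 32 are illustrative values, not ones taken from your setup.

```python
# Adam's default learning rate is 1e-3; dropping to 1e-4 often stabilizes training.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

# Batch size is set when the datasets are built (paths here are placeholders).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", label_mode="binary", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/validation", label_mode="binary", image_size=IMG_SIZE, batch_size=32)

history = model.fit(train_ds, validation_data=val_ds, epochs=20)
```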

Most of these are covered in Deep Learning Specialization courses 2 and 3.
