I have been experimenting with the metrics parameter passed to the compile function. I have not been able to pass any metric other than accuracy and make it work for both the Fashion-MNIST and MNIST datasets. I tried
'loss', 'sparse_categorical_accuracy', 'precision', 'recall',
tf.keras.metrics.Precision(name='precision'),
tf.keras.metrics.Recall(name='recall'),
tf.keras.metrics.Precision(), tf.keras.metrics.Recall()
Is there any reason for this, or am I not writing it properly?
These other metrics (such as precision and recall and the like) are typically used on highly skewed (i.e. imbalanced) datasets - such as where you have maybe 1% of one class and 99% of the other. They’re used to compute a metric like the F1 score.
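For reference, the F1 score is the harmonic mean of precision and recall; a minimal sketch of that computation, using made-up precision/recall values rather than anything from an actual MNIST model:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical skewed-data scenario: high precision but low recall
# drags F1 well below the arithmetic mean of the two.
print(f1_score(0.90, 0.30))  # 0.45
```

Because it is a harmonic mean, F1 is only high when precision and recall are both high, which is why it is preferred over plain accuracy on imbalanced data.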
Right, that is why I was experimenting with passing them: in a different problem, such as one with skewed data, accuracy and loss are not good indicators of model performance. So I am wondering why they don’t work.
Is there also an issue here with the fact that MNIST is a multi-class dataset?
Normally, when I see precision and recall, there is a single class. Precision tells me the frequency with which the classifier is correct when it claims to have spotted a particular class (true positives / (true positives + false positives)). Recall tells me what fraction of the samples of a particular class are identified by the classifier (i.e. true positives / (true positives + false negatives)).
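To make that one-vs-rest view concrete for a multi-class problem, here is a small NumPy sketch that computes per-class precision and recall from integer labels (the label arrays are made-up examples, not real MNIST predictions):

```python
import numpy as np

def precision_recall(y_true, y_pred, class_id):
    """One-vs-rest precision and recall for a single class."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == class_id) & (y_true == class_id))  # true positives
    fp = np.sum((y_pred == class_id) & (y_true != class_id))  # false positives
    fn = np.sum((y_pred != class_id) & (y_true == class_id))  # false negatives
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
p, r = precision_recall(y_true, y_pred, class_id=1)
print(p, r)  # 0.666..., 1.0: one of three "1" predictions is wrong, but both true 1s are found
```

This mirrors what `tf.keras.metrics.Precision(class_id=...)` does for a single class; to cover all ten MNIST classes you would either compute this per class or use a macro/micro average.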