Call_backs week 2

I have been experimenting with the metrics parameter passed to the compile function. I have not been able to pass any metric other than accuracy and make it work for both the Fashion MNIST dataset and MNIST. I tried:
'loss', 'sparse_categorical_accuracy', 'precision', 'recall',
tf.keras.metrics.Precision(name='precision'),
tf.keras.metrics.Recall(name='recall'),
tf.keras.metrics.Precision(), tf.keras.metrics.Recall()

Is there any reason for this, or am I not writing it properly?

model.compile(optimizer=tf.optimizers.Adam(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

These other metrics (precision, recall, and the like) are typically only used on highly skewed (i.e. class-imbalanced) data sets, such as where you have maybe 1% of one class and 99% of the other. They're used to compute a metric like the F1 score.
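For context, the F1 score mentioned here is just the harmonic mean of precision and recall; a tiny sketch with made-up, purely illustrative values:

```python
# Purely illustrative values; F1 is the harmonic mean of precision and recall
precision, recall = 0.9, 0.6
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.3f}")  # 0.720
```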

Right, that is why I was experimenting with passing them: in a different problem, such as one with skewed data, accuracy and loss are not good indicators of model performance. So I am wondering why they don't work.

When using metrics like Precision and Recall, the true labels need to be one-hot encoded. Please use the line below to do so:

y_train = tf.keras.utils.to_categorical(y_train, num_classes=tf.unique(y_train).y.shape[0])

Don’t forget to update the loss function before fitting the data.
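Putting the pieces together, here is a minimal sketch of the full workflow this suggestion implies, assuming plain MNIST loaded via tf.keras.datasets and a simple dense model (the architecture is just a placeholder, not from the posts above):

```python
import tensorflow as tf

# Load MNIST and scale pixels to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# One-hot encode the integer labels so Precision/Recall get the shape they expect
y_train = tf.keras.utils.to_categorical(y_train, num_classes=10)
y_test = tf.keras.utils.to_categorical(y_test, num_classes=10)

# Placeholder architecture, just to make the example runnable
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Loss switched from sparse_categorical_crossentropy to categorical_crossentropy
# to match the one-hot labels
model.compile(optimizer=tf.optimizers.Adam(),
              loss='categorical_crossentropy',
              metrics=['accuracy',
                       tf.keras.metrics.Precision(name='precision'),
                       tf.keras.metrics.Recall(name='recall')])

model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```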


Is there also an issue here with the fact that MNIST is a multi-class dataset?

Normally, when I see precision and recall, there is a single class. Precision tells me the frequency with which the classifier is correct when it claims to have spotted a particular class (i.e. true positives / (true positives + false positives)). Recall tells me what fraction of the samples of a particular class are identified by the classifier (i.e. true positives / (true positives + false negatives)).

So, although there will be a single accuracy (i.e. correct classifications / all classifications) for a given dataset, each class should have its own precision and recall values (see machine learning - How do you calculate precision and recall for multiclass classification using confusion matrix? - Cross Validated, and http://rali.iro.umontreal.ca/rali/sites/default/files/publis/SokolovaLapalme-JIPM09.pdf).
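To illustrate the per-class point, here is a small sketch (my own, not from the posts above) that derives per-class precision and recall from a confusion matrix using the formulas given earlier; the labels are hypothetical, just for a 3-class illustration:

```python
import numpy as np
import tensorflow as tf

# Hypothetical true and predicted labels for a 3-class problem
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1, 0, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0, 2, 1, 0, 2])

# Rows = true class, columns = predicted class
cm = tf.math.confusion_matrix(y_true, y_pred).numpy()

for c in range(cm.shape[0]):
    tp = cm[c, c]
    fp = cm[:, c].sum() - tp  # predicted as c, actually another class
    fn = cm[c, :].sum() - tp  # actually c, predicted as another class
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    print(f"class {c}: precision={precision:.2f}, recall={recall:.2f}")
```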