How to use accuracy as the evaluation metric in the last exercise of the 2nd programming assignment of Week 2?
I tried `metrics = tf.keras.metrics.Accuracy()` but it did not work. The grader returns: "TypeError: 'Accuracy' object is not subscriptable".
I have just figured it out, but I am still in the dark about the syntax: why use `[ ... ]`, as if assigning the `metrics` variable a list containing a single string (`metrics=['accuracy']`)?
Hi @Robert_Ascan ,
The `metrics` parameter can take one or more metrics, depending on the needs of the model.
From the Keras `Model.compile` documentation, we can read the definition of this `metrics` parameter:
List of metrics to be evaluated by the model during training and testing. Each of these can be a string (name of a built-in function), a function, or a `tf.keras.metrics.Metric` instance. See `tf.keras.metrics`. Typically you will use `metrics=['accuracy']`. A function is any callable with the signature `result = fn(y_true, y_pred)`. To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as `metrics={'output_a': 'accuracy', 'output_b': ['accuracy', 'mse']}`. You can also pass a list to specify a metric or a list of metrics for each output, such as `metrics=[['accuracy'], ['accuracy', 'mse']]` or `metrics=['accuracy', ['accuracy', 'mse']]`. When you pass the strings 'accuracy' or 'acc', we convert this to one of `tf.keras.metrics.BinaryAccuracy`, `tf.keras.metrics.CategoricalAccuracy`, `tf.keras.metrics.SparseCategoricalAccuracy` based on the loss function used and the model output shape. We do a similar conversion for the strings 'crossentropy' and 'ce' as well. The metrics passed here are evaluated without sample weighting; if you would like sample weighting to apply, you can specify your metrics via the `weighted_metrics` argument instead.
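To illustrate why the list brackets matter, here is a minimal sketch in plain Python (no TensorFlow required). The `Accuracy` class and `compile_metrics` helper below are hypothetical stand-ins for `tf.keras.metrics.Accuracy` and Keras's internal handling of the `metrics` argument; the real Keras code is more involved, but the failure mode is the same: Keras iterates over and indexes into the `metrics` argument, which only works if it is a list (or dict), not a bare metric object.

```python
class Accuracy:
    """Hypothetical stand-in for tf.keras.metrics.Accuracy (not the real class)."""
    name = "accuracy"


def compile_metrics(metrics):
    """Mimics (loosely) how Keras consumes the metrics argument:
    it expects a container of metrics, so it iterates and indexes it."""
    # Strings like 'accuracy' would be converted to metric objects here.
    return [m if isinstance(m, Accuracy) else Accuracy() for m in metrics]


# Passing a bare metric object: trying to index it reproduces the grader's error.
bare = Accuracy()
try:
    bare[0]
except TypeError as err:
    print(err)  # 'Accuracy' object is not subscriptable

# Wrapping the metric (or the string 'accuracy') in a list gives Keras
# something subscriptable and iterable, which is why metrics=['accuracy']
# is the documented idiom, even for a single metric.
resolved = compile_metrics(['accuracy'])
print(resolved[0].name)  # accuracy
```

So the brackets are not decorative: `metrics` is defined as a *list* of metrics, and a list with one element is simply the single-metric case of that more general interface.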
Thanks @juan_olano for the prompt reply!