Week 2, Programming Assignment 2: How to set accuracy as the evaluation metric?

How do I use accuracy as the evaluation metric in the last exercise of the second programming assignment of Week 2?
I tried metrics = tf.keras.metrics.Accuracy() but it did not work. The grader returns "TypeError: 'Accuracy' object is not subscriptable".

I have just figured it out, but the syntax still puzzles me: why the square brackets, as if assigning the metrics argument a list containing a single string? (metrics = ['accuracy'])

Hi @Robert_Ascan ,

The metrics parameter can take one or more metrics, depending on the needs of the model.

From the Keras Model.compile documentation, the definition of the metrics parameter reads:

List of metrics to be evaluated by the model during training and testing. Each of these can be a string (name of a built-in function), a function, or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=['accuracy']. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={'output_a': 'accuracy', 'output_b': ['accuracy', 'mse']}. You can also pass a list to specify a metric or a list of metrics for each output, such as metrics=[['accuracy'], ['accuracy', 'mse']] or metrics=['accuracy', ['accuracy', 'mse']]. When you pass the strings 'accuracy' or 'acc', we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, or tf.keras.metrics.SparseCategoricalAccuracy based on the loss function used and the model output shape. We do a similar conversion for the strings 'crossentropy' and 'ce' as well. The metrics passed here are evaluated without sample weighting; if you would like sample weighting to apply, you can specify your metrics via the weighted_metrics argument instead.
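In other words, metrics expects a list because you may want to track several metrics at once; a single metric is just a list of length one. A minimal sketch (assuming TensorFlow 2.x and a small binary-classification model invented here for illustration):

```python
import numpy as np
import tensorflow as tf

# A tiny binary classifier, purely for illustration.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

# metrics takes a LIST, even for a single metric -- hence the brackets.
# The string 'accuracy' is resolved by Keras to BinaryAccuracy here,
# because the loss is binary_crossentropy and the output is one unit.
model.compile(
    optimizer='adam',
    loss='binary_crossentropy',
    metrics=['accuracy'],  # equivalent: [tf.keras.metrics.BinaryAccuracy()]
)

# Evaluate on random dummy data; the metric appears in the results dict.
x = np.random.rand(8, 4).astype('float32')
y = np.random.randint(0, 2, size=(8, 1))
results = model.evaluate(x, y, verbose=0, return_dict=True)
print(sorted(results.keys()))
```

Passing metrics = tf.keras.metrics.Accuracy() fails because the grader (and Keras internals) expect to iterate over or index into a list of metrics; a bare Metric instance is not subscriptable, which is exactly the TypeError you saw.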


Thanks @juan_olano for the prompt reply! :+1:

1 Like