The assertion block for the fine-tuning model is throwing an error saying the accuracy object is NOT subscriptable. If the object is not subscriptable and the assertion block throws an error because of that, then either the tf.keras.metrics.Accuracy() constructor is returning the wrong type of object or the assertion block used to check the code is incorrect. Which is it?


assert type(loss_function) == tf.python.keras.losses.BinaryCrossentropy, "Not the correct layer"

assert loss_function.from_logits, "Use from_logits=True"

assert type(optimizer) == tf.keras.optimizers.Adam, "This is not an Adam optimizer"

assert optimizer.lr == base_learning_rate / 10, "Wrong learning rate"

assert metrics[0] == 'accuracy', "Wrong metric"

print('\033[92mAll tests passed!')


TypeError Traceback (most recent call last)
3 assert type(optimizer) == tf.keras.optimizers.Adam, "This is not an Adam optimizer"
4 assert optimizer.lr == base_learning_rate / 10, "Wrong learning rate"
----> 5 assert metrics[0] == 'accuracy', "Wrong metric"
7 print('\033[92mAll tests passed!')

TypeError: 'Accuracy' object is not subscriptable
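For what it's worth, that exact TypeError can be reproduced without TensorFlow at all: indexing any object that doesn't support subscripting raises it. A minimal sketch, using a stand-in Accuracy class (not the real tf.keras.metrics.Accuracy, which behaves the same way here):

```python
class Accuracy:
    """Stand-in for tf.keras.metrics.Accuracy; the real class is also not subscriptable."""
    pass

# Assigning a single metric object instead of a list:
metrics = Accuracy()
try:
    metrics[0]                     # this is what the test case does
except TypeError as e:
    print(e)                       # 'Accuracy' object is not subscriptable

# Wrapping the metric in a list makes metrics[0] valid:
metrics = [Accuracy()]
assert isinstance(metrics[0], Accuracy)
```

So the error points at how `metrics` was built, not at the Accuracy class itself.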

These courses were last updated in a major way in 2021 and literally thousands of students have been through them since then, so you should assume that the test cases have been debugged by this point.

That error probably means you are using the wrong syntax for the metrics argument to the compile() method. It is a Python list, and the elements of the list can be either string names or references to instantiated metric objects. Here's the top-level documentation for Keras Model. It looks like the test case is written in a way that forces you to use the "string name" syntax. But the key point is that it's a list of string names, not just a single string name, right? Or, if you were using the "instantiated function" approach, it would be a list of instantiated metric objects, even if the list has only one entry.
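A quick sketch of what that looks like (the compile() call is commented out since it needs a built model; variable names follow the assignment's):

```python
# String-name syntax: a list of metric names, even with only one entry.
# This is the form the test case's metrics[0] == 'accuracy' check expects.
metrics = ['accuracy']
assert metrics[0] == 'accuracy'

# The instantiated-object approach would also be a list:
# metrics = [tf.keras.metrics.Accuracy()]
# but then metrics[0] would not equal the string 'accuracy', so the
# test case would fail even though compile() itself would accept it.

# Either way, the list is what gets passed to compile():
# model.compile(optimizer=optimizer, loss=loss_function, metrics=metrics)
```

So for this assignment, use the list-of-strings form.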