Course 4 Week 2 Transfer Learning Alpaca - Exercise 3

Hey, I am using the following code:

for layer in base_model.layers[:fine_tune_at]:
    layer.trainable = fine_tune_at

loss_function = tf.keras.losses.BinaryCrossentropy(from_logits=True, name='binary_crossentropy')
optimizer = tf.keras.optimizers.Adam(learning_rate=0.1*base_learning_rate)
metrics = tf.keras.metrics.Accuracy()

But it is giving me the following error:

TypeError Traceback (most recent call last)
in
3 assert type(optimizer) == tf.keras.optimizers.Adam, "This is not an Adam optimizer"
4 assert optimizer.lr == base_learning_rate / 10, "Wrong learning rate"
----> 5 assert metrics[0] == 'accuracy', "Wrong metric"
6
7 print('\033[92mAll tests passed!')

TypeError: 'Accuracy' object is not subscriptable

I need help solving this error. Thanks in advance.

Hi @Haroon30 and welcome to Discourse. Can you add the line you wrote to compile the model?

I used the code already provided in the assignment:

model2.compile(loss=loss_function,
               optimizer=optimizer,
               metrics=metrics)

This works fine, but when I run the following test code, it gives me an error on accuracy:

assert type(loss_function) == tf.python.keras.losses.BinaryCrossentropy, "Not the correct layer"
assert loss_function.from_logits, "Use from_logits=True"
assert type(optimizer) == tf.keras.optimizers.Adam, "This is not an Adam optimizer"
assert optimizer.lr == base_learning_rate / 10, "Wrong learning rate"
assert metrics[0] == 'accuracy', "Wrong metric"

print('\033[92mAll tests passed!')

No need for the tf library here; what you need is just "accuracy".


I am doing metrics="accuracy", but I am still receiving an error - any tips?

This seems to do the trick:

metrics = ["accuracy"]
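
For what it is worth, my reading of why the list form is needed (based only on the test cell quoted above, nothing official): the assert indexes metrics[0], so metrics has to be a list whose first element is the string 'accuracy'. A quick sketch:

import tensorflow as tf

metrics = ['accuracy']
assert metrics[0] == 'accuracy'        # passes

metrics = 'accuracy'
# metrics[0] is 'a', so the same assert would fail with "Wrong metric"

metrics = tf.keras.metrics.Accuracy()
# metrics[0] raises TypeError: 'Accuracy' object is not subscriptable,
# which is exactly the error at the top of this thread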


Hi, did you find a way out?

Just use accuracy as a string in a list:
# Use accuracy as the evaluation metric
metrics=['accuracy']

The documentation for the compile method defines the metrics argument as:
List of metrics to be evaluated by the model during training and testing. Each of this can be a string (name of a built-in function), function or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=['accuracy']...
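
To make that concrete, both forms the documentation mentions are accepted by compile (a quick sketch; tf.keras.metrics.BinaryAccuracy is used here only as an example of a Metric instance, while the assignment's test expects the string form):

import tensorflow as tf

# a string naming a built-in metric
metrics = ['accuracy']

# or a tf.keras.metrics.Metric instance
metrics = [tf.keras.metrics.BinaryAccuracy()]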

Hey, I tried this. It seems like it is a TensorFlow bug.

Yes, after correcting the accuracy metric and removing name='binary_crossentropy' from the loss_function, the code runs fine.
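
For anyone landing here later, here is a minimal sketch of the Exercise 3 setup with those two fixes applied, assuming base_model, fine_tune_at, base_learning_rate and model2 are defined as in the notebook (I have also set trainable to False in the freezing loop, which I believe is the intent; the snippet at the top of the thread assigned fine_tune_at there):

import tensorflow as tf

# freeze all the layers before fine_tune_at
for layer in base_model.layers[:fine_tune_at]:
    layer.trainable = False

# no name= argument on the loss
loss_function = tf.keras.losses.BinaryCrossentropy(from_logits=True)

# Adam with a learning rate one tenth of the base rate
optimizer = tf.keras.optimizers.Adam(learning_rate=0.1 * base_learning_rate)

# accuracy passed as a string inside a list
metrics = ['accuracy']

model2.compile(loss=loss_function,
               optimizer=optimizer,
               metrics=metrics)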