When specifying the metric for the new model (model2), I first used metrics=[tf.keras.metrics.Accuracy()], but that failed to pass the subsequent test. I changed it to metrics='accuracy' and it worked. Why is that? I looked at the Keras documentation, and it has examples of models where the metrics are passed the way I did originally. Can someone shed some light on this?
Also, after re-training the model from the 120th layer onward (see picture attached), there is a dip in the accuracy. Is that expected? If so, what is the interpretation?
The API allows you to specify the metrics either as strings or as metric objects, the way you originally did. I tried it and your way worked on the first training case. But in the "fine-tuning" section, they specifically wrote a test case that checks for the "string name" version of specifying the metric. So your version is technically valid, but in that case we have to do it the way they specify.
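To see the two forms side by side, here is a minimal sketch (a stand-in model and loss, not the assignment's) of how the string version and the object version line up:

    import tensorflow as tf

    # A stand-in model, just to show the compile() call.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(3, activation="softmax"),
    ])

    # String form: Keras resolves "accuracy" to the variant that matches the
    # loss and the label/output shapes (here SparseCategoricalAccuracy).
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Equivalent object form. Note it is the specific accuracy class, not the
    # generic tf.keras.metrics.Accuracy().
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])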
In any training scenario, there is never any guarantee of monotonic convergence. This case is one in which there can be some anomalies, although I don’t recall previously seeing a case in which the accuracy went to 0.
Thanks. That is what I used to solve my issue, but I still cannot get my original method to work properly. If I pass tf.keras.metrics.Accuracy(), the program runs, but during training it doesn't compute the correct accuracy (it's always 0).
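In case it helps explain the 0: tf.keras.metrics.Accuracy() only counts exact matches between y_true and y_pred, so when model.fit() feeds it the raw model outputs (probabilities or logits) it essentially never matches the labels. The string 'accuracy', by contrast, is resolved by Keras to the accuracy variant that fits your loss and output shape, and that variant converts the raw outputs to predicted labels first. A toy illustration (made-up tensors, not the assignment's data):

    import tensorflow as tf

    # Integer class labels and softmax-style probability outputs.
    labels = tf.constant([1, 2, 0])
    probs = tf.constant([[0.1, 0.8, 0.1],
                         [0.2, 0.2, 0.6],
                         [0.7, 0.2, 0.1]])

    # tf.keras.metrics.Accuracy() expects predictions that are already labels;
    # it only gives a sensible result if you argmax the probabilities yourself.
    m = tf.keras.metrics.Accuracy()
    m.update_state(labels, tf.argmax(probs, axis=-1))
    print(m.result().numpy())  # 1.0

    # During training, Keras passes the raw probabilities straight to the
    # metric, and exact equality with the integer labels essentially never
    # holds, which is why the reported accuracy stays at 0.

    # SparseCategoricalAccuracy (what "accuracy" resolves to for a
    # sparse_categorical_crossentropy loss) takes the argmax internally:
    m2 = tf.keras.metrics.SparseCategoricalAccuracy()
    m2.update_state(labels, probs)
    print(m2.result().numpy())  # 1.0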