Model training does not stop when accuracy reaches a particular threshold

Hi,

For the Week 2 assignment, "Week 2: Implementing Callbacks in TensorFlow using the MNIST Dataset",

I have tried to implement the callback function for TensorFlow as shown:

class myCallback(tf.keras.callbacks.Callback):
    # Define the correct function signature for on_epoch_end
    def on_epoch_end(self, epoch, logs={}):
        if(logs.get('acc') is not None and logs.get('acc') >= 0.99):
            print("\nReached 99% accuracy so cancelling training!")
            self.model.stop_training = True

Then I define, compile, and fit the model as shown:

# grader-required-cell

# GRADED FUNCTION: train_mnist
def train_mnist(x_train, y_train):

    ### START CODE HERE

    # Instantiate the callback class
    callbacks = myCallback()
    
    # Define the model
    model = tf.keras.models.Sequential([
        # YOUR CODE SHOULD START HERE
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(512, activation=tf.nn.relu),
        tf.keras.layers.Dense(10, activation=tf.nn.softmax)
        # YOUR CODE SHOULD END HERE
    ])

    # Compile the model
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics='accuracy')
    
    # Fit the model for 10 epochs adding the callbacks
    # and save the training history
    history = model.fit(x_train, y_train, epochs=10, callbacks=[callbacks])

    ### END CODE HERE
    
    return history

The issue is that training does not stop when the accuracy reaches 99 percent; the model runs for all 10 epochs, as shown:

Epoch 1/10
1875/1875 [==============================] - 8s 4ms/step - loss: 0.2026 - accuracy: 0.9404
Epoch 2/10
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0820 - accuracy: 0.9747
Epoch 3/10
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0515 - accuracy: 0.9843
Epoch 4/10
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0363 - accuracy: 0.9880
Epoch 5/10
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0270 - accuracy: 0.9919
Epoch 6/10
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0203 - accuracy: 0.9935
Epoch 7/10
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0184 - accuracy: 0.9941
Epoch 8/10
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0117 - accuracy: 0.9965
Epoch 9/10
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0120 - accuracy: 0.9962
Epoch 10/10
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0107 - accuracy: 0.9965

These two issues are related…

You didn’t actually fix that problem, merely masked it, which creates this problem.

If I don't use the following check:

if(logs.get('acc') is not None

Then I get the error I was getting before:

TypeError: '>' not supported between instances of 'NoneType' and 'float'

I understand that. But you have to ask yourself why the lookup on 'acc' is not found in the logs dictionary. Why not try 'accuracy' in both places and see what happens? Since you don't seem inclined to take my advice, maybe read this thread on Stack Overflow, which asks the exact same question about the exact same code. Apparently some 6,000 people have already read it; you could be 6,001!
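For anyone landing here later, here is a quick way to see the mismatch. In TF 2.x, `model.compile(metrics='accuracy')` reports the metric under the key `'accuracy'`, so `logs.get('acc')` always returns None (the dict values below are just the epoch-5 numbers from the output above):

```python
# The logs dict Keras passes to on_epoch_end keys each metric by the name
# given to model.compile() -- 'accuracy' in TF 2.x, not the old 'acc'.
epoch_logs = {'loss': 0.0270, 'accuracy': 0.9919}

print(epoch_logs.get('acc'))       # None -> the None-guard silently skips the check
print(epoch_logs.get('accuracy'))  # 0.9919 -> the key the check should use
```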

@ai_curious

Your solution worked. TYSM, and sorry I did not listen to you earlier.

Again thanks a lot

Checking that something isn't null or None before using it isn't generally a bad idea. Unfortunately, in this case it was a bandaid that covered up a wound that wouldn't heal on its own. Glad it's working now.
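For completeness, a minimal sketch of the corrected callback logic. The base class here is a plain stand-in so the snippet runs without TensorFlow installed; in the actual notebook you would keep `tf.keras.callbacks.Callback` as the base class and only change the dictionary key:

```python
import types

class MyCallback:
    # Stand-in for tf.keras.callbacks.Callback so this sketch runs
    # without TensorFlow; in the assignment, keep the original base class.
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # 'accuracy' matches the metric name passed to model.compile()
        if logs.get('accuracy', 0.0) >= 0.99:
            print("\nReached 99% accuracy so cancelling training!")
            self.model.stop_training = True

# Simulate the epoch-end calls Keras would make, using values from the log above
cb = MyCallback()
cb.model = types.SimpleNamespace(stop_training=False)
cb.on_epoch_end(0, {'loss': 0.2026, 'accuracy': 0.9404})
print(cb.model.stop_training)   # False -- below the threshold
cb.on_epoch_end(4, {'loss': 0.0270, 'accuracy': 0.9919})
print(cb.model.stop_training)   # True  -- training would stop after epoch 5
```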