I’m using the same code structure for my callback function as the example code provided in the Week 2 videos. However, when I run it with 0.99 instead of 0.6 as the stopping threshold, I get an error implying that I can’t use the >= operator. Is anyone else seeing something similar when trying to complete exercise #2 in Week 2?
Hi Cristopher, can you post your callback’s code?
Maurizio
class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if(logs.get('accuracy') >= 0.99): # Experiment with changing this value
            print("\nReached 60% accuracy so cancelling training!")
            self.model.stop_training = True
I’m really hoping that this is a silly typo that my eyes aren’t seeing.
More specifically, this is the series of warnings and errors that I can see:
WARNING: Logging before flag parsing goes to stderr.
W0909 22:09:16.373210 140624388753216 deprecation.py:506] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
Epoch 1/10
59584/60000 [============================>.] - ETA: 0s - loss: 0.1994 - acc: 0.9410
TypeError Traceback (most recent call last)
in
----> 1 train_mnist()
in train_mnist()
30
31 # model fitting
---> 32 history = model.fit(x_train, y_train, epochs=10, callbacks=[callbacks])
     33 # model fitting
     34 return history.epoch, history.history['acc'][-1]
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
778 validation_steps=validation_steps,
779 validation_freq=validation_freq,
---> 780 steps_name='steps_per_epoch')
781
782 def evaluate(self,
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
417 if mode == ModeKeys.TRAIN:
     418 # Epochs only apply to fit.
---> 419 callbacks.on_epoch_end(epoch, epoch_logs)
420 progbar.on_epoch_end(epoch, epoch_logs)
421
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/callbacks.py in on_epoch_end(self, epoch, logs)
309 logs = logs or {}
310 for callback in self.callbacks:
---> 311 callback.on_epoch_end(epoch, logs)
312
313 def on_train_batch_begin(self, batch, logs=None):
in on_epoch_end(self, epoch, logs)
6 class myCallback(tf.keras.callbacks.Callback):
7 def on_epoch_end(self, epoch, logs={}):
----> 8 if(logs.get('accuracy') >= 0.99): # Experiment with changing this value
9 print("\nReached 60% accuracy so cancelling training!")
10 self.model.stop_training = True
TypeError: '>=' not supported between instances of 'NoneType' and 'float'
You are comparing a NoneType and a float (0.99). You should check that logs.get('accuracy') is not None in your if statement.
Hope it helps,
Maurizio
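A minimal way to see what is going on, with a plain dict standing in for the `logs` argument Keras passes to `on_epoch_end` (the metric values are copied from the training output above):

```python
# Minimal reproduction of the TypeError, using a plain dict in place of
# the `logs` dict Keras passes to on_epoch_end.
logs = {"loss": 0.1994, "acc": 0.9410}  # the metric key is 'acc', not 'accuracy'

value = logs.get("accuracy")  # key is missing, so this returns None
print(value)                  # None

try:
    value >= 0.99             # comparing None to a float fails in Python 3
except TypeError as e:
    print(e)                  # '>=' not supported between instances of 'NoneType' and 'float'

# Guarding against None avoids the crash:
acc = logs.get("accuracy")
if acc is not None and acc >= 0.99:
    print("would stop training")  # not reached here, since acc is None
```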
Hi Cristopher,
As Maurizio says, that’s because logs.get('accuracy') returns None. To avoid the runtime error, you can check that this call does not return None before comparing to 0.99.
Having said that, the guard alone won’t actually stop training when you reach the target. If you look closely, your log reads:
Epoch 1/10
59584/60000 [============================>.] - ETA: 0s - loss: 0.1994 - acc: 0.9410
i.e. metrics are ‘loss’ and ‘acc’. So, I believe that you can change ‘accuracy’ per ‘acc’ in the callback and should work straight ahead.
Best
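Concretely, keyed on 'acc' the comparison gets a float instead of None (again with a plain dict in place of the real `logs`; the 0.99 threshold is the exercise’s target):

```python
# The epoch log this Keras version passes to on_epoch_end uses the key
# 'acc' (values copied from the training output in the thread).
logs = {"loss": 0.1994, "acc": 0.9410}

acc = logs.get("acc")                 # a float now, not None
stop = acc is not None and acc >= 0.99
print(stop)                           # False — 94.1% is still below the 0.99 target
```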
What is the difference between 'accuracy' and 'acc'? Aren’t they one and the same?
They are the same. I believe the name 'accuracy' was introduced in later Keras versions.
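Since the key name differs across Keras versions, one portable option is a small helper that tolerates either spelling (a sketch, not part of the exercise code; the helper name is mine):

```python
def get_accuracy(logs):
    """Return the accuracy metric under whichever key this Keras version uses."""
    logs = logs or {}
    return logs.get("acc", logs.get("accuracy"))

# Older Keras versions report 'acc', newer ones 'accuracy'; both resolve:
print(get_accuracy({"loss": 0.2, "acc": 0.94}))       # 0.94
print(get_accuracy({"loss": 0.2, "accuracy": 0.94}))  # 0.94
print(get_accuracy({"loss": 0.2}))                    # None -> guard before comparing
```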
Thanks! Will try that out later today (the course seems to be down for maintenance right now).
It is worth noting that, to get this to work, you have to change some of the code we were instructed not to change: there’s another use of 'accuracy' that needed to be replaced with 'acc', in addition to the instances in my own code.