Week 1 Assignment 2 Raises an Error Because of TensorFlow Version

I’m stuck on the second assignment of Week 1 of Course 4. In the convolutional_model function, I get this error message:

OperatorNotAllowedInGraphError            Traceback (most recent call last)
<ipython-input-14-f1284300b767> in <module>
----> 1 conv_model = convolutional_model((64, 64, 3))
      2 conv_model.compile(optimizer='adam',
      3                   loss='categorical_crossentropy',
      4                   metrics=['accuracy'])
      5 conv_model.summary()

<ipython-input-13-66f3c870470b> in convolutional_model(input_shape)
     37     # YOUR CODE STARTS HERE
     38     Z1 = tf.keras.layers.Conv2D(filters=8, kernel_size=4, strides=1, padding='same')(input_img)
---> 39     A1 = tf.keras.layers.ReLU(Z1)
     40     P1 = tf.keras.layers.MaxPool2D(A1, pool_size=8, strides=8, padding='same')
     41     Z2 = tf.keras.layers.Conv2D(P1, filters=16, kernel_size=2, strides=1, padding='same')

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/layers/advanced_activations.py in __init__(self, max_value, negative_slope, threshold, **kwargs)
    344   def __init__(self, max_value=None, negative_slope=0, threshold=0, **kwargs):
    345     super(ReLU, self).__init__(**kwargs)
--> 346     if max_value is not None and max_value < 0.:
    347       raise ValueError('max_value of Relu layer '
    348                        'cannot be negative value: ' + str(max_value))

/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in __bool__(self)
    875       `TypeError`.
    876     """
--> 877     self._disallow_bool_casting()
    878 
    879   def __nonzero__(self):

/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in _disallow_bool_casting(self)
    488     else:
    489       # Default: V1-style Graph execution.
--> 490       self._disallow_in_graph_mode("using a `tf.Tensor` as a Python `bool`")
    491 
    492   def _disallow_iteration(self):

/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in _disallow_in_graph_mode(self, task)
    477     raise errors.OperatorNotAllowedInGraphError(
    478         "{} is not allowed in Graph execution. Use Eager execution or decorate"
--> 479         " this function with @tf.function.".format(task))
    480 
    481   def _disallow_bool_casting(self):

OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.

I googled this error and found that it is supposedly caused by the upgrade to TensorFlow 2.x, which requires you to compile the model this way:

conv_model.compile(optimizer='adam',
                  loss='categorical_crossentropy', 
                  metrics=[tf.keras.metrics.Accuracy()])

not as written in the assignment notebook:

conv_model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])

I tried to change the test cell, but it’s locked as usual.
This is a reference for what I have found.

Would you please help?!

Hi @mahmoud_ahc,

If you do not have the space to upgrade to TF 2.x locally, you can use Google Colab, which lets you run TF 2.x versions. That is one option you can try; otherwise you might wish to upgrade to TF 2.x yourself. The new TensorFlow version is better than the previous one in many ways as well!

Hope this helps!

You didn’t get my point. The test will fail because it was written for an older TF version, as I understand from similar issues.

I’m not convinced this is true. See for example the following excerpt:

Note that if you’re satisfied with the default settings, in many cases the optimizer, loss, and metrics can be specified via string identifiers as a shortcut:

model.compile(optimizer="rmsprop",loss="sparse_categorical_crossentropy", metrics=["sparse_categorical_accuracy"],)

From here

You can find another thread about this same issue here. Please have a look. It has nothing to do with versions of TF. Normally this error just means you have misinterpreted the way that the “Functional API” works and are not supplying a tensor as the argument on the RHS, which results in the LHS becoming a function instead of a tensor. Things go south from there …
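To illustrate the point about the Functional API: a Keras layer is first constructed with its hyperparameters, and the resulting layer object is then called on a tensor. Below is a minimal sketch using the same layer names and hyperparameters that appear in your traceback, purely for illustration and not a complete solution:

import tensorflow as tf

input_img = tf.keras.Input(shape=(64, 64, 3))

# Incorrect: passing the tensor into the ReLU constructor makes Keras treat it
# as the max_value argument, which triggers the error shown above.
# A1 = tf.keras.layers.ReLU(Z1)

# Correct: construct each layer, then call it on the tensor.
Z1 = tf.keras.layers.Conv2D(filters=8, kernel_size=4, strides=1, padding='same')(input_img)
A1 = tf.keras.layers.ReLU()(Z1)
P1 = tf.keras.layers.MaxPool2D(pool_size=8, strides=8, padding='same')(A1)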


Attached is another counterexample to the conclusion that the error is caused by the TF version and metric names, found in the latest Keras docs here:
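For what it’s worth, the string identifier works fine on TF 2.x. Here is a minimal sketch with a toy model (not the assignment code) showing that metrics=['accuracy'] compiles without any need for tf.keras.metrics.Accuracy():

import tensorflow as tf

# Toy model, only to show that the string metric identifier is accepted.
inputs = tf.keras.Input(shape=(64, 64, 3))
x = tf.keras.layers.Flatten()(inputs)
outputs = tf.keras.layers.Dense(6, activation='softmax')(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)

# 'accuracy' is resolved to the appropriate metric automatically.
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])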
