Course 4 Week 4 train_step error

Hi, I get an error for the train_step exercise (Exercise 6)! Could you tell me where the problem is?

That code looks correct to me. What error did you get?

One other thing to point out: did you read the comment that is the first line in the test cell that comes right after that function cell? Here it is:

# You always must run the last cell before this one. You will get an error if not.

In other words, if you run the function definition once but run the test cell twice, the second execution of the test cell will fail spectacularly. But if you then rerun the function cell, it works the next time. Reading the error messages you get in the failing case, this is apparently how @tf.function-decorated functions work: they can be stateful, so you have to be careful about whether you need to create them from scratch each time or use them in their current state. Mind you, I don’t think the course materials yet do an adequate job of explaining this concept (as you can tell from my barely coherent comments here :scream_cat:).
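
To make that concrete, here is a hypothetical minimal sketch, not the course’s train_step: the class name, optimizer and shapes below are made up. The point is that a @tf.function traces its graph on the first call, and any variables the optimizer needs get created inside that first trace; calling it again with a brand-new tf.Variable can force a retrace that tries to create variables a second time, which (depending on the TF version) blows up, and the fix is to re-create the decorated function, i.e. rerun its definition cell.

import tensorflow as tf

class Stepper:
    def __init__(self):
        # made-up optimizer and learning rate, just for illustration
        self.opt = tf.keras.optimizers.Adam(learning_rate=0.01)

    @tf.function
    def step(self, var):
        with tf.GradientTape() as tape:
            loss = tf.reduce_sum(tf.square(var))
        grads = tape.gradient(loss, [var])
        # on the first trace, the optimizer creates its slot variables here
        self.opt.apply_gradients(zip(grads, [var]))
        return loss

stepper = Stepper()
v1 = tf.Variable(tf.ones([2, 2]))
print(stepper.step(v1))   # first call: traces the graph and runs fine
print(stepper.step(v1))   # same Variable: reuses the cached trace
v2 = tf.Variable(tf.ones([2, 2]))
print(stepper.step(v2))   # new Variable: may retrace and, on some TF versions,
                          # fail with "tried to create variables on non-first call"

That is essentially what happens when you rerun only the test cell: it creates a fresh tf.Variable and hands it to the already-traced train_step.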

Also would you please do us a favor and remove the solution source code from your post? We don’t want to leave solution code published in the forums. Thanks!

Thank you, but I still get this error:
Also, I deleted my code :wink:

NotImplementedError Traceback (most recent call last)
<ipython-input-…> in <module>
2 generated_image = tf.Variable(tf.image.convert_image_dtype(content_image, tf.float32))
3
----> 4 J1 = train_step(generated_image)
5 print(J1)
6 assert type(J1) == EagerTensor, f"Wrong type {type(J1)} != {EagerTensor}"

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
778 else:
779 compiler = "nonXla"
→ 780 result = self._call(*args, **kwds)
781
782 new_tracing_count = self._get_tracing_count()

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
821 # This is the first call of __call__, so we have to initialize.
822 initializers = []
→ 823 self._initialize(args, kwds, add_initializers_to=initializers)
824 finally:
825 # At this point we know that the initialization is complete (or less

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
695 self._concrete_stateful_fn = (
696 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
→ 697 *args, **kwds))
698
699 def invalid_creator_scope(*unused_args, **unused_kwds):

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
2853 args, kwargs = None, None
2854 with self._lock:
→ 2855 graph_function, _, _ = self._maybe_define_function(args, kwargs)
2856 return graph_function
2857

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
3211
3212 self._function_cache.missed.add(call_context_key)
→ 3213 graph_function = self._create_graph_function(args, kwargs)
3214 self._function_cache.primary[cache_key] = graph_function
3215 return graph_function, args, kwargs

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
3073 arg_names=arg_names,
3074 override_flat_arg_shapes=override_flat_arg_shapes,
→ 3075 capture_by_value=self._capture_by_value),
3076 self._function_attributes,
3077 function_spec=self.function_spec,

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
984 _, original_func = tf_decorator.unwrap(python_func)
985
→ 986 func_outputs = python_func(*func_args, **func_kwargs)
987
988 # invariant: func_outputs contains only Tensors, CompositeTensors,

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
598 # wrapped allows AutoGraph to swap in a converted function. We give
599 # the function a weak reference to itself to avoid a reference cycle.
→ 600 return weak_wrapped_fn().wrapped(*args, **kwds)
601 weak_wrapped_fn = weakref.ref(wrapped_fn)
602

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
971 except Exception as e: # pylint:disable=broad-except
972 if hasattr(e, "ag_error_metadata"):
→ 973 raise e.ag_error_metadata.to_exception(e)
974 else:
975 raise

NotImplementedError: in user code:

<ipython-input-27-1e41e4cf1731>:19 train_step  *
    J_style = compute_style_cost(a_S, a_G)
<ipython-input-15-74c43cad8f37>:28 compute_style_cost  *
    J_style_layer = compute_layer_style_cost(a_S[i], a_G[i])
<ipython-input-11-08c0a76b3f91>:28 compute_layer_style_cost  *
    J_style_layer = factor * tf.reduce_sum(np.power(GS - GG, 2))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py:848 __array__  **
    " a NumPy call, which is not supported".format(self.name))

NotImplementedError: Cannot convert a symbolic Tensor (sub:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported

Very interesting! Thanks for the actual error message (and for deleting the code). It turns out we can see enough of the code just from the error trace to figure out what is happening:

The mistake is in your earlier compute_layer_style_cost function: try using tf.square instead of np.power(…, 2) to compute the square of the difference tensor. The code you wrote passes the unit test for compute_layer_style_cost, because in eager mode np.power works fine on concrete tensors, but then things explode and catch fire when we run train_step: train_step is decorated with @tf.function, so inside it the tensors are symbolic, and the NumPy call cannot handle them. Just on general principles, it’s always safer to use TF primitives when the objects you are dealing with are tensors. Gradients are computed by TF’s automatic differentiation, which only works if the complete compute graph is composed entirely of TF operations; using a NumPy function anywhere in the middle breaks the gradient computation, because NumPy does not support automatic gradients.
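
Here is a generic sketch of the difference (made-up tensors and function names, not the course’s Gram matrices): in eager mode np.power works on a concrete EagerTensor, which is why the unit test passed, but inside a @tf.function the tensors are symbolic, so the NumPy call raises the NotImplementedError above, while tf.square stays inside the TF graph.

import numpy as np
import tensorflow as tf

@tf.function
def sum_sq_diff_np(a, b):
    # NumPy op applied to a symbolic Tensor inside a tf.function
    return tf.reduce_sum(np.power(a - b, 2))

@tf.function
def sum_sq_diff_tf(a, b):
    # pure TF op: traces cleanly and stays differentiable
    return tf.reduce_sum(tf.square(a - b))

a = tf.constant([1.0, 2.0])
b = tf.constant([0.5, 1.5])
print(sum_sq_diff_tf(a, b))    # tf.Tensor(0.5, shape=(), dtype=float32)
# sum_sq_diff_np(a, b)         # on the TF version used here this raises:
                               # "Cannot convert a symbolic Tensor to a numpy array"

And even in places where a NumPy call does execute (outside @tf.function), it returns a plain array rather than a Tensor, which cuts the tape so gradients cannot flow back through it.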

Also just to be clear, my belief is that it’s perfectly ok to show exception traces here on the forums. Even though it reveals a few snippets of code, it’s not a complete solution. And if we don’t even have that, debugging is pretty hopeless. :nerd_face: :scream_cat:

Also BTW I consider it a bug in the test case in the notebook that it accepts the np.power solution. I’ll file a bug about that with the course staff. Thanks for helping us debug the new version of the course!


Thank you so much. You were right; that fixed the error.


That’s good news! Thanks for confirming.