Help with Art_Generation_with_Neural_Style_Transfer UNQ_C5

I’m really unsure of where I went wrong. I have restarted the kernel and rerun everything from the beginning, and checked other postings. It appears I’m using a tensor that cannot be converted to a NumPy array.

NotImplementedError: in user code:

File "<ipython-input-127-4f32495065ab>", line 19, in train_step  *
    J_style = compute_style_cost(a_S, a_G)
File "<ipython-input-53-7b56545257a6>", line 30, in compute_style_cost  *
    J_style_layer = compute_layer_style_cost(a_S[i], a_G[i])
File "<ipython-input-47-b68791c415c8>", line 27, in compute_layer_style_cost  *
    J_style_layer = 1/(4*n_C**2*n_H**2*n_W**2)*tf.reduce_sum(np.square(GS-GG))

NotImplementedError: Cannot convert a symbolic tf.Tensor (sub:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported.

The error you’re seeing suggests that you’re passing a TensorFlow tensor to a NumPy function that only works with NumPy arrays. Can you try replacing the np.square call in compute_layer_style_cost with tf.square, which computes the element-wise square of a tensor:

    J_style_layer = 1/(4*n_C**2*n_H**2*n_W**2)*tf.reduce_sum(tf.square(GS-GG))
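
To see why that matters, here is a minimal standalone sketch (the function names are just illustrative, not from the notebook): inside a function traced by @tf.function, such as train_step, the tensors are symbolic, so a NumPy call cannot convert them, whereas tf.square keeps the operation inside the TensorFlow graph:

    import numpy as np
    import tensorflow as tf

    @tf.function
    def with_np(t):
        return np.square(t)   # t is symbolic during tracing; NumPy cannot convert it

    @tf.function
    def with_tf(t):
        return tf.square(t)   # stays inside the TensorFlow graph

    x = tf.constant([1.0, 4.0])
    print(with_tf(x))          # tf.Tensor([ 1. 16.], shape=(2,), dtype=float32)
    # with_np(x)               # raises a NotImplementedError like the one above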

Well, you can convert TF tensors to numpy arrays, but the point is that you need to explicitly specify that, as in:

myTensor.numpy()

But it doesn’t just happen implicitly. So you could have written that with np.square using the “explicit conversion” technique, but you’d have to convert the result back to a tensor after the np.square. But what is the point, when @JazzKaur has pointed out a much cleaner solution? :nerd_face:
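
Just to make that concrete, a rough sketch of the explicit round trip (this only works in eager mode, i.e. outside a @tf.function-traced function like train_step, which is exactly why the implicit version failed above):

    import numpy as np
    import tensorflow as tf

    t = tf.constant([1.0, 2.0, 3.0])
    arr = t.numpy()                        # explicit Tensor -> NumPy array (eager only)
    squared = np.square(arr)               # any NumPy op works on the plain array
    back = tf.convert_to_tensor(squared)   # explicit NumPy -> Tensor again
    print(back)                            # tf.Tensor([1. 4. 9.], shape=(3,), dtype=float32)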

Oh my gosh. Thank you! I looked back through my code 1000x; I guess I used np.square out of habit and just glossed over it every time, thinking it was tf.square. Thank you!

This is tricky because the only way some of us got compute_layer_style_cost to pass was by changing tf.square to np.square. Going back to that function and changing it back to tf.square raises an exception: x and y must have the same dtype, got tf.float32 != tf.int32.
It’s like a sinking boat: you plug one hole and a different one pops up.

It is deep water. We have to be careful with everything. Even 2. and 2 are not the same; a single dot can make a massive difference.
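
A quick illustration of that point (the constants are purely illustrative): TensorFlow does not silently cast between integer and float tensors the way plain Python does, so 2 and 2. end up with different dtypes, and mixing them raises a dtype-mismatch error like the one above:

    import tensorflow as tf

    a = tf.constant(2)                 # dtype=int32
    b = tf.constant(2.)                # dtype=float32
    # a - b                            # raises a dtype-mismatch error like the one above
    c = tf.cast(a, tf.float32) - b     # explicit cast so both operands are float32
    print(c)                           # tf.Tensor(0.0, shape=(), dtype=float32)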

BTW, if you are facing any other error, feel free to share it with us.

Best,
Saif.