Exercise 3 - compute_layer_style_cost: Don't use the numpy API inside compute_layer_style_cost

Hello to anyone who is reading this, I am facing the following error.

Don't use the numpy API inside compute_layer_style_cost
Failed to convert object of type <class 'list'> to Tensor. Contents: [None, 16, 8]. Consider casting elements to a supported type.

This error corresponds to the following code:

# UNQ_C3
# GRADED FUNCTION: compute_layer_style_cost

def compute_layer_style_cost(a_S, a_G):
    """
    Arguments:
    a_S -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image S
    a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image G

    Returns:
    J_style_layer -- tensor representing a scalar value, style cost defined above by equation (2)
    """
    ### START CODE HERE

    # Retrieve dimensions from a_G (≈1 line)
    m, n_H, n_W, n_C = a_G.get_shape().as_list()

    # Reshape the images from (n_H * n_W, n_C) to have them of shape (n_C, n_H * n_W) (≈2 lines)
    a_S = tf.transpose(tf.reshape(a_S, shape = [m, n_H * n_W, n_C]), perm = [0,2,1]) # '0' is the batch, so, you transpose the other two dimensions
    a_G = tf.transpose(tf.reshape(a_G, shape = [m, n_H * n_W, n_C]), perm = [0,2,1])

    # Computing gram_matrices for both images S and G (≈2 lines)
    GS = gram_matrix(a_S)
    GG = gram_matrix(a_G)

    # Computing the loss (≈1 line)
    J_style_layer = (1/(4*(n_C**2)*((n_H*n_W)**2))) * tf.math.reduce_sum(tf.math.square(tf.subtract(GS, GG)), axis=None, keepdims=False)

    ### END CODE HERE

    return J_style_layer

I reviewed the code and I think everything (including the dimensions) is OK, but I have no idea where the error could be. I read another post on this same topic, but it wasn't very explicit, so here is my code; feel free to remove it once someone can point me to a possible solution.
Thanks in advance to anyone reading this right now and trying to figure out my error.
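For reference, the loss line is meant to implement equation (2) from the notebook, which as I understand it is:

J_style^{[l]}(S, G) = \frac{1}{4 \, n_C^2 \, (n_H n_W)^2} \sum_{i=1}^{n_C} \sum_{j=1}^{n_C} \left( G^{(S)}_{ij} - G^{(G)}_{ij} \right)^2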

##########

Update

##########


I submitted the exercise and got 100/100, so I don't know what the meaning of the error is.

Insert a new cell below your function and run:

ll = tf.keras.layers.Dense(8, activation='relu', input_shape=(4, 4, 3))
model_tmp = tf.keras.models.Sequential()
model_tmp.add(ll)
compute_layer_style_cost(ll.output, ll.output)

Report back what you get.

I got this error:

TypeError: Failed to convert object of type <class 'list'> to Tensor. Contents: [None, 16, 8]. Consider casting elements to a supported type.

#########

Edited

#########
I changed the line

J_style_layer = (1/(4*(n_C**2)*((n_H*n_W)**2))) * tf.math.reduce_sum(tf.math.square(tf.subtract(GS, GG)), axis=None, keepdims=False)

to this one:

J_style_layer = tf.math.multiply((1/(4*(n_C**2)*((n_H * n_W)**2))), tf.math.reduce_sum(tf.math.square(tf.subtract(GS, GG)), axis=None, keepdims=False))

because I wanted to be sure that it returns a tensor object, but even with that change the error is still the same.
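(As a side check, the plain Python multiplication should already give back a tensor, since the scalar coefficient gets broadcast against the tf.Tensor. A quick sketch of what I mean, assuming eager TF 2.x and made-up example values for n_C, n_H, n_W:)

coeff = 1 / (4 * (8**2) * ((4 * 4)**2))   # example values: n_C = 8, n_H = n_W = 4
t = tf.constant(2.0)
print(type(coeff * t))                    # a tf EagerTensor, not a Python float
print(type(tf.math.multiply(coeff, t)))   # same type, so the two forms are equivalent here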
###########

Updated

###########
Checking the error traceback, I noticed this:

---> 18 a_S = tf.transpose(tf.reshape(a_S, shape = [m, n_H * n_W, n_C]), perm = [0,2,1]) # '0' is the batch, so, you transpose the other two dimensions

The error should be in this line. The thing is that I only used tf functions, so I have no idea why I am facing this error.

I think you have an extra dimension in a_S and a_G. Don't reshape using m (it is always one).
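For example, something along these lines (just a rough sketch of the idea, not necessarily the exact form the notebook expects; the same applies to a_G):

# let TF infer the batch dimension instead of passing the static m (which is None for a symbolic tensor)
a_S = tf.transpose(tf.reshape(a_S, shape=[-1, n_H * n_W, n_C]), perm=[0, 2, 1])

# or drop the batch dimension completely and work with a 2-D (n_C, n_H * n_W) matrix
a_S = tf.transpose(tf.reshape(a_S, shape=[n_H * n_W, n_C]))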

But the dimension is there, and I have to account for it when I use the reshape and transpose functions: when I run print(a_S.get_shape().as_list()) I see the output [None, 4, 4, 8], so I think I should use:

a_S = tf.transpose(tf.reshape(a_S, shape = [m, n_H * n_W, n_C]), perm = [0,2,1]) # '0' is the batch, so, you transpose the other two dimensions

because I want to keep the batch in its initial position, and I have to consider m both when I reshape and when I transpose to change the dimensions of the matrix. I don't know another way to do this; when I removed the 'm' variable I got an error.

###########

UPDATE

###########
Instead of using m I used 1, and the line is as follows:

a_S = tf.transpose(tf.reshape(a_S, shape = [1, n_H * n_W, n_C]), perm = [0,2,1])

I got no error. The only thing I still want to know is why this happens, because I want to know how to solve this kind of issue in the future.

The test doesn't work if you use m in your function, because ll.output is a so-called symbolic tensor. The comment before the test also reads "Test that it works with symbolic tensors". I don't know why we have a test for this. It makes more sense to test with specific values, as we do in the first test:

tf.random.set_seed(1)
a_S = tf.random.normal([1, 4, 4, 3], mean=1, stddev=4)
a_G = tf.random.normal([1, 4, 4, 3], mean=1, stddev=4)
J_style_layer_GG = compute_layer_style_cost(a_G, a_G)
J_style_layer_SG = compute_layer_style_cost(a_S, a_G)
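To make the failure concrete, here is a rough sketch of what happens with the symbolic tensor from the earlier test (the commented-out line is the one that raises the TypeError; this assumes a TF 2.x environment like the notebook's):

import tensorflow as tf

ll = tf.keras.layers.Dense(8, activation='relu', input_shape=(4, 4, 3))
model_tmp = tf.keras.models.Sequential()
model_tmp.add(ll)

a_G = ll.output
m, n_H, n_W, n_C = a_G.get_shape().as_list()
print(m, n_H, n_W, n_C)   # None 4 4 8 -> the batch size of a symbolic tensor is unknown

# With m = None, the shape argument becomes the list [None, 16, 8], which cannot be
# converted to a tensor -> "Failed to convert object of type <class 'list'> to Tensor."
# tf.reshape(a_G, shape=[m, n_H * n_W, n_C])

# With a concrete or inferred batch size, the reshape builds fine:
tf.reshape(a_G, shape=[1, n_H * n_W, n_C])    # what you did
tf.reshape(a_G, shape=[-1, n_H * n_W, n_C])   # let TF infer the batch dimension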

You will most likely never run into this issue again or in real life.

Out of curiosity, did you get the same error for compute_content_cost?

I am getting a similar error and hence tried your suggestion.