Course 4 Week 4 - Exercise 3 - compute_layer_style_cost: TypeError: x and y must have the same dtype, got tf.float32 != tf.int32

Can anyone help me understand why I got this error and how I can solve it, please?

ValueError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/ in binary_op_wrapper(x, y)
1135 r_op = getattr(y, "r%s" % op_name)
→ 1136 out = r_op(x)
1137 if out == NotImplemented:

/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/ in r_binary_op_wrapper(y, x)
1154 with ops.name_scope(None, op_name, [x, y]) as name:
→ 1155 x = ops.convert_to_tensor(x, dtype=y.dtype.base_dtype, name="x")
1156 return func(x, y, name=name)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ in convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, dtype_hint, ctx, accepted_result_types)
1474 "Tensor conversion requested dtype %s for Tensor with dtype %s: %r" %
→ 1475 (dtype.name, value.dtype.name, value))
1476 return value

ValueError: Tensor conversion requested dtype int32 for Tensor with dtype float32: <tf.Tensor: shape=(), dtype=float32, numpy=0.0>

During handling of the above exception, another exception occurred:

TypeError Traceback (most recent call last)
2 a_S = tf.random.normal([1, 4, 4, 3], mean=1, stddev=4)
3 a_G = tf.random.normal([1, 4, 4, 3], mean=1, stddev=4)
----> 4 J_style_layer_GG = compute_layer_style_cost(a_G, a_G)
5 J_style_layer_SG = compute_layer_style_cost(a_S, a_G)

in compute_layer_style_cost(a_S, a_G)
26 # Computing the loss (≈1 line)
—> 27 J_style_layer = tf.reduce_sum(tf.square(tf.subtract(GS, GG))) / (4 * tf.square(n_C) * tf.square(n_H * n_W))
28 #J_style_layer = None

/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/ in binary_op_wrapper(x, y)
1139 return out
1140 except (TypeError, ValueError):
→ 1141 raise e
1142 else:
1143 raise

/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/ in binary_op_wrapper(x, y)
1123 with ops.name_scope(None, op_name, [x, y]) as name:
1124 try:
→ 1125 return func(x, y, name=name)
1126 except (TypeError, ValueError) as e:
1127 # Even if dispatching the op failed, the RHS may be a tensor aware

/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/ in wrapper(*args, **kwargs)
199 """Call target, and fall back on dispatchers if there is a TypeError."""
200 try:
→ 201 return target(*args, **kwargs)
202 except (TypeError, ValueError):
203 # Note: convert_to_eager_tensor currently raises a ValueError, not a

/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/ in truediv(x, y, name)
1295 TypeError: If x and y have different dtypes.
1296 """
→ 1297 return _truediv_python3(x, y, name)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/ in _truediv_python3(x, y, name)
1226 if x_dtype != y_dtype:
1227 raise TypeError("x and y must have the same dtype, got %r != %r" %
→ 1228 (x_dtype, y_dtype))
1229 try:
1230 dtype = _TRUEDIV_TABLE[x_dtype]

TypeError: x and y must have the same dtype, got tf.float32 != tf.int32


I think it’s a mistake to use tf.square with the integer constant terms there (n_C, n_W and n_H). Try np.square, and use floating-point constants like 2. and 4. instead of 2 and 4. The problem is that TF is much stricter about integer types, whereas base Python and numpy are a bit more forgiving.
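To make the mismatch concrete, here is a minimal sketch (toy values for n_H, n_W and n_C are assumed; the division mirrors the line shown in the traceback above):

```python
import numpy as np
import tensorflow as tf

n_H, n_W, n_C = 4, 4, 3   # toy dimensions, matching the test cell's activations
diff = tf.constant(0.0)   # float32 tensor standing in for the squared-difference sum

# tf.square on a Python int produces an int32 tensor, and TF refuses to
# divide float32 by int32 -- exactly the TypeError in the traceback.
raised = False
try:
    bad = diff / (4 * tf.square(n_C) * tf.square(n_H * n_W))
except TypeError:
    raised = True

# Fix: keep the denominator out of TF entirely, using np.square and a
# float constant, so it never becomes an int32 tensor.
ok = diff / (4. * np.square(n_C) * np.square(n_H * n_W))
```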


Thanks for your help.

It did work, but now I am getting a "wrong value" error.
How can I figure out what the problem is?

AssertionError Traceback (most recent call last)
9 assert np.isclose(J_style_layer_GG, 0.0), "Wrong value. compute_layer_style_cost(A, A) must be 0"
10 assert J_style_layer_SG > 0, "Wrong value. compute_layer_style_cost(A, B) must be greater than 0 if A != B"
—> 11 assert np.isclose(J_style_layer_SG, 14.017805), "Wrong value."
13 print("J_style_layer = " + str(J_style_layer_SG))

AssertionError: Wrong value.


Your code computes the wrong value for J_style_layer. There is an error in your mathematics.

One common error that causes bad numeric results is not doing the “reshape” step correctly. You need both a transpose and a reshape there. If you directly reshape to the shape you want without the transpose, it ends up scrambling the data.
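Here is a small demonstration of the scrambling, with a toy tensor standing in for the real activations (shapes and the (n_C, n_H*n_W) target layout follow the exercise):

```python
import tensorflow as tf

# Toy activation of shape (1, n_H, n_W, n_C) = (1, 2, 2, 3), entries 0..11
a = tf.reshape(tf.range(12), [1, 2, 2, 3])
n_C = 3

# Correct unroll to (n_C, n_H*n_W): reshaping to (-1, n_C) first keeps the
# channels together on the last axis, then the transpose moves them to the front.
good = tf.transpose(tf.reshape(a, [-1, n_C]))

# Incorrect: reshaping straight to (n_C, n_H*n_W) reads the values in the
# wrong order and mixes pixels across channels.
bad = tf.reshape(a, [n_C, -1])

print(good.numpy())  # each row holds one channel: [0 3 6 9], [1 4 7 10], [2 5 8 11]
print(bad.numpy())   # rows are just consecutive values: [0 1 2 3], [4 5 6 7], ...
```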

I made the mistake I am describing above and also added print statements to the test cell to show both values. Here’s what I get with the incorrect code:

J_style_layer_GG 0.0
J_style_layer_SG 2.9203946590423584

With the correct code, I see this:

J_style_layer_GG 0.0
J_style_layer_SG 14.017805099487305

Of course there are many other mistakes one could make beyond the specific “reshape” issue, but if your SG cost value matches the incorrect one I show, that’s a pretty good clue about where to look for the mistake.


Thank you all for your help @TMosh

You were completely right: the problem was with the reshape and transpose.
I am sure there is a way to do it in one line of code; however, I separated it into two lines:
first, reshape the tensor into a 2-D tensor of shape (n_H*n_W, n_C),
and then use the transpose function to swap the rows and columns.
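Putting the pieces together, here is a rough sketch of how the two-line unroll and the float denominator can combine (my own reconstruction from the formula shown in the traceback, not the official notebook solution; the function name is made up, and the Gram matrices are computed as A times its transpose):

```python
import numpy as np
import tensorflow as tf

def layer_style_cost_sketch(a_S, a_G):
    """Rough reconstruction of the layer style cost; inputs are (1, n_H, n_W, n_C)."""
    _, n_H, n_W, n_C = a_G.get_shape().as_list()

    # Two-step unroll: reshape to (n_H*n_W, n_C), then transpose to (n_C, n_H*n_W)
    a_S = tf.transpose(tf.reshape(a_S, [-1, n_C]))
    a_G = tf.transpose(tf.reshape(a_G, [-1, n_C]))

    # Gram matrices: A matmul A-transpose
    GS = tf.matmul(a_S, a_S, transpose_b=True)
    GG = tf.matmul(a_G, a_G, transpose_b=True)

    # Float denominator (np.square on Python ints) avoids the int32/float32 TypeError
    denom = 4. * np.square(n_C) * np.square(n_H * n_W)
    return tf.reduce_sum(tf.square(GS - GG)) / denom
```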

Thank you very much


It’s great to hear that you found the solution. If you want to understand more about why the “direct reshape” gives the wrong answer, here’s a thread with more info, which also points to another thread with an actual example of the damage done by doing it incorrectly.

I think this is where I discovered how to get through compute_layer_style_cost: by switching to np.square, since TF has strict integer types.

Or you could just add the little “dot” to turn 4 into 4., which is a float, right? Here’s a thread with more graphic examples.
