For compute_layer_style_cost:

Given a_S, a tensor of dimension (1, n_H, n_W, n_C), why don't we need to keep the leading dimension of 1 when we reshape with a_S = tf.transpose(tf.reshape(a_S, [n_H*n_W, n_C]))?

Why does a_S = tf.transpose(tf.reshape(a_S, shape=[1, n_H*n_W, n_C]), perm=[0, 2, 1]) not work? Isn't the reshaped tensor supposed to have 3 dimensions?

But for compute_content_cost, a_C_unrolled = tf.reshape(a_C, shape=[m, -1, n_C]) works well. What is the difference between the two?


Hi Yiming,

Imagine a 3D volume. If one axis has size 1, you effectively have a 2D matrix with an extra dimension of size 1. Dropping that dimension loses no information, and the resulting 2D matrix is exactly what the Gram matrix computation expects: tf.matmul(A, tf.transpose(A)) on a 2D tensor of shape (n_C, n_H*n_W) yields an (n_C, n_C) Gram matrix. If you keep the leading dimension of 1, tf.matmul treats it as a batch dimension and produces a (1, n_C, n_C) tensor instead, which is why the 3D version does not give the expected result. In compute_content_cost there is no matrix product, only an element-wise difference between a_C and a_G, so keeping the batch dimension in tf.reshape(a_C, shape=[m, -1, n_C]) is harmless there.
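A minimal sketch (assuming TensorFlow 2.x, with a tiny made-up activation volume) comparing the two reshapes: both contain the same values, but only the 2D version feeds a plain matrix product.

```python
import numpy as np
import tensorflow as tf

# Hypothetical activation: batch of 1, 2x2 spatial grid, 3 channels
n_H, n_W, n_C = 2, 2, 3
a_S = tf.constant(np.arange(12, dtype=np.float32).reshape(1, n_H, n_W, n_C))

# 2D unroll: drop the size-1 batch axis -> shape (n_C, n_H*n_W)
a_2d = tf.transpose(tf.reshape(a_S, [n_H * n_W, n_C]))

# 3D unroll keeping the batch axis, then permute -> shape (1, n_C, n_H*n_W)
a_3d = tf.transpose(tf.reshape(a_S, [1, n_H * n_W, n_C]), perm=[0, 2, 1])

# Same values, different rank
print(tuple(a_2d.shape))                            # (3, 4)
print(tuple(a_3d.shape))                            # (1, 3, 4)
print(np.allclose(a_2d.numpy(), a_3d.numpy()[0]))   # True

# Gram matrix: 2D input gives (n_C, n_C); the 3D input makes
# tf.matmul batch over the leading axis, giving (1, n_C, n_C)
G_2d = tf.matmul(a_2d, tf.transpose(a_2d))
G_3d = tf.matmul(a_3d, tf.transpose(a_3d, perm=[0, 2, 1]))
print(tuple(G_2d.shape))                            # (3, 3)
print(tuple(G_3d.shape))                            # (1, 3, 3)
```

The batched (1, n_C, n_C) result holds the same numbers, but its shape does not match a 2D (n_C, n_C) Gram matrix.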

I hope this answers your question.
