Anime faces: {{function_node __wrapped__SquaredDifference_device_/job:localhost/replica:0/task:0/device:GPU:0}} required broadcastable shapes [Op:SquaredDifference] name:

{code removed by moderator; posting grader cell code is against community guidelines}

When training the model, I got an error like this.

How can I fix this?


Can I ask what this reshaped_reconstructed is?

You were supposed to add the code you wrote for the instruction below:
feed a batch to the VAE model
which would be vae of x_batch_train.

Also, a heads up: be careful about sharing any of the grader cell code; it is against community guidelines. If the code shows up in an error log, that is fine, but do not post code from any of the graded cells.


I’m sorry about this. Previously I had tried to find the cause of the error; it said reconstructed and x_batch_shape had different shapes. I tried reshaping reconstructed, but the error still occurs.
So I’m still looking for a way to solve the error in the photo above. Do you have any ideas?


So @anelieshrtmn

Your issue lies here. Can you share a screenshot of that error?

Still the same as this, sir.


Okay, thank you for sharing the image.

Your issue lies in how you computed epsilon.

Remember, your inputs are basically mu and sigma.

Here you need to define batch and dim using tf.shape on either mu or sigma (only one of the two).

batch would be tf.shape of mu or sigma at [0], and dim would be at [1].

Then calculate epsilon using tf.keras.backend.random_normal with shape=(batch, dim).

Then z is calculated as mu + tf.exp(0.5 * sigma) * epsilon.

Also make sure you go through my previous comment for the loss equation, where reconstructed is simply vae of x_batch_train, with no reshape.

Let me know if you are still encountering any errors.
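For reference, the sampling steps described above can be sketched as a small custom layer. This is a minimal sketch, not the grader code: the Sampling class name and tensor shapes are assumptions, and tf.random.normal is used here as the equivalent of the tf.keras.backend.random_normal call mentioned above.

```python
import tensorflow as tf

class Sampling(tf.keras.layers.Layer):
    """Reparameterization trick: z = mu + exp(0.5 * sigma) * epsilon."""
    def call(self, inputs):
        mu, sigma = inputs                 # each has shape (batch, latent_dim)
        batch = tf.shape(mu)[0]            # dynamic batch size, taken from mu only
        dim = tf.shape(mu)[1]              # latent dimension, taken from mu only
        # equivalent to tf.keras.backend.random_normal(shape=(batch, dim))
        epsilon = tf.random.normal(shape=(batch, dim))
        return mu + tf.exp(0.5 * sigma) * epsilon

mu = tf.zeros((4, 2))
sigma = tf.zeros((4, 2))
z = Sampling()((mu, sigma))
print(z.shape)  # (4, 2)
```

Taking batch and dim from the same tensor (here mu) keeps epsilon broadcast-compatible with both mu and sigma, which is exactly what the "broadcastable shapes" error complains about when it goes wrong.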


Thank you for your reply.
I have recalculated epsilon according to your instructions, but I still get an Incompatible shapes error like this.


Send me a screenshot of the code you corrected via personal DM. Don’t post code here.

I’ve sent the code via DM. Please check it.


Hi @anelieshrtmn

kindly remove the code line

loss += sum(vae.losses)


How about the step in the notebook:
“add the KLD regularization loss to the total loss (you can access the losses property of the vae model)”

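For context, that notebook step typically corresponds to a custom training step like the sketch below. This is a minimal, self-contained stand-in, not the assignment code: ToyVAE here is a plain dense autoencoder whose add_loss call merely mimics how a real VAE registers its KLD term, and the names mse_loss, optimizer, and x_batch_train are assumptions.

```python
import tensorflow as tf

class ToyVAE(tf.keras.Model):
    """Toy stand-in for the VAE, so that model.losses is non-empty."""
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(8)

    def call(self, x):
        out = self.dense(x)
        # stand-in for the KLD regularization term registered via add_loss
        self.add_loss(1e-3 * tf.reduce_mean(tf.square(out)))
        return out

vae = ToyVAE()
mse_loss = tf.keras.losses.MeanSquaredError()
optimizer = tf.keras.optimizers.Adam()

x_batch_train = tf.random.normal((4, 8))
with tf.GradientTape() as tape:
    reconstructed = vae(x_batch_train)             # feed a batch to the model
    loss = mse_loss(x_batch_train, reconstructed)  # reconstruction term
    loss += sum(vae.losses)                        # KLD term from add_loss
grads = tape.gradient(loss, vae.trainable_weights)
optimizer.apply_gradients(zip(grads, vae.trainable_weights))
```

The key pattern is that the losses property collects everything registered via add_loss during the forward pass, so adding sum(vae.losses) to the reconstruction loss brings in the KLD term.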

Did you remove the code I mentioned and then run the cell?

That loss was supposed to be named just loss, but you have named it kl_loss.

Yes, I have removed the code you mentioned, but it still produces the same error, because the error occurs in the earlier code that calculates the loss using mse_loss.


Hi @anelieshrtmn

in the def decoder_model(latent_dim, convolutional_shape):

inputs = tf.keras.layers.Input(shape=(latent_dim,)) (why was that comma used with latent_dim? Kindly remove it)

model = tf.keras.models.Model(inputs, outputs, name=“decoder”) (kindly remove models from this code line)

In the def vae_model, place your loss (kl_loss) code after the Model call for inputs and reconstructed, after which you add the loss to the model (also change kl_loss to loss).

Also, kindly run the cells from the beginning, one by one, after you make the corrections.

Let me know if the issue still persists.
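A generic decoder along the lines discussed above can be sketched like this. It is only a sketch: the layer stack, sizes, and the conv_shape default are all assumptions, not the assignment's architecture.

```python
import tensorflow as tf

def decoder_model(latent_dim, conv_shape=(7, 7, 32)):
    """Sketch of a VAE decoder; conv_shape and layer sizes are assumptions."""
    # shape expects a tuple, hence the one-element tuple (latent_dim,)
    inputs = tf.keras.layers.Input(shape=(latent_dim,))
    units = conv_shape[0] * conv_shape[1] * conv_shape[2]
    x = tf.keras.layers.Dense(units, activation="relu")(inputs)
    x = tf.keras.layers.Reshape(conv_shape)(x)
    x = tf.keras.layers.Conv2DTranspose(
        32, 3, strides=2, padding="same", activation="relu")(x)
    outputs = tf.keras.layers.Conv2DTranspose(
        3, 3, strides=2, padding="same", activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs, name="decoder")

decoder = decoder_model(16)
out = decoder(tf.zeros((2, 16)))
print(out.shape)  # (2, 28, 28, 3)
```

With strides of 2 in each Conv2DTranspose, the spatial dimensions double twice (7 → 14 → 28), so the latent vector is upsampled back to an image-shaped output.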


It’s solved; there was a missing layer in my encoder model. Thank you, Deepti, for helping.
