Hi,
I’ve been working on training a VAE on two input images of the indoor environment: a permittivity map and an access-point (AP) location map. I’ve already set up the encoder, the decoder, and the latent-space dimensions, roughly as sketched below.
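For reference, the network is wired along these lines (a simplified, untested sketch, not my exact architecture; the layer sizes and image resolution are placeholders, and the names just mirror the ones used in the loss code further down):

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 16
img_shape = (64, 64, 1)  # placeholder resolution

class Sampling(layers.Layer):
    # Reparameterization trick: z = mean + sigma * epsilon
    def call(self, inputs):
        z_mean, z_log_var = inputs
        eps = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * eps

# Two image inputs, stacked along the channel axis
permittivity_input = keras.Input(shape=img_shape, name="permittivity")
APloc_input = keras.Input(shape=img_shape, name="APloc")
x = layers.Concatenate(axis=-1)([permittivity_input, APloc_input])

# Encoder
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Flatten()(x)
z_mean = layers.Dense(latent_dim, name="z_mean")(x)
z_log_var = layers.Dense(latent_dim, name="z_log_var")(x)
z = Sampling()([z_mean, z_log_var])

# Decoder
d = layers.Dense(16 * 16 * 64, activation="relu")(z)
d = layers.Reshape((16, 16, 64))(d)
d = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(d)
d = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(d)
vae_output = layers.Conv2DTranspose(1, 3, padding="same", activation="sigmoid")(d)

vae = keras.Model([permittivity_input, APloc_input], vae_output, name="vae")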
To compute the overall loss from the two input tensors, I’ve implemented the following steps:
# Calculate reconstruction loss for permittivity_input
reconstruction_loss_perm = tf.reduce_mean(
    tf.reduce_sum(
        tf.keras.losses.binary_crossentropy(permittivity_input, vae_output),
        axis=(1, 2)))
# Calculate reconstruction loss for APloc_input
reconstruction_loss_APloc = tf.reduce_mean(
    tf.reduce_sum(
        tf.keras.losses.binary_crossentropy(APloc_input, vae_output),
        axis=(1, 2)))
# Compute KL divergence loss
kl_loss = -0.5 * tf.reduce_mean(
    tf.reduce_sum(
        1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var),
        axis=1))
# Total VAE loss
vae_loss = reconstruction_loss_perm + reconstruction_loss_APloc + kl_loss
# Add the total VAE loss to the model
vae.add_loss(vae_loss)
However, at the line where I add the loss to the model, vae.add_loss(vae_loss), I get the following error:

TypeError: unhashable type: 'DictWrapper'
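One direction I’m considering, in case add_loss itself is the problem, is computing the losses inside a custom train_step of a subclassed keras.Model instead. Below is an untested sketch of that idea; the class name and the split into encoder/decoder sub-models are placeholders, and it assumes the encoder returns (z_mean, z_log_var, z) and the decoder maps z back to an image:

class TwoInputVAE(keras.Model):
    def __init__(self, encoder, decoder, **kwargs):
        super().__init__(**kwargs)
        self.encoder = encoder
        self.decoder = decoder

    def train_step(self, data):
        permittivity, APloc = data
        with tf.GradientTape() as tape:
            z_mean, z_log_var, z = self.encoder([permittivity, APloc])
            output = self.decoder(z)
            # Same three loss terms as above, just computed eagerly per batch
            rec_perm = tf.reduce_mean(tf.reduce_sum(
                tf.keras.losses.binary_crossentropy(permittivity, output),
                axis=(1, 2)))
            rec_APloc = tf.reduce_mean(tf.reduce_sum(
                tf.keras.losses.binary_crossentropy(APloc, output),
                axis=(1, 2)))
            kl = -0.5 * tf.reduce_mean(tf.reduce_sum(
                1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var),
                axis=1))
            total = rec_perm + rec_APloc + kl
        grads = tape.gradient(total, self.trainable_weights)
        self.optimizer.apply_gradients(zip(grads, self.trainable_weights))
        return {"loss": total, "kl_loss": kl}

With this I could compile the model with an optimizer and call fit([perm_images, ap_images]) without add_loss at all, but I’d still like to understand what triggers the DictWrapper error in the first place.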
I would greatly appreciate any suggestions or insights on resolving this issue.
Thank you,
Rahul