Calculating loss with Multiple Input Single Output scenario

Hi,

I’ve been working on training my VAE model using two input images: the indoor environment permittivity image and the indoor environment access point location image. I’ve successfully formulated the encoder, decoder, and latent space dimensions.

To compute the overall loss from the two input tensors, I’ve implemented the following steps:

# Calculate reconstruction loss for permittivity_input
reconstruction_loss_perm = tf.reduce_mean(
    tf.reduce_sum(
        tf.keras.losses.binary_crossentropy(permittivity_input, vae_output),
        axis=(1, 2),
    )
)

# Calculate reconstruction loss for APloc_input
reconstruction_loss_APloc = tf.reduce_mean(
    tf.reduce_sum(
        tf.keras.losses.binary_crossentropy(APloc_input, vae_output),
        axis=(1, 2),
    )
)

# Compute KL divergence loss
kl_loss = -0.5 * tf.reduce_mean(tf.reduce_sum(1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=1))

# Total VAE loss
vae_loss = reconstruction_loss_perm + reconstruction_loss_APloc + kl_loss

# Add the total VAE loss to the model
vae.add_loss(vae_loss)
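As a shape sanity-check, the same per-image reduction pattern can be sketched in plain numpy (the batch size of 4 and the 120x160x1 image shape are assumptions for illustration; `binary_crossentropy` here mirrors the Keras function's mean over the last axis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Dummy batch: 4 images of shape 120x160 with 1 channel, values in (0, 1)
y_true = rng.uniform(0.01, 0.99, size=(4, 120, 160, 1))
y_pred = rng.uniform(0.01, 0.99, size=(4, 120, 160, 1))

def binary_crossentropy(y_true, y_pred):
    # Mirrors tf.keras.losses.binary_crossentropy: mean over the last axis
    bce = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    return bce.mean(axis=-1)  # shape (4, 120, 160)

# Sum the per-pixel loss over the spatial axes, then average over the batch
per_image = binary_crossentropy(y_true, y_pred).sum(axis=(1, 2))  # shape (4,)
reconstruction_loss = per_image.mean()  # scalar

print(binary_crossentropy(y_true, y_pred).shape)  # (4, 120, 160)
print(per_image.shape)                            # (4,)
```

Walking through the shapes this way makes it easy to confirm that each `tf.reduce_sum(..., axis=(1, 2))` produces one loss value per image before the batch mean is taken.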

However, I’ve encountered the following error at the line where I add the loss to the VAE model (vae.add_loss(vae_loss)): TypeError: unhashable type: 'DictWrapper'.

I would greatly appreciate any suggestions or insights on resolving this issue.

Thank you,
Rahul

Hello @Rahul_Gulia

Please refer to this link, which explains why you are getting the TypeError:

https://stackoverflow.com/questions/13264511/typeerror-unhashable-type-dict
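For context, that thread boils down to the fact that Python dicts (and dict-like wrappers such as Keras’s DictWrapper) are mutable and therefore unhashable, so they cannot be used anywhere a hashable value is required. A minimal reproduction in plain Python (the dict contents are just placeholders):

```python
# A dict is mutable, so it has no hash and cannot be a set member or dict key
d = {"z_mean": 0.0, "z_log_var": 0.0}

try:
    {d}  # putting a dict into a set requires hashing it
except TypeError as e:
    print(e)  # unhashable type: 'dict'
```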

Let me know if you still need help figuring out how to correct it.

Regards
DP

I’m sharing this update for the benefit of others who may encounter a similar issue in their work.

After some troubleshooting, I managed to resolve the issue. It turned out that the reconstruction loss was functioning correctly, but adjustments were needed during the training phase. Specifically, I had to reshape the training and testing datasets as demonstrated below:

X_train_reshaped = X_train.reshape(-1, 120, 160, 1)
X_test_reshaped = X_test.reshape(-1, 120, 160, 1)
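To illustrate what this reshape does, here is a small numpy sketch (the batch size of 8 and the flat storage layout are assumptions; the actual dataset layout may differ):

```python
import numpy as np

# Hypothetical stand-in for X_train: 8 grayscale images stored flat
X_train = np.zeros((8, 120 * 160))

# -1 lets numpy infer the batch dimension; the trailing 1 adds the
# single channel axis that Keras convolutional layers expect
X_train_reshaped = X_train.reshape(-1, 120, 160, 1)

print(X_train_reshaped.shape)  # (8, 120, 160, 1)
```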

batch_size = 120
epochs = 10

# In an autoencoder, we fit the training data to itself.
# Train the model
history = vae.fit(
    [X_train_reshaped, X_train_reshaped],  # Input data
    epochs=epochs,                          # Number of epochs
    batch_size=batch_size,                  # Batch size
    validation_data=([X_test_reshaped, X_test_reshaped], None)  # Validation data
)

I hope this clarification helps others facing a similar challenge.

Thank you,
Rahul