Why does the function for KL Reconstruction Loss in the Variational Autoencoder Lab contain input args `inputs` and `outputs` but not use them?

def kl_reconstruction_loss(inputs, outputs, mu, sigma):
  """ Computes the Kullback-Leibler Divergence (KLD)
  Args:
    inputs -- batch from the dataset
    outputs -- output of the Sampling layer
    mu -- mean
    sigma -- standard deviation

  Returns:
    KLD loss
  """
  kl_loss = 1 + sigma - tf.square(mu) - tf.math.exp(sigma)
  kl_loss = tf.reduce_mean(kl_loss) * -0.5

  return kl_loss

Is this some sort of TensorFlow requirement for loss functions?

Good question, @Steven1!

If you were writing a loss function to be used by model.compile(), then it would need the signature Keras expects: loss_fn(y_true, y_pred). But kl_reconstruction_loss() is just called by our own code to get a loss value, which is then passed to model.add_loss(), so I see no reason we should need those extra parameters.
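To make the distinction concrete, here is an illustrative plain-Python sketch (the function names framework_applies_loss and mse are mine, and this is a stand-in for Keras internals, not the real API): a loss passed to model.compile() is invoked by the framework itself, so its signature is fixed, whereas a value handed to model.add_loss() is computed by our own code first, so the helper that produces it can take whatever arguments it actually needs.

```python
def framework_applies_loss(loss_fn, y_true, y_pred):
    # Stand-in for how a compiled loss is called during training:
    # the framework decides the arguments, not the user.
    return loss_fn(y_true, y_pred)

def mse(y_true, y_pred):
    # Matches the required (y_true, y_pred) signature, so it can be
    # handed to the framework and called on its terms.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

print(framework_applies_loss(mse, [1.0, 2.0], [1.0, 4.0]))  # 2.0
```

A helper used only with add_loss() never goes through a call like framework_applies_loss, which is why its argument list is unconstrained.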

I suspect those parameters are just left over from an earlier implementation. I'll let the course staff know so they can clean it up - or explain the reason for them if I'm missing something.

I tried deleting inputs and outputs from kl_reconstruction_loss and, in the definition of vae_model, changing:
loss = kl_reconstruction_loss(inputs, z, mu, sigma)
to:
loss = kl_reconstruction_loss(mu, sigma)

and the VAE model still trains just fine.
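That matches what the formula itself says: only mu and sigma appear in it. Here is a quick sanity check of the trimmed two-argument version as a pure-Python sketch (math.exp stands in for tf.math.exp, and a list mean for tf.reduce_mean). Note that, despite the docstring calling sigma a "standard deviation", the formula treats it as a log-variance, which is why a standard-normal latent (mu = 0, sigma = 0) gives zero KL divergence from the N(0, 1) prior.

```python
import math

def kl_reconstruction_loss(mu, sigma):
    # Pure-Python stand-in for the trimmed TF function;
    # sigma is the log-variance, as in the lab's formula.
    terms = [1 + s - m ** 2 - math.exp(s) for m, s in zip(mu, sigma)]
    return -0.5 * (sum(terms) / len(terms))

# Standard-normal latent: KL divergence from the prior is zero.
print(kl_reconstruction_loss([0.0, 0.0], [0.0, 0.0]))

# Shifting the mean away from zero makes the KL term positive.
print(kl_reconstruction_loss([1.0], [0.0]))  # 0.5
```

Since inputs and outputs never enter the computation, dropping them changes nothing about the value produced, which is consistent with training behaving identically.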