The evaluation of GAN includes fidelity and diversity.

The course spent all of its time on how to measure fidelity.

But how is diversity measured?

Hi @mc04xkf

a standard divergence measure in Machine Learning in general is the Kullback-Leibler (KL) divergence, which measures how different (divergent) two probability distributions are. You can use it to modify the GAN loss function; see also this source, where other metrics are outlined as well. If you are interested in playing around, feel free to take a look at these repos, where useful divergence metrics are covered:

- GitHub - lzhbrian/metrics: IS, FID score Pytorch and TF implementation, TF implementation is a wrapper of the official ones.
- GitHub - martingerlach/jensen-shannon-alpha-divergence: Generalized Jensen-Shannon divergence using alpha-entropies
- GitHub - dorianHe/wasserstein_distance: Wasserstein Distance or Earth Mover's Distance explanation and its application.
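
For intuition, here is a minimal sketch of what the KL divergence computes for two discrete distributions (plain NumPy; the helper name `kl_divergence` is mine, not taken from any of the repos above):

```python
import numpy as np

def kl_divergence(p, q):
    # D_KL(p || q) for discrete distributions; 0 * log(0) is treated as 0.
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.70, 0.10, 0.10, 0.10]
print(kl_divergence(uniform, uniform))  # 0.0 - identical distributions
print(kl_divergence(skewed, uniform))   # > 0 - the more different, the larger
```

Note that KL is asymmetric (D_KL(p || q) != D_KL(q || p)), which is why you sometimes see the symmetrized Jensen-Shannon divergence instead, as in the second repo above.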

Best regards

Christian

@Christian_Simonis Diversity and divergence are not the same, are they?

Diversity means how many different kinds of images a GAN can produce, while divergence is the opposite of convergence (where the loss becomes more and more stable as training progresses).

Hi @mc04xkf

thanks for your comment!

You are right - in my previous post I basically included metrics for both: metrics used during training (like KL for loss-function adjustment) and metrics for diversity analysis, which you seem to be interested in. The first link in my previous post should be the most interesting for you, since e.g. the Inception Score (IS) can capture **diversity** using the KL divergence, see also: https://www.coursera.org/lecture/build-better-generative-adversarial-networks-gans/inception-score-HxtYM
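
To make that concrete, here is a rough sketch of the IS computation on a matrix of per-image class probabilities (in practice these come from a pretrained Inception classifier; here they are made-up toy values):

```python
import numpy as np

def inception_score(class_probs):
    # IS = exp( mean_x KL( p(y|x) || p(y) ) ), where p(y) is the marginal
    # label distribution over all generated images. A high score needs
    # confident per-image predictions (fidelity) AND a broad marginal (diversity).
    p_yx = np.asarray(class_probs, dtype=float)      # shape (N images, C classes)
    p_y = p_yx.mean(axis=0, keepdims=True)           # marginal label distribution
    eps = 1e-12                                      # numerical safety for log
    kl = np.sum(p_yx * (np.log(p_yx + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))

# Diverse, confident generator: each image hits a different class.
diverse = np.full((4, 4), 0.01) + np.eye(4) * 0.96
# Mode-collapsed generator: every image looks like class 0.
collapsed = np.tile([0.97, 0.01, 0.01, 0.01], (4, 1))
print(inception_score(diverse) > inception_score(collapsed))  # True
```

The mode-collapsed case scores 1 (the minimum), because every per-image distribution equals the marginal, so the average KL term is zero.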

The FID (Fréchet Inception Distance) is also worth a look, since it is often more aligned with human judgement: https://de.coursera.org/lecture/build-better-generative-adversarial-networks-gans/frechet-inception-distance-fid-LY8WK
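
And a sketch of the FID formula itself, assuming you already have Inception feature vectors for the real and generated images (replaced here by toy Gaussian features; `scipy.linalg.sqrtm` does the matrix square root):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    # Fit a Gaussian (mean, covariance) to each feature set, then return the
    # Frechet distance: ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 sqrt(C_r C_f)).
    mu_r, cov_r = feats_real.mean(axis=0), np.cov(feats_real, rowvar=False)
    mu_f, cov_f = feats_fake.mean(axis=0), np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):          # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 8))
close = rng.normal(0.1, 1.0, size=(500, 8))  # similar distribution -> small FID
far = rng.normal(3.0, 1.0, size=(500, 8))    # shifted distribution -> large FID
print(fid(real, close) < fid(real, far))     # True
```

Lower is better here (it is a distance, not a score), and because it compares full feature distributions rather than classifier labels, it penalizes mode collapse as well.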

Also: my understanding is that diversity and divergence are related, since a GAN with reasonable diversity (a sufficiently wide range of generated output) will also show a low KL divergence to the ground-truth label distribution. (Poor diversity, conversely, shows up as a mismatch to the ground-truth labels.)
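
As a toy illustration of that point (the helper `label_kl` is hypothetical, not a standard metric): classify real and generated images, histogram the predicted labels, and compare the two distributions with KL:

```python
import numpy as np

def label_kl(gen_labels, real_labels, num_classes, eps=1e-12):
    # KL( generated label distribution || real label distribution ),
    # with a small eps so empty classes do not blow up the log.
    p = np.bincount(gen_labels, minlength=num_classes) / len(gen_labels) + eps
    q = np.bincount(real_labels, minlength=num_classes) / len(real_labels) + eps
    return float(np.sum(p * np.log(p / q)))

real = np.arange(1000) % 10            # balanced real labels, 10 classes
diverse = np.arange(1000) % 10         # generator covers all classes
collapsed = np.zeros(1000, dtype=int)  # generator only produces class 0
print(label_kl(diverse, real, 10) < label_kl(collapsed, real, 10))  # True
```

The collapsed generator's label histogram diverges strongly from the ground truth, exactly the mismatch described above.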

Here is a paper on GANs I can recommend if you are looking for more information: https://arxiv.org/pdf/1807.04720v1.pdf

Hope that helps!

Best regards

Christian
