Conditional GAN vs. K Unconditional GANs?

What are the conceptual benefits of training a conditional GAN with K classes versus training K unconditional GANs, one for each class?

For example, I could train a conditional GAN on the handwritten digits (i.e., 10 classes). Alternatively, I could train 10 unconditional GANs, one for each digit. Does the conditional GAN use information from one digit to help model the distribution of other digits? If not, and each class distribution is estimated independently, then is the only benefit of a conditional GAN that it is less cumbersome (i.e., 1 model vs. 10 models)?

If you were to train an unconditional GAN for each class, you would end up with K separate models, each estimating the distribution of a single class independently. A conditional GAN, by contrast, shares its generator and discriminator weights across all K classes, so features learned from one digit (e.g., common stroke shapes) can help model the others; the class label only steers which part of the shared distribution gets sampled. In both setups, the GAN learns by alternately making the generator better at creating samples and the discriminator better at telling them apart from real data.
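To make the weight-sharing point concrete, here is a minimal pure-Python sketch (not a trainable model; the layer size, dimensions, and function names are illustrative assumptions, not from any specific library): a conditional generator concatenates a one-hot class label to the noise vector and pushes both through a single shared weight matrix, so every class reuses the same parameters. K unconditional generators would instead each carry their own copy of such weights.

```python
import random

NOISE_DIM, NUM_CLASSES, HIDDEN = 4, 10, 8  # illustrative sizes

def one_hot(label, num_classes=NUM_CLASSES):
    """One-hot encode a class label (e.g. a digit 0-9)."""
    v = [0.0] * num_classes
    v[label] = 1.0
    return v

# One shared weight matrix maps [noise ; label] -> hidden features.
# Every class reuses these same weights, so structure learned while
# modeling one digit can benefit all the others.
random.seed(0)
W_shared = [[random.gauss(0, 0.1) for _ in range(NOISE_DIM + NUM_CLASSES)]
            for _ in range(HIDDEN)]

def conditional_generator(noise, label):
    """Forward pass of one linear layer of a conditional generator."""
    x = noise + one_hot(label)  # concatenate noise and condition
    return [sum(w * xi for w, xi in zip(row, x)) for row in W_shared]

# The same noise vector yields different outputs for different labels,
# even though all weights are shared across classes.
z = [random.gauss(0, 1) for _ in range(NOISE_DIM)]
out_3 = conditional_generator(z, 3)
out_7 = conditional_generator(z, 7)
print(out_3 != out_7)  # True: the label steers the shared model
```

A real conditional GAN (e.g., on MNIST) does the same thing with deeper networks and often a learned label embedding instead of a raw one-hot vector, but the key idea is identical: one set of shared weights, conditioned by the label.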

Great question :slight_smile: