Conceptual Question: Why do we interpolate the real and generated images?

(I apologize again for all these questions. This week was particularly confusing for me.)

I know this is explained in the video, but I don’t understand the explanation.

  1. Is the feature space just the set of all the variables? If so, why would it be impossible or impractical to check all of the critic’s gradients, when we’re able to calculate them earlier?

  2. How exactly does an interpolation help? Wouldn’t the number of variables, and therefore the number of gradients to check, still be the same?

@Harvey_Wang ,

  1. In WGAN-GP, we interpolate real and generated images in order to enforce the critic’s 1-Lipschitz constraint: the gradient penalty asks that the norm of the critic’s gradient be close to 1 at its inputs, and interpolated images give us points at which to check this.
    The relevant space here is the space of all possible inputs to the critic, i.e. every possible combination of pixel values, not just the model’s variables. Computing the gradient at any *single* point is cheap (that’s what backpropagation does), but that space is continuous and astronomically large, so it is impossible to check the gradient at *every* point in it.
  2. Interpolation is the practical compromise. Instead of checking everywhere, we sample a random point on the line between a real image and a generated image (one random mixing weight per pair) and penalize the critic only at those sampled points.
    The number of variables per image does not change when we interpolate. What shrinks is the number of *points* at which we check the gradient: one interpolated point per real/generated pair in the batch, rather than every point in the input space. Those interpolated points also lie in the region between the real and generated distributions, which is exactly where the critic’s behavior matters most.
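To make the interpolation step concrete, here is a rough NumPy sketch (not the course’s actual code). The image shapes and the linear critic are illustrative assumptions; a linear critic `c(x) = w @ x` has gradient `w` everywhere, so the penalty can be shown in closed form without autograd:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": flattened 4x4 grayscale, batch of 8 (assumed shapes).
batch, dim = 8, 16
real = rng.uniform(0.0, 1.0, size=(batch, dim))
fake = rng.uniform(0.0, 1.0, size=(batch, dim))

# One random mixing weight per image, broadcast across all pixels.
eps = rng.uniform(0.0, 1.0, size=(batch, 1))
interpolated = eps * real + (1.0 - eps) * fake

# Linear critic c(x) = w @ x: its gradient w.r.t. x is w at every point,
# so we can compute the gradient penalty without a framework's autograd.
w = rng.normal(size=dim)
grad = np.tile(w, (batch, 1))             # d c(x) / d x for each sample
grad_norm = np.linalg.norm(grad, axis=1)  # one norm per interpolated image
penalty = np.mean((grad_norm - 1.0) ** 2)

print(interpolated.shape)  # (8, 16): same dimensionality as the inputs
print(penalty)
```

Note that `interpolated` has the same shape as the inputs — the number of variables per image is unchanged — but we evaluate the gradient at only `batch` points, one per real/fake pair, instead of everywhere.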