Conditional GAN Training Part (Curiosity)

Oh! Good point, @Elemento! I was not thinking straight. Since detach() returns a new tensor that is cut off from the computation graph (it shares the underlying data rather than copying it), reuse would only be a problem if the generator's loss were computed on the detached tensor itself, which is not the case in the earlier assignments. So my theory about why those assignments didn't feed the generator the same fake images the discriminator saw is wrong. Maybe they just did it to emphasize that the generator is trained separately from the discriminator.

In any case, there is nothing special about conditional GANs as far as this approach is concerned. If you take the later courses in the GAN specialization, you'll see other exercises where the generator uses the same fake images as the discriminator. (It is, of course, still important to generate a fresh batch of fake images on each pass through the training loop; the main thing is that the generator is trained against a range of images.)
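Here's a minimal sketch of what reusing the same fake batch looks like in PyTorch. The tiny linear models are hypothetical stand-ins for the course's generator and discriminator, and the variable name fake_noise follows the assignment's convention; only the detach()/no-detach() distinction is the point.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-ins for the assignment's models (shapes are arbitrary).
gen = nn.Linear(8, 4)    # "generator": noise -> fake sample
disc = nn.Linear(4, 1)   # "discriminator": sample -> real/fake logit

gen_opt = torch.optim.SGD(gen.parameters(), lr=0.1)
disc_opt = torch.optim.SGD(disc.parameters(), lr=0.1)
criterion = nn.BCEWithLogitsLoss()

fake_noise = torch.randn(16, 8)
fake = gen(fake_noise)          # one batch of fakes, used for BOTH steps below

# Discriminator step: detach() stops gradients at the fake images,
# so this backward pass does not touch the generator's weights.
disc_opt.zero_grad()
disc_fake_pred = disc(fake.detach())
disc_loss = criterion(disc_fake_pred, torch.zeros_like(disc_fake_pred))
disc_loss.backward()
disc_opt.step()

# Generator step: reuse the SAME fake batch, without detach(), so
# gradients flow back through the discriminator into the generator.
gen_opt.zero_grad()
gen_pred = disc(fake)
gen_loss = criterion(gen_pred, torch.ones_like(gen_pred))
gen_loss.backward()
gen_opt.step()
```

Because fake.detach() shares data with fake but lives outside the graph, the discriminator step cannot accidentally update the generator, and the generator step still works on the original (attached) fake tensor.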

If you're curious, you could run an experiment: go back to one of the earlier exercises, use fake_noise instead of fake_noise_2 for the generator, and check that your results are of similar quality to what you get with fake_noise_2.