Why should we detach the discriminator's input?!

Thanks for your explanation, but there is one thing I didn't get. You said that when we train the discriminator we detach the generator's output, but not the other way around. However, when we train the generator we have to go back through the discriminator, and there we run into the discriminator's input, which had been detached:

  1. We start from **`gen_loss.backward()`**;
  2. then `criterion(disc_fake_pred, torch.ones_like(disc_fake_pred))`;
  3. then we go into `disc(fake_image_and_labels)`;
  4. then `torch.cat((`**`fake.float().detach()`**`, image_one_hot_labels.float()), 1)`.

  The parts in bold are the ones that matter in the backpropagation process.
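
To make the chain concrete, here is a minimal runnable sketch of the path I listed above. `gen`, `disc`, and the shapes are my own stand-ins for illustration, not the actual assignment code; only the `torch.cat(...)`/`criterion(...)` calls are quoted from it:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in models and shapes (my assumptions, not the course's):
z_dim, n_classes, img_dim = 16, 10, 784
gen = nn.Sequential(nn.Linear(z_dim + n_classes, img_dim), nn.Tanh())
disc = nn.Sequential(nn.Linear(img_dim + n_classes, 1))
criterion = nn.BCEWithLogitsLoss()

noise = torch.randn(8, z_dim)
one_hot_labels = F.one_hot(torch.randint(0, n_classes, (8,)), n_classes).float()
image_one_hot_labels = one_hot_labels  # flat images here, so no spatial repeat

# Steps 4 -> 3 -> 2 -> 1 from my list, with the detached cat:
fake = gen(torch.cat((noise, one_hot_labels), dim=1))
fake_image_and_labels = torch.cat(
    (fake.float().detach(), image_one_hot_labels.float()), 1)
disc_fake_pred = disc(fake_image_and_labels)
gen_loss = criterion(disc_fake_pred, torch.ones_like(disc_fake_pred))
gen_loss.backward()

# detach() cuts the graph, so the generator receives no gradient at all:
print(all(p.grad is None for p in gen.parameters()))  # True
```

If this really were the path taken during generator training, `gen_loss.backward()` would never reach the generator's weights, which is exactly what I don't understand.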