Stop discriminator weight updates while updating the generator

When we train the discriminator, we call .detach() on the generator's output to remove the generator from the computation graph.

But we didn't do the same while training the generator. I understand that the generator's weights will be updated in this case, but won't the discriminator's weights also be updated, since they are learnable and have requires_grad=True?
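For concreteness, the discriminator step I'm describing looks roughly like this (a toy sketch with made-up module sizes, not the course code):

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the real models
G = nn.Linear(4, 8)   # generator: noise -> fake sample
D = nn.Linear(8, 1)   # discriminator: sample -> logit

opt_D = torch.optim.SGD(D.parameters(), lr=0.1)
loss_fn = nn.BCEWithLogitsLoss()

# --- discriminator step ---
opt_D.zero_grad()
real = torch.randn(16, 8)
fake = G(torch.randn(16, 4)).detach()   # cut G out of the graph
d_loss = (loss_fn(D(real), torch.ones(16, 1)) +
          loss_fn(D(fake), torch.zeros(16, 1)))
d_loss.backward()
opt_D.step()

# thanks to .detach(), no gradients ever reached the generator
assert all(p.grad is None for p in G.parameters())
```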

Hi @DonFeto,
We can't detach the discriminator when we compute the loss for the generator: the generator's loss is calculated by passing the generator's output through the discriminator, so we need to keep that whole chain in the computation graph for gradients to flow back to the generator. It isn't a problem that gradients also get computed for the discriminator, because we only apply them to the generator (by calling .step() on the generator's optimizer), and we're doubly safe because we zero out the gradients every time before starting backprop (by calling zero_grad()).

For a more detailed discussion of this topic, take a look at "Why should we detach the discriminators input ?!"