[C1_W2_Assignment] Discriminator training code: Why don't use "fake.requires_grad_ = False" instead of "fake.detach()"

@TRAN_KHANH1, to add to @gautamaltman’s comments:

I can think of one reason you might choose detach(). Since detach() returns a new tensor that is cut off from the computation graph, using fake.detach() for the discriminator still lets you reuse the same fake with your generator, where you do want the gradients. I think some of the assignments take advantage of this.
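Here's a minimal sketch of that reuse pattern (toy linear generator/discriminator and loss, purely illustrative, not the assignment's actual models):

```python
import torch
import torch.nn as nn

# Toy stand-ins for the generator and discriminator (illustrative only)
gen = nn.Linear(4, 4)
disc = nn.Linear(4, 1)

noise = torch.randn(8, 4)
fake = gen(noise)  # forward pass through the generator, run once

# Discriminator step: detach() returns a NEW tensor cut off from the
# generator's graph, so this backward() leaves gen's grads untouched.
disc_loss = disc(fake.detach()).mean()
disc_loss.backward()
assert gen.weight.grad is None       # no gradient flowed into the generator
assert disc.weight.grad is not None  # the discriminator did get gradients

# Generator step: the ORIGINAL fake still carries its graph, so we can
# reuse it here instead of running gen(noise) a second time.
gen_loss = -disc(fake).mean()
gen_loss.backward()
assert gen.weight.grad is not None   # now the generator gets gradients
```

The key point is that the generator's forward pass runs only once; detach() just carves off a gradient-free view of its output for the discriminator update.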

I suspect the course developers used detach() in all the assignments for consistency, to help students focus on the main concepts. But you're absolutely right that, as long as you don't need or want to reuse fake, the requires_grad_ approach is more efficient.