Why we use torch.zeros_like and torch.ones_like when calculating the loss

When calculating the loss functions, we use torch.zeros_like in the fake discriminator loss and torch.ones_like in the real discriminator loss. I just don’t understand why we do this. Why not the reverse? And also, why do we detach the fake image in the get_disc_loss function?

{moderator edit - solution code removed}

Those are two separate issues. The purpose of ones_like and zeros_like, as opposed to torch.ones and torch.zeros, is that the “like” functions also allocate the new tensor on the same device (and with the same shape and dtype) as the base tensor, without any work on your part to figure out what the device assignment actually is.
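To illustrate the device point, here is a minimal sketch (the `base` tensor is a stand-in for the discriminator's predictions; the variable names are my own, not from the course code):

```python
import torch

# Stand-in for a tensor that may live on the GPU, e.g. discriminator predictions.
base = torch.randn(4, 1, device="cuda" if torch.cuda.is_available() else "cpu")

# ones_like/zeros_like inherit shape, dtype, AND device from the base tensor,
# so no manual device bookkeeping is needed.
real_labels = torch.ones_like(base)
fake_labels = torch.zeros_like(base)

assert real_labels.device == base.device
assert fake_labels.shape == base.shape
```

With plain torch.ones(4, 1) you would instead have to pass the device explicitly, which is easy to forget and causes cross-device errors.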

The detach question is a completely different issue. Please see this thread for an explanation.
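As a quick illustration of the mechanics (the linked thread covers the reasoning), here is a minimal sketch of what detach() does; the tensor here is just a stand-in for the generator's output:

```python
import torch

# Stand-in for a generator output that is part of the autograd graph.
gen_out = torch.randn(3, requires_grad=True)

# detach() returns a view that shares the same storage but is cut off from
# the graph, so gradients from the discriminator loss won't flow back into
# the generator's parameters.
detached = gen_out.detach()

assert not detached.requires_grad
assert detached.data_ptr() == gen_out.data_ptr()  # same underlying memory
```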

Thank you, I understand detaching now, but my first question was not what you answered. I know the difference between ones_like and ones; my actual question is this: when calculating the loss we use ground truths, but when we use ones_like and zeros_like the truths become all 0s or 1s. So shouldn’t we use the correct labels from the dataset? I hope you understand my point.

Anyone wondering about the loss functions of GANs can read this. After reading this article I understood everything about the loss functions.

But the point is that the only “labels” you have are the fact that the real images are real. The only training data you have is a set of real images. Then you know that anything produced by the generator is “fake” by definition, right? So you define the cost functions for each in the appropriate way: the generator’s goal is to fool the discriminator into thinking fakes are real, and the discriminator’s goal is not to be fooled in that way. It is the \hat{y} values produced by the discriminator that will not be exactly equal to 0 and 1, but we need to construct the ground-truth 0 and 1 values from the definition of the only input data, which is the real images.
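To make that concrete, here is a rough sketch of how the two losses could be built. The original solution code was removed above, so these function bodies are my own assumptions, not the course's implementation (only the name get_disc_loss appears in the question):

```python
import torch
import torch.nn.functional as F

def get_disc_loss(disc, real, fake):
    # Anything from the generator is "fake" by definition -> target all zeros.
    # detach() so discriminator updates don't backprop into the generator.
    fake_pred = disc(fake.detach())
    fake_loss = F.binary_cross_entropy_with_logits(
        fake_pred, torch.zeros_like(fake_pred))
    # Every image from the dataset is real by definition -> target all ones.
    real_pred = disc(real)
    real_loss = F.binary_cross_entropy_with_logits(
        real_pred, torch.ones_like(real_pred))
    return (fake_loss + real_loss) / 2

def get_gen_loss(disc, fake):
    # The generator wants the discriminator to call its fakes real -> target ones.
    fake_pred = disc(fake)
    return F.binary_cross_entropy_with_logits(
        fake_pred, torch.ones_like(fake_pred))
```

Note the zeros/ones targets are constructed, not read from the dataset: there is no "fake" class in the data, and the real images need no per-image label because they are all real.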

Thank you, I got the idea :slightly_smiling_face:
