Hi,

The generator loss for WGAN is given as

G.L = - torch.mean(fake_pred)

however, the loss function (excluding the gradient penalty) is

min_g max_c E(c(x)) - E(c(g(z)))

Is this because E(c(x)) = 0, since we are only evaluating fake images?

Thank you

Hi @Shahid_Ahmed,

Good question! This is just a simplification. For me, it's easiest to think of this from the intuitive perspective. The generator wants to create images that fool the discriminator. The higher the prediction from the discriminator, the more confident the discriminator is that the image is real, which is exactly what the generator wants.

Even though technically the minimax equation is E(c(x)) - E(c(g(z))), where E(c(x)) is the discriminator's prediction for real images, for the generator loss it doesn't really matter what that value is. Whatever it is, the overall calculation will be minimized when -E(c(g(z))) is minimized (i.e. when the discriminator predicts the largest score for the fake images). The only thing the generator controls is the fake images it creates, so the only thing it can improve is to create something the discriminator will think is more real.
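To make that concrete, here is a plain-Python sketch of the generator-loss arithmetic (no autograd, just the mean-and-negate step; the scores are made-up critic outputs for illustration):

```python
def generator_loss(fake_pred):
    """WGAN generator loss: the negative mean critic score.

    Minimizing this pushes the critic's scores for fake images
    upward, i.e. it maximizes E[c(g(z))].
    """
    return -sum(fake_pred) / len(fake_pred)

# Made-up critic scores for a batch of 4 fake images.
scores = [0.2, -0.5, 1.3, 0.7]
loss = generator_loss(scores)  # -(1.7 / 4) = -0.425
```

In PyTorch this is exactly the `-torch.mean(fake_pred)` from the original post. The real-image term E(c(x)) never appears, because the generator's parameters have no effect on it.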

On the other hand, the discriminator's goal is to give high predictions for real images and low predictions for fake images. In other words, it's important for the discriminator to increase the separation between those two predictions, which it can do by improving (increasing) its predictions for the real images or decreasing its predictions for the fake images.
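Accordingly, the critic's loss (excluding the gradient penalty) is the negated separation. A plain-Python sketch with made-up scores, just to show the sign convention:

```python
def critic_loss(real_pred, fake_pred):
    """WGAN critic loss, excluding the gradient penalty.

    The critic wants to maximize E[c(x)] - E[c(g(z))], so the
    loss to be minimized is the negative of that separation.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return -(mean(real_pred) - mean(fake_pred))

real_scores = [1.0, 0.8, 1.2, 0.6]   # critic scores for real images
fake_scores = [0.2, -0.5, 1.3, 0.7]  # critic scores for fake images
loss = critic_loss(real_scores, fake_scores)  # -(0.9 - 0.425) = -0.475
```

The larger the gap between the real and fake scores, the lower (more negative) this loss gets, which is why training the critic widens the separation.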

Hi Wendy,

Thank you for your reply. It was a really good explanation, and I agree with you that thinking about it intuitively makes more sense.

Also, just to add a point of clarity from the discussion above:

We are trying to maximize E(c(g(z))), but we negate it to -E(c(g(z))) when computing the generator loss, so that when we find a local minimum via gradient descent we actually move toward maximizing E(c(g(z))).
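That sign flip can be checked with a toy gradient-descent loop. Assume a made-up linear critic score f(w) = 2*w of a single generator parameter w (purely illustrative): descending on the loss -f(w) moves w in the direction that raises f(w).

```python
def f(w):
    # Hypothetical critic score as a function of one generator parameter.
    return 2.0 * w

w = 0.0
lr = 0.1
for _ in range(5):
    # loss = -f(w), so the analytic gradient is d(loss)/dw = -2.0
    grad = -2.0
    w -= lr * grad  # standard gradient-descent step on the loss

# Descending the loss has *increased* w, and with it the critic score f(w).
```

So minimizing the negated score via gradient descent is equivalent to maximizing the score itself, which is exactly the point above.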

Thank you.

I don't understand this very well. Does this mean that the generator wants to minimize the discriminator's predicted value, and that is why the generator puts a -1 in front of the expression E(c(g(z)))?

@zahra_hematy,

The generator wants to *maximize* the value the discriminator predicts for fake images. The larger the prediction, the more real the discriminator thinks the image is, which is what the generator wants. But when we talk about loss, a *smaller* loss is better, which means the larger the discriminator's prediction, the smaller we want the loss to be. That's why we take the negative.