Ideally, the generator loss should decrease over time. But it has increased from ~1.4 to ~4.15.
Can you please explain why?
It is possible for the generator loss to increase in the beginning, because at that stage the discriminator performs much better than the generator.
Thank you for your answer @gent.spah
But even at the end, when the model had balanced out and was generating real-looking data, the generator loss was still ~4.15 (it never came back down). Can you tell me why?
Hi Rucha! Welcome to the community. Hope you are doing well.
This is the most interesting part of GANs.
The generator’s loss depends on the discriminator’s output. Initially both networks perform poorly, so the discriminator may fail to classify even a badly generated image as fake; as a result, the generator loss can be lower early on than in the middle stages. Then the discriminator catches up and quickly outperforms the generator (you can see that when the generator’s loss is ~4, the discriminator’s loss is very close to 0). At that point, almost everything the generator produces gets classified as fake, which drives the generator’s loss up even though it is producing more realistic images than in the initial stages.
But soon the generator picks up too, and its loss does come back down, rather than staying high as you described.
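To see why a confident discriminator pushes the generator's loss up, here is a minimal plain-Python sketch of the BCE losses (my own helper names, not the assignment's PyTorch code; `d_fake` is the discriminator's probability that a generated image is real):

```python
import math

def bce_generator_loss(d_fake: float) -> float:
    # Non-saturating BCE generator loss: -log(D(G(z))).
    # The generator wants the discriminator to call its fakes real.
    return -math.log(d_fake)

def bce_discriminator_loss(d_real: float, d_fake: float) -> float:
    # BCE discriminator loss, averaged over the real and fake terms:
    # -(log(D(x)) + log(1 - D(G(z)))) / 2
    return -(math.log(d_real) + math.log(1.0 - d_fake)) / 2

# A weak discriminator gives fakes a decent "real" probability,
# so the generator loss is moderate.
print(bce_generator_loss(0.25))

# A strong discriminator gives fakes a tiny "real" probability,
# so the generator loss shoots up, while its own loss drops near 0,
# exactly the pattern in the training logs below.
print(bce_generator_loss(0.02))
print(bce_discriminator_loss(0.98, 0.02))
```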
My results from that assignment:
Step 1000: Generator loss: 1.7552806265354173, discriminator loss: 0.2786566876769066
Step 1500: Generator loss: 2.056151607751848, discriminator loss: 0.15896223971247678
Step 7500: Generator loss: 4.523370460510255, discriminator loss: 0.04648638882115483
Step 77500: Generator loss: 1.5350538625717158, discriminator loss: 0.39222454440593674
Step 85500: Generator loss: 1.2603452231883994, discriminator loss: 0.49649087184667595
Step 90000: Generator loss: 1.3605687420368198, discriminator loss: 0.4443725310564038
Step 93500: Generator loss: 1.2486372106075276, discriminator loss: 0.4813919951915738
They keep fighting till eternity
But as I said, there is still instability in training. As you progress through the course, you will find loss functions better than BCE that address some of these problems.
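For instance, one common alternative to BCE is the Wasserstein (W-) loss, where the critic outputs an unbounded score instead of a probability. A minimal sketch under my own naming (not the course's exact code):

```python
from statistics import mean

def w_critic_loss(crit_real_scores, crit_fake_scores):
    # The critic tries to score real images higher than fakes;
    # minimizing this pushes the two score distributions apart.
    return mean(crit_fake_scores) - mean(crit_real_scores)

def w_generator_loss(crit_fake_scores):
    # The generator tries to raise the critic's scores on its fakes.
    return -mean(crit_fake_scores)

# When the critic scores reals high and fakes low, its loss is
# very negative and the generator's loss is positive.
print(w_critic_loss([3.0, 2.5], [-1.0, -0.5]))  # -3.5
print(w_generator_loss([-1.0, -0.5]))           # 0.75
```

Because these losses are not squashed through a log of a probability, their gradients do not vanish when the critic becomes confident, which is part of why W-loss training tends to be more stable.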
Regards,
Nithin
Thank you, I get it now.
I realized I should have taken more epochs to test this.
Regards,
Rucha