Bad output of GAN for hand image generation

Hi guys,
I hope this is a suitable place to ask.
I have built the generator and discriminator with the recommended architecture, but after 60 epochs the results were not satisfying.
Can anyone tell me possible reasons, or recommend a configuration for it?
Thanks in advance.

Hi @Khaled_Mohammed,

It’s a difficult assignment; I remember that I needed multiple submissions to pass it. :slight_smile:

I used the recommended architecture, and my hands didn’t look like “normal” hands to me, but some of them were good enough to fool the neural network.

There are some hacks you can implement if you want to achieve, or at least try for, a better result.
This is a good starting point:

But if you want, try changing the ‘selu’ activation in the generator to ‘relu’. No idea if it will produce better results, but it’s a common GAN hack.
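To make that swap concrete, here is a minimal generator sketch. The layer sizes and `latent_dim` are placeholders I made up for illustration, not the assignment’s actual architecture; the only point is that the hidden-layer activation is a parameter you can flip from `'selu'` to `'relu'`:

```python
import tensorflow as tf
from tensorflow import keras

def build_generator(latent_dim=32, activation="relu"):
    # Hypothetical DCGAN-style generator: upsample a latent vector
    # to a 28x28 grayscale image. Swap activation="relu" for "selu"
    # to compare the two.
    return keras.Sequential([
        keras.Input(shape=(latent_dim,)),
        keras.layers.Dense(7 * 7 * 128, activation=activation),
        keras.layers.Reshape((7, 7, 128)),
        keras.layers.Conv2DTranspose(64, kernel_size=5, strides=2,
                                     padding="same", activation=activation),
        # tanh output keeps pixel values in [-1, 1], as is common for GANs
        keras.layers.Conv2DTranspose(1, kernel_size=5, strides=2,
                                     padding="same", activation="tanh"),
    ])

generator = build_generator()
print(generator.output_shape)  # (None, 28, 28, 1)
```

Keeping the activation as a function argument makes it easy to train both variants and compare the generated hands side by side.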

Hope it helps :slight_smile:


Do you remember how many epochs you used for this assignment?

Hi @Deepti_Prasad ,

20 epochs!

60 epochs were given, so can we reduce or increase the epochs? It was written that we can change the number of epochs.

Also, I want to know: what is the use of the discriminator if, in the end, it is human eyes trying to evaluate and select the generated images? I found the last two weeks’ assignments really confusing, because they seem to complicate a simple thing just to see a variation in the model or image, but in the end we either use the loss function or the human eye to match with the grader.

I am really looking forward to hearing a good explanation for this part of the question, @ai_curious.

Thank You

I haven’t looked at this code for a long time, and I’m not sure I have an environment that can run it now. I do remember that one of my ‘a-ha’ moments and takeaways was that human eyes aren’t assessing what a hand looks like the same way a CNN does. And ultimately it doesn’t matter what human eyes and brains think, unless you’re trying to generate ‘art’. Trust the math.

I think it’s a matter of time and computation; with hours of training the model could surely produce better hands. But I remember that even DeepMind had problems generating hands :slight_smile: .

The discriminator is learning at each epoch, so it will get better and better at identifying fake images, at least if everything works as it should.
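To make the discriminator’s role concrete, here is a minimal sketch of one adversarial training step (the function and variable names are illustrative, not the assignment’s code). The discriminator is trained to label real images 1 and generated images 0, and the generator is trained to make the discriminator output 1 on its fakes; that automated feedback loop is what drives learning, and the human-eye check at the end is only a final sanity check:

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def train_step(generator, discriminator, g_opt, d_opt, real_images, latent_dim=32):
    batch = tf.shape(real_images)[0]
    noise = tf.random.normal((batch, latent_dim))
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(noise, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)
        # Discriminator: push real logits toward 1, fake logits toward 0
        d_loss = (bce(tf.ones_like(real_logits), real_logits)
                  + bce(tf.zeros_like(fake_logits), fake_logits))
        # Generator: try to make the discriminator call its fakes real
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    d_opt.apply_gradients(zip(
        d_tape.gradient(d_loss, discriminator.trainable_variables),
        discriminator.trainable_variables))
    g_opt.apply_gradients(zip(
        g_tape.gradient(g_loss, generator.trainable_variables),
        generator.trainable_variables))
    return d_loss, g_loss
```

Each epoch runs this step over all batches, so the generator never sees the images directly; it only learns from the gradient of the discriminator’s judgment.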

Totally agree with @ai_curious. I think I remember that it was better to select the closed hands, whether they looked like a hand to me or not.

I have spent 3 days running the model, and since I had already posted a question about neural style transfer being too slow, I realised GANs are even slower than neural style transfer, as we have to include all the images and it takes a lot of time. I even didn’t sleep one night, hoping I would clear this assignment, but to no avail.

So you are saying what we see is a myth :rofl: :joy: