Conditional Wasserstein GAN

Hi! I’m working on a Conditional Wasserstein GAN (CWGAN). I tried to combine the Wasserstein loss with gradient penalty and a conditional GAN. There aren’t many good sources explaining CWGAN, but thanks to you all — your explanations of conditional GANs and of the Wasserstein loss were great — I was able to put it together, though I don’t know if it’s right or wrong. I’m still training the model on data that I scraped from Flipkart. It contains five categories — headphones, keyboard, monitor, laptop and mouse — with around 200-300 images in each category.

My hyperparameters are:
Batch size = 40
Image size = 256
Optimizer (both generator and critic) = RMSprop, with learning rates of 0.00005 for the generator and 0.0005 for the critic.

The loss function is as follows:
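In equation form, what I’m aiming for is the standard WGAN-GP objective, with the class label $y$ fed to the critic alongside the image (this is the intent; whether my code actually computes this is exactly what I’m unsure about):

$$
\mathcal{L}_{\text{critic}} = \mathbb{E}_{\tilde{x} \sim \mathbb{P}_g}\big[C(\tilde{x}, y)\big] - \mathbb{E}_{x \sim \mathbb{P}_r}\big[C(x, y)\big] + \lambda\, \mathbb{E}_{\hat{x}}\Big[\big(\lVert \nabla_{\hat{x}} C(\hat{x}, y) \rVert_2 - 1\big)^2\Big]
$$

$$
\mathcal{L}_{\text{gen}} = -\mathbb{E}_{\tilde{x} \sim \mathbb{P}_g}\big[C(\tilde{x}, y)\big]
$$

where $\hat{x} = \epsilon x + (1 - \epsilon)\tilde{x}$ with $\epsilon \sim U[0, 1]$, and $\lambda$ is the gradient-penalty weight.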

The model is trained for 400 epochs, or 16,000 iterations.

But so far the model has not converged. I don’t know if I’m on the right path, whether the model I built is correct, or whether it is stuck in mode collapse.

I’m also not sure about the implementation of the interpolation step (for the gradient penalty) in a conditional GAN. My implementation is as follows:

I have used the same labels that are fed to the critic for the interpolated images as well. Is that right?
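To make the question concrete, here is a simplified sketch of the idea (the function and variable names are placeholders, and it assumes the critic is a Keras model taking `[image, label]` as inputs; the actual code is in the Colab linked below):

```python
import tensorflow as tf

def gradient_penalty(critic, real_images, fake_images, labels, gp_weight=10.0):
    """Gradient penalty with a critic conditioned on class labels.

    Only the images are interpolated; the labels from the real batch are
    reused unchanged, and the gradient is taken w.r.t. the images only.
    """
    batch_size = tf.shape(real_images)[0]
    # One random mixing coefficient per image in the batch
    epsilon = tf.random.uniform([batch_size, 1, 1, 1], 0.0, 1.0)
    interpolated = epsilon * real_images + (1.0 - epsilon) * fake_images

    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        # Same labels that were fed to the critic for the real/fake passes
        scores = critic([interpolated, labels], training=True)

    grads = tape.gradient(scores, interpolated)
    norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]) + 1e-12)
    return gp_weight * tf.reduce_mean(tf.square(norm - 1.0))
```

So the labels are reused unchanged and the gradient is taken with respect to the interpolated images only; that is my understanding of how the penalty should extend to the conditional case, but I’d like confirmation.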

I would like someone to tell me whether what I implemented is right or wrong, and to guide me towards building a stable model. Thank you in advance; please, someone help me with this.

Below is the Colab link for my implementation:

I’ll add some of the sample images generated after 400 epochs. The samples are uploaded in this format (original and generated), and they belong to the classes headphones, keyboard, laptop, monitor and mouse.

Hey @Darshan_dcode,
Welcome to the community. If you are implementing a GAN on your own for the first time, then the best thing to do, in my opinion, is to take the code already provided to us as a reference. I think you can easily adapt the provided code to your dataset with just a few changes instead of rewriting the entire thing yourself. This will keep the number of errors down, and as you implement it more and more, you won’t need to refer to the code anymore.

But keeping that aside, you mentioned that there are only 200-300 images in each category, i.e. around 1k-1.5k images in total. I think this highlights an issue which you must take into account: to make GANs work well, you generally need a much larger dataset. Even if you compare your dataset size with MNIST, you will find that MNIST has 70k images in total.

As for the errors, it would be great if you could cross-reference your code against the provided one; that will help you spot the mistakes, if any. Hope this helps :innocent:

Hello @Elemento. Thanks for replying. I built the model using the Keras code as a reference. The thing is, I couldn’t find any CWGAN implementation anywhere. Though my model is programmatically correct and runs, I don’t know whether it is logically correct.

For instance, I don’t know whether my implementation of the gradient penalty is correct in the context of a conditional GAN. It was originally proposed for the Wasserstein GAN, which is an unconditional GAN, so no class labels were involved. But I’m using class labels together with the W-loss. So I just want someone to look at my implementation and tell me if it’s right or wrong. It would be a big help for me.

And about the dataset: I agree with you, my dataset is very small. But think about real-world problems; sometimes you have to work with small datasets like this, and even with some augmentation you will not have enough data. That is exactly what I’m trying to achieve with this model: generating more samples from an existing small or limited amount of data.

So please, someone look at my implementation and tell me whether I’m going in the right direction. Also, I don’t want to use any transfer learning; I want to train the model on my own.

Thanks in advance.

And thank you @Elemento once again for your input.

Hey @Darshan_dcode,
Sorry, I missed the fact that the Conditional GAN we implemented in the course was without the W-loss, and hence gave you that advice to follow the code provided in the course. I will surely try to go through your code whenever I get some time.

And as for the practicality of this application: when we have datasets this small, I do think transfer learning is a great way to go. I am not really sure why you would prefer not to use it.

Additionally, if you would like to increase your dataset size, augmentation can be a great way. Even doubling your dataset, which can easily be done with simple augmentations, should show some improvement.
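For instance, a couple of simple random transforms (just a rough sketch, assuming a `tf.data` pipeline of (image, label) pairs) can already give the model a lot more variety to learn from:

```python
import tensorflow as tf

# Mild augmentations; horizontal flips and small rotations/zooms/shifts
# are usually safe for product photos like headphones and keyboards.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.03),          # roughly +/- 10 degrees
    tf.keras.layers.RandomZoom(0.1),
    tf.keras.layers.RandomTranslation(0.05, 0.05),
])

# Applied on the fly while building the dataset; the labels stay unchanged
dataset = dataset.map(lambda image, label: (augment(image, training=True), label))
```

Since the transforms are applied on the fly, every epoch effectively sees slightly different images without you having to store any extra files.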

Furthermore, one more thing you can do is simplify the architectures of the generator and the discriminator. This is just a hunch, though. My line of thought is that, since the dataset is not very large, overly complex generators and discriminators will eventually lead to overfitting; hence, reducing the number of parameters in both the generator and the discriminator might lead to better results. Let me know what you think about this strategy!