Critic loss reaches large negative values comparable to the generator's

Hi, I tried running some experiments on a different dataset (a single class from Quick, Draw!) and observed the following.

The critic's loss is growing increasingly negative as training proceeds. Compare this to the provided MNIST-based formulation, where the critic's loss starts to hover around zero.
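For reference, here is a minimal sketch of a WGAN-GP critic loss (assuming the gradient-penalty formulation; `get_crit_loss`, `gp`, and `c_lambda` are illustrative names), which shows why increasingly negative values can occur:

```python
import torch

def get_crit_loss(crit_fake_pred, crit_real_pred, gp, c_lambda):
    # WGAN-GP critic loss: E[D(fake)] - E[D(real)] + lambda * gradient penalty.
    # As the critic gets better at separating real from fake, D(real) rises
    # and D(fake) falls, so this value drifts further below zero; its
    # magnitude tracks the estimated Wasserstein distance between the
    # real and generated distributions.
    return torch.mean(crit_fake_pred) - torch.mean(crit_real_pred) + c_lambda * gp
```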

Is the critic finding it hard to learn this dataset? I assume better architectures for both networks are coming in the next lessons of this specialization.

Has anyone observed a similar case? What are your thoughts?

Hey @amarnathsatrawala,

I hope you are doing well. It's great to see that you are applying your learnings to different datasets. You could also share your work and observations with everyone as a Google Colab notebook for better understanding!

Cheers!
Sharob

Hi Sharob,

Thanks for the encouraging note. I have started to apply the learnings at work and on hobby projects with success. I am actively planning to share the material (e.g., a recent architecture I designed to approximate the performance of hundreds of NNs of a special type with a single training run instead of many).

In this case, I was trying to use WGANs to learn and classify dragon figures into two classes (one type I was interested in versus the rest). However, that doesn't seem feasible with simple GANs. This pushed me to venture into the broader world of unsupervised learning.

Sincerely,
Amar


Check this notebook for a way to train and explore 100s of models with one model.

https://htmlpreview.github.io/?https://github.com/ansatrawala/pub/blob/main/hypernets/HyperNet-MNIST.html
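Roughly, the idea is a hypernetwork: one small network generates the weights of many target networks from a code vector, so different codes act like different models. The notebook has the real details; this is only a minimal sketch, and names like `HyperLinear` and `z_dim` are illustrative:

```python
import torch
import torch.nn as nn

class HyperLinear(nn.Module):
    """A linear layer whose weights are produced by a hypernetwork
    conditioned on a code vector z (e.g. an embedding of a model ID)."""
    def __init__(self, z_dim, in_features, out_features):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        # Hypernetwork: maps the code z to a full weight matrix and bias.
        self.weight_gen = nn.Linear(z_dim, in_features * out_features)
        self.bias_gen = nn.Linear(z_dim, out_features)

    def forward(self, x, z):
        w = self.weight_gen(z).view(self.out_features, self.in_features)
        b = self.bias_gen(z)
        return x @ w.t() + b

# One trained hypernetwork; varying z explores many "virtual" models.
layer = HyperLinear(z_dim=8, in_features=784, out_features=10)
x = torch.randn(32, 784)   # a batch of flattened MNIST images
z = torch.randn(8)         # code selecting one virtual model
logits = layer(x, z)       # shape (32, 10)
```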

I have promising results for CNNs as well, which I hope to publish soon.
