Doubt about question asked in "Putting It All Together"

One of the questions in this lecture asks about the skill level of the generator and the discriminator, but it does not specify at what point in training this applies, i.e., at the start, during training, or at the end. After all, at the end of training we want the generator to be really good.
Let me know your thoughts on this.

I think the answer, that they should be roughly equal in skill, applies at all stages of training. Of course the real goal is the final result of training, but to reach that goal the relative balance between the two needs to persist throughout the process. They start out randomly initialized, and then learning takes place one step at a time as they both gain skill, learning from each other. Prof Zhou discusses this in the lectures, although it may not come up until after Week 1. I remember her making the point that if the discriminator overpowers the generator, literally outputting 0 on every fake image, then the generator can't learn because the gradients are all zero. Zero gradients provide no information about which direction to move in order to improve and generate a more plausible image.
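To make the zero-gradient point concrete, here is a small numerical sketch (my own illustration, not from the course) of the original minimax generator loss, log(1 − D(G(z))), where the discriminator output D is a sigmoid of a logit. When the discriminator is extremely confident an image is fake (D ≈ 0, i.e., a very negative logit), the gradient of the loss with respect to that logit saturates to nearly zero, so the generator gets essentially no learning signal; when D ≈ 0.5, the gradient is healthy.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def generator_loss(logit):
    # Original (minimax) generator loss on one fake image:
    # log(1 - D(G(z))), with D = sigmoid(logit).
    return np.log(1.0 - sigmoid(logit))

def grad_wrt_logit(logit, eps=1e-6):
    # Central-difference estimate of d(loss)/d(logit).
    return (generator_loss(logit + eps) - generator_loss(logit - eps)) / (2 * eps)

# Overpowering discriminator: D(fake) ~ 0 (very negative logit).
confident = grad_wrt_logit(-10.0)
# Balanced discriminator: D(fake) ~ 0.5 (logit near zero).
balanced = grad_wrt_logit(0.0)

print(f"gradient when D(fake) ~ 0:   {confident:.6f}")  # nearly zero
print(f"gradient when D(fake) ~ 0.5: {balanced:.6f}")   # about -0.5
```

Analytically the gradient is −sigmoid(logit), so the first case gives roughly −0.000045 while the second gives −0.5: the overpowering discriminator starves the generator of signal, which is exactly why the two need to stay roughly matched.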

The other high-level point is that training is not guaranteed to work: the balance between the two can be lost, you get "mode collapse," and training simply fails. Figuring out how to avoid that situation, or deal with it when it happens, is one of the important topics covered as we go through the various courses here. So please "stay tuned" and you'll hear much more on this from Prof Zhou.