Question about StyleGAN

In the description of StyleGAN, particularly when discussing AdaIN, the instructor introduces learned parameters y_s and y_b and states that:
“after each convolution stage there is an AdaIN stage, where normalization removes the input style and keeps only the content, while new learned scaling parameters y_{s,i} and y_{b,i} are introduced at that AdaIN stage”.
My question is: why do we style-shape the outputs at one AdaIN stage only to remove that style during normalization at the next AdaIN stage?
Thank you
DS
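For reference, the AdaIN operation being discussed can be sketched as follows. This is a minimal numpy sketch based on the lecture's description, not the official StyleGAN code; the shapes and the names `y_s`/`y_b` are assumptions.

```python
import numpy as np

def adain(x, y_s, y_b, eps=1e-8):
    """Adaptive instance normalization (illustrative sketch).

    x:   (C, H, W) feature maps from the previous convolution
    y_s: (C,) learned per-channel style scales (from the latent w)
    y_b: (C,) learned per-channel style shifts (from the latent w)
    """
    mu = x.mean(axis=(1, 2), keepdims=True)     # per-channel mean
    sigma = x.std(axis=(1, 2), keepdims=True)   # per-channel std
    x_norm = (x - mu) / (sigma + eps)           # normalization step: strips the incoming scale/shift
    # new style is injected as a per-channel affine transform
    return y_s[:, None, None] * x_norm + y_b[:, None, None]
```

After this step, each channel of the output has mean ≈ `y_b` and standard deviation ≈ `y_s`, which is exactly the sense in which the stage "replaces" the incoming style with a new one.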

If you remember the material from Course 1:

At its core, the discriminator compares real data x with generator output y that mimics x. Each round of this game yields a better result, which the generator again tries to imitate in order to fool the discriminator, producing in turn an even better copy with respect to x. Keeping the previous stage's output would only create confusion rather than improve the quality of the model, so the style from previous AdaIN stages is removed.

This process continues until both the discriminator and the generator have learned enough to produce a good model. The previous style might be removed for two reasons: to reduce memory usage, and to let the discriminator and generator improve based on the present outcome rather than a previous one.

Hi Deepti,
Thank you very much for your reply, but I do not see how it answers the original question.
Again, according to the instructor, instance normalization removes the previous style and keeps the content, and a new style + bias rescaling is added afterwards — only to be removed by the next stage's instance normalization. Why do we do this? Why not add the style/bias only at the final stage, where it is never removed?
Thank you
DS

I think my misunderstanding was assuming that style can be completely removed. Because every convolution is followed by a nonlinear activation, a subsequent renormalization (which is linear in nature) can never fully remove the previous stage's styling; it can only make adjustments that keep training stable and perhaps partially obscure the previous AdaIN stage's styling. So saying that instance normalization "removes the input style" is a bit misleading; maybe it just scales the style down while enhancing the content. :bulb:


Yes, the previous insight seems correct; this is from ChatGPT:
