I am currently working on C1W4B_Controllable_Generation. Now that I know how controllable GANs work, I am having trouble understanding how we update the noise vector using stochastic gradient ascent. I don't really understand how SGA helps maximize the selected features.
Additionally, I'm unclear about how the "class_penalty" is calculated in the Entanglement and Regularization section. How is the mean of the classification score used to compute this penalty, and what implications does it have for maintaining feature balance in the generated outputs? Any insights or explanations would be greatly appreciated!
Hey @vapit
This is a good question, and the answer is not so easy to explain, in my opinion. But let's go.
In the course, you have seen that by iteratively updating the noise vector, you guide the generator to produce outputs that increasingly reflect your desired features.
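To make the iterative update concrete, here is a minimal sketch of one gradient-ascent step on the noise vector. This is not the assignment's exact code: `gen`, `classifier`, `target_idx`, and `step_size` are assumed placeholder names for the trained generator, the pretrained feature classifier, the index of the feature you want to maximize, and the ascent step size.

```python
import torch

def update_noise(noise, gen, classifier, target_idx, step_size=0.1):
    # Track gradients with respect to the noise only; the generator and
    # classifier weights stay frozen.
    noise = noise.detach().requires_grad_(True)
    fake = gen(noise)                                # generate samples from the noise
    score = classifier(fake)[:, target_idx].mean()   # mean score of the target feature
    score.backward()                                 # compute d(score)/d(noise)
    # Ascent step: move the noise IN the direction of the gradient,
    # which raises the classifier's score for the chosen feature.
    return (noise + step_size * noise.grad).detach()
```

The key point is the sign: ordinary training *descends* the loss gradient, while here you *add* the gradient, climbing toward noise vectors whose generated samples score higher on the selected feature.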
So, here is how the calculation works. Remember that the "class_penalty" helps enforce balance in the generated features.
1. The mean of the classification score: when generating samples, you can run them through a classifier to obtain scores for different classes. The class penalty might be computed as:
class_penalty = mean(f(x))
where f(x) is the classification score output for the generated sample x. This mean value reflects how well the generated samples align with the target classes.
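In code, that mean is just an average of the classifier's scores over the batch. A hedged sketch (the function name and signature are my own, not the assignment's):

```python
import torch

def class_penalty(classifier, fake_images, target_idx):
    # scores has shape (batch, n_classes): one raw score per class per sample
    scores = classifier(fake_images)
    # The penalty term is the batch mean of f(x) for the class of interest
    return scores[:, target_idx].mean()
```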
2. The implications: The class penalty is designed to encourage diversity in the generated outputs:
Maintaining the feature balance: if one class dominates the score, the class penalty can discourage that imbalance by imposing a penalty for low diversity. This helps ensure that the generator explores a broader range of feature combinations, rather than overfitting to a single class or feature.
Regularization effect: it acts as a regularizer, preventing the model from getting too confident in one aspect while neglecting others. This balance is crucial for producing high-quality, diverse outputs in controllable GANs.
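One plausible way to wire the two ideas together is to ascend an objective that rewards the target class while penalizing movement in the other (entangled) classes. This is only an illustration of the principle, assuming hypothetical names (`other_idx` for the classes you want held fixed, `weight` for the regularization strength):

```python
import torch

def regularized_objective(scores, target_idx, other_idx, weight=0.1):
    # Reward: mean score of the feature we are trying to maximize
    target = scores[:, target_idx].mean()
    # Penalty: mean absolute score of the non-target classes, so large
    # swings in entangled features reduce the objective
    penalty = scores[:, other_idx].abs().mean()
    # Ascending this keeps the target feature growing while discouraging
    # the generator from drifting on every other feature at once
    return target - weight * penalty
```

The `weight` hyperparameter sets the trade-off: too small and entangled features drift freely; too large and the target feature barely moves.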
So, in summary, this combination allows controllable GANs to effectively manage feature influence and balance in the generated data.