Unsupervised way of Disentanglement

In the last lecture of Week 4, Sharon says that there are supervised and unsupervised ways to achieve disentanglement. I wanted to confirm whether either of the two ways discussed in the lectures is an unsupervised one.

The first way is to use one-hot encoded class vectors, for which we need a labeled dataset; hence, this is clearly an example of supervised learning.

The second way, which is to penalize features other than the target feature, requires a classifier trained on both the target and non-target features. This in turn requires a dataset with labels for all of them, since without the trained classifier we won't get predictions for the non-target features. Hence, once again, this should be an example of supervised learning. (See the sketch below for how I picture this penalty.)
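To make sure I'm understanding the setup correctly, here is a rough PyTorch sketch of the penalty as I picture it. The names `generator`, `classifier`, `direction`, and `target_idx` are my own placeholders, not code from the course:

```python
import torch

def non_target_penalty(generator, classifier, z, direction, target_idx, eps=1.0):
    """Nudge z along a candidate direction for the target feature and
    penalize any change the pretrained classifier detects in the
    non-target features."""
    # Classifier scores before and after moving z along the direction
    preds_before = classifier(generator(z))
    preds_after = classifier(generator(z + eps * direction))
    # Mask out the target feature; keep only the non-target columns
    mask = torch.ones(preds_before.shape[1], dtype=torch.bool)
    mask[target_idx] = False
    # Squared change in the non-target predictions acts as the penalty
    return (preds_after[:, mask] - preds_before[:, mask]).pow(2).mean()
```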

It would help me a lot if someone could validate the above claims. And if both of them are true, could you please provide some references for studying unsupervised methods of disentangling the z-space?

Thanks in advance!

Hey @mentor, can you please answer my post?

Hi @Elemento,

The second example in the video is actually one where you can use unsupervised learning, with no labels. The general idea is to add a regularization term to the loss function that encourages training toward a result where the latent values are more independent of each other (i.e., disentangled).
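For a concrete flavor of what such a regularization term can look like, here is one simple illustrative variant (my own sketch, not the course's exact formulation): penalizing the off-diagonal entries of the batch covariance of the latent codes pushes the dimensions toward being linearly uncorrelated.

```python
import torch

def decorrelation_penalty(features):
    """Penalize off-diagonal entries of the batch covariance matrix so
    that individual latent dimensions become (linearly) uncorrelated."""
    # Center each latent dimension across the batch
    f = features - features.mean(dim=0, keepdim=True)
    # Batch covariance matrix of the latent dimensions, shape (d, d)
    cov = (f.T @ f) / (f.shape[0] - 1)
    # Zero out the diagonal so only cross-dimension correlations remain
    off_diag = cov - torch.diag(torch.diag(cov))
    return off_diag.pow(2).sum()

# Hypothetical usage: add the penalty to whatever loss you already have
# total_loss = task_loss + reg_weight * decorrelation_penalty(latent_codes)
```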

There is a range of techniques researchers are working on, and some can get pretty complex. If you want to dig deeper into this, here's one paper to take a look at:

Thanks a lot, @Wendy, for the explanation :innocent: