GAN generation that is persistent across different perspectives

Hello,

Let’s say we generate a face that we like and want to keep, or a picture of a particular cat, etc. We decide we want to keep that particular face or cat, but generate new images of it, so the images should appear to be from different perspectives while remaining recognizably the same face / cat / etc. Is this possible with current technology, and is there research on creating GANs like that: a model that can generate something and then hold the style fixed while allowing other factors that affect things like posture and scene to vary in newly generated images?

Yes, it is actually possible. For the scenario you describe, you could save the generated images you like (for example, outputs of a StyleGAN) in an array or a folder, then pick a particular image and feed it as input to the next network. In general it helps to have a large dataset of the particular subject, but there are few-shot techniques for working with much smaller datasets. And yes, with current technology it is possible, and large research teams are working on it too.
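To make that concrete, here is a minimal sketch of the "freeze the identity code, re-sample the pose code" idea. The `ToyGenerator` class and the split of the latent into a `z_id` (identity) part and a `z_pose` (pose/scene) part are illustrative assumptions, not a real StyleGAN API; with a real style-based generator you would save the latent code of the image you like and vary only the inputs that control pose (StyleGAN's style mixing works on the same principle):

```python
# Minimal sketch: fix an identity latent, vary only a pose latent.
# ToyGenerator is a stand-in; in practice you would load a pretrained
# generator and reuse its saved latent code.
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Stand-in for a style-based generator: one latent controls
    identity/appearance, a second controls pose/scene."""
    def __init__(self, id_dim=64, pose_dim=16, img_size=32):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(id_dim + pose_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 3 * img_size * img_size),
            nn.Tanh(),
        )

    def forward(self, z_id, z_pose):
        x = self.net(torch.cat([z_id, z_pose], dim=-1))
        return x.view(-1, 3, self.img_size, self.img_size)

G = ToyGenerator()

# 1. Sample identities until you find one you like, then freeze it.
z_id = torch.randn(1, 64)        # the "kept" face/cat: saved and reused

# 2. Re-sample only the pose/scene code for each new image.
images = []
for _ in range(8):
    z_pose = torch.randn(1, 16)  # new viewpoint / posture / scene
    with torch.no_grad():
        images.append(G(z_id, z_pose))

# All 8 images share z_id, so a well-disentangled generator renders
# the same subject under different poses.
batch = torch.cat(images)        # shape: (8, 3, 32, 32)
print(batch.shape)
```

The key design point is that identity stays pinned because the same `z_id` is reused across all generations; only the pose code is re-sampled.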
Do refer to this wonderful paper: [1905.08233] Few-Shot Adversarial Learning of Realistic Neural Talking Head Models (https://arxiv.org/abs/1905.08233).
It’s just one of the papers that can help with your requirement; there are a few more out there as well. Do check it out!
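If you want a feel for the few-shot adaptation step, here is a simplified sketch in the spirit of that paper: start from a pretrained generator and fine-tune it on a handful of photos of the chosen subject. It reuses the hypothetical `ToyGenerator` from the sketch above and keeps only a reconstruction loss; the actual paper conditions on facial landmarks and adds adversarial and perceptual terms:

```python
# Hedged sketch of few-shot adaptation: fine-tune a generator on a few
# reference photos so it renders this specific subject. Assumes the
# ToyGenerator class from the previous sketch; in practice G would be a
# pretrained generator for the domain.
import torch
import torch.nn.functional as F

G = ToyGenerator()
few_shot_images = torch.rand(4, 3, 32, 32) * 2 - 1  # placeholder: 4 photos of the subject
z_id = torch.randn(1, 64)                           # identity code to adapt toward

# Learn one pose code per reference photo jointly with the generator
# weights, so each fake is compared against a matching target view.
pose_codes = torch.randn(4, 16, requires_grad=True)
opt = torch.optim.Adam(list(G.parameters()) + [pose_codes], lr=1e-3)

for step in range(200):
    fakes = G(z_id.expand(4, -1), pose_codes)
    # Reconstruction-only loss for brevity; the paper additionally uses
    # adversarial and perceptual losses plus landmark conditioning.
    loss = F.l1_loss(fakes, few_shot_images)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Afterwards, keep z_id fixed and sample fresh pose codes to render new
# views of the same subject, as in the previous sketch.
```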
Happy Learning!
Regards,
Nithin