These images come from a pre-trained Generator (“pretrained_celeba.pth”). How long/well was it trained, and is that quality level assumed to be sufficient for the tasks we are doing? In the Perceptual Path Length (PPL) notebook we are supposed to be lerping between two random latent codes in W space. Do the images look so blotchy because these randomly sampled w vectors happen to have low fidelity, or is it that the pre-trained Generator was loaded from an early snapshot of its training process?
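
For reference, this is roughly the lerp step I mean: a minimal sketch, not the notebook's actual code. The tensor shapes, `eps`, and the commented-out `G.mapping`/`G.synthesis` calls are assumptions standing in for whatever the notebook really uses.

```python
import torch

def lerp(w0: torch.Tensor, w1: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Linearly interpolate between two W-space latent codes."""
    return w0 + t * (w1 - w0)

# Two random latent codes standing in for mapping-network outputs
# (in the notebook these would presumably come from something like
# G.mapping(z); the name and the 512-dim shape are hypothetical here).
w0 = torch.randn(1, 512)
w1 = torch.randn(1, 512)

# PPL picks a random interpolation point t, nudges it by a small eps,
# and compares the images generated at the two nearby w codes.
eps = 1e-4
t = torch.rand(1, 1)
w_a = lerp(w0, w1, t)
w_b = lerp(w0, w1, t + eps)
# img_a, img_b = G.synthesis(w_a), G.synthesis(w_b)  # hypothetical call;
# PPL would then be a perceptual (e.g. LPIPS) distance divided by eps**2.
```

If the blotchiness shows up even at the endpoints (t = 0 or t = 1), that would point at the checkpoint itself rather than the interpolation.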