Pix2Pix not producing desired output after training!

After completing the assignment (passing with a 100% grade), I wanted to test the performance of the model. Below is the code I am using to check the generated images:


# UNet and show_tensor_images are defined in the assignment notebook
import torch
import torch.nn as nn
import torchvision
from torchvision import transforms
from torch.utils.data import DataLoader
from tqdm.auto import tqdm

device = "cuda" if torch.cuda.is_available() else "cpu"

input_dim = 3       # Number of input channels
real_dim = 3        # Number of output channels
target_shape = 256  # Target shape for the image

gen = UNet(input_dim, real_dim).to(device)
loaded_state = torch.load("pix2pix_15000.pth")  # Adjust the path to your saved model
gen.load_state_dict(loaded_state["gen"])
gen.eval()  # Set the model to evaluation mode

transform = transforms.Compose([
    transforms.ToTensor(),
])

dataset = torchvision.datasets.ImageFolder("maps", transform=transform)
dataloader = DataLoader(dataset, batch_size=4, shuffle=True)

for image, _ in tqdm(dataloader):
    image_width = image.shape[3]
    condition = image[:, :, :, :image_width // 2]  # left half of each paired image
    condition = nn.functional.interpolate(condition, size=target_shape)
    condition = condition.to(device)
    with torch.no_grad():
        fake = gen(condition)
    show_tensor_images(fake, size=(real_dim, target_shape, target_shape))


However, I only see blank/white images as output. I am using the pretrained model (pix2pix_15000.pth) provided in the notebook. During training, however, I could see the map images being generated.

Kindly suggest how I can evaluate the GAN!

Lab ID: nmlncolxssuo

Hi @Jatin_preet_singh,
Did you figure this out? My guess is that the problem is the call to gen.eval(), which switches the BatchNorm layers to use their running statistics instead of the per-batch statistics the UNet was trained to rely on. Try it without gen.eval() and see what happens. You shouldn't really need it anyway.
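
For reference, the evaluation loop without gen.eval() would look something like this (an untested sketch, reusing the same names as in your snippet; gen, dataloader, device, target_shape, real_dim, and show_tensor_images are the objects from the notebook):

# Same loop as above, but without gen.eval(), so BatchNorm keeps using
# per-batch statistics the way it did during training
for image, _ in dataloader:
    condition = image[:, :, :, :image.shape[3] // 2]  # left half is the input map
    condition = nn.functional.interpolate(condition, size=target_shape).to(device)
    with torch.no_grad():  # still no gradients needed for inference
        fake = gen(condition)
    show_tensor_images(fake, size=(real_dim, target_shape, target_shape))
    break  # one batch is enough for a quick visual check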

Also, if you want to run this right after training the generator, you should be able to just use the same gen you just trained, and skip creating a new one and loading the saved state (which is the state from before your training).
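
For example, at the end of your training run (same session, same variable names; just a sketch):

# gen here is the generator you just finished training; no re-instantiation
# and no torch.load(...) needed, since loading would overwrite the trained weights
with torch.no_grad():
    fake = gen(condition)  # condition: any conditioning batch, e.g. from the loop above
show_tensor_images(fake, size=(real_dim, target_shape, target_shape))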

Thanks Wendy.
It worked!
Yup, makes sense. No need to load it again.
Again, really appreciate your help.
