for _ in range(crit_repeats):
    ### Update critic ###
    crit_opt.zero_grad()
    # Sample latent vectors and generate a batch of fake images
    fake_noise = get_noise(cur_batch_size, z_dim, device=device)
    fake = gen(fake_noise)
    # Score fakes (detached so no gradients flow back into the generator) and reals
    crit_fake_pred = crit(fake.detach())
    crit_real_pred = crit(real)
    # One interpolation weight per image; requires_grad=True so autograd can
    # differentiate the critic's output w.r.t. the interpolated images
    epsilon = torch.rand(len(real), 1, 1, 1, device=device, requires_grad=True)
    gradient = get_gradient(crit, real, fake.detach(), epsilon)
    gp = gradient_penalty(gradient)
    crit_loss = get_crit_loss(crit_fake_pred, crit_real_pred, gp, c_lambda)
    # Backpropagate the critic loss and take an optimizer step
    crit_loss.backward(retain_graph=True)
    crit_opt.step()
In the code above, a fresh epsilon is sampled on every critic iteration. But why is it created with requires_grad=True? Because the gradient penalty needs the gradient of the critic's output on the interpolated images with respect to those images:

mixed_images = real * epsilon + fake * (1 - epsilon)

Since epsilon requires gradients, mixed_images becomes part of the autograd graph, which lets autograd compute the gradient of crit(mixed_images) w.r.t. mixed_images.