Great class, really well done!
I am confused about one thing, though.
In C1W1 we used nn.BatchNorm1d, and we did not need to initialize it or any of the nn.Linear() layers explicitly.
My guess is that nn.Linear() must randomly initialize itself, although I could not find this spelled out in the documentation (Linear — PyTorch master documentation).
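A quick sanity check (my own snippet, not from the course) seems to confirm this guess — a freshly constructed nn.Linear already has random, non-zero weights before any explicit init call:

import torch
import torch.nn as nn

torch.manual_seed(0)
lin = nn.Linear(128, 64)
# The weights are already random; as far as I can tell, recent PyTorch
# versions apply a Kaiming-uniform scheme inside reset_parameters().
print(lin.weight.mean().item(), lin.weight.std().item())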
Why do we use initialization in DCGAN?
# You initialize the weights to the normal distribution
# with mean 0 and standard deviation 0.02
def weights_init(m):
    if isinstance(m, nn.Conv2d) or isinstance(m, nn.ConvTranspose2d):
        torch.nn.init.normal_(m.weight, 0.0, 0.02)
    if isinstance(m, nn.BatchNorm2d):
        torch.nn.init.normal_(m.weight, 0.0, 0.02)
        torch.nn.init.constant_(m.bias, 0)

gen = gen.apply(weights_init)
disc = disc.apply(weights_init)
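For what it's worth, here is a small check of my own (hypothetical, not part of the assignment) showing that apply() really does overwrite the default initialization:

conv = nn.Conv2d(3, 8, 3)
print(conv.weight.std().item())  # PyTorch's default init, typically around 0.1
conv.apply(weights_init)         # recursively applies weights_init to the module
print(conv.weight.std().item())  # now close to 0.02, as the DCGAN code intends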
Kind regards,
Andy