UNQ_C5 (unique cell identifier)

To get test row 8 (

test_x = test_stylegan_block.conv(test_x)

) to work, I needed to set:

self.inject_noise = InjectNoise(3000)

which is a bit strange, since it hardcodes the value. But now the next test does not work. See below:

RuntimeError                              Traceback (most recent call last)
<ipython-input-...> in <module>
      8 test_x = test_stylegan_block.conv(test_x)
      9 assert tuple(test_x.shape) == (1, 64, 8, 8)
---> 10 test_x = test_stylegan_block.inject_noise(test_x)
     11 test_x = test_stylegan_block.activation(test_x)
     12 assert test_x.min() < 0

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

<ipython-input-...> in forward(self, image)
     36 
     37 noise = torch.randn(noise_shape, device=image.device) # Creates the random noise
---> 38 return image + self.weight * noise # Applies to image after multiplying by the weight for each channel
     39 
     40 #UNIT TEST COMMENT: Required for grading

RuntimeError: The size of tensor a (8) must match the size of tensor b (10) at non-singleton dimension 3
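(For context, the error itself is just PyTorch's usual broadcasting failure. Here is a tiny standalone sketch, with made-up shapes rather than the course code, that raises the same message:

import torch

a = torch.randn(1, 64, 8, 8)   # image-like tensor
b = torch.randn(1, 64, 1, 10)  # last dimension 10 cannot broadcast against 8
try:
    a + b
except RuntimeError as e:
    print(e)  # The size of tensor a (8) must match the size of tensor b (10) at non-singleton dimension 3

)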

My values for the task are:

        if self.use_upsample:
            self.upsample = nn.Upsample((starting_size, starting_size), mode='bilinear')
        self.conv = nn.Conv2d(in_chan, out_chan, kernel_size, padding=int(out_chan/64)) # Padding is used to maintain the image size
        self.inject_noise = InjectNoise(3000)

What is causing this?

Hi Stefan!

Why do you want to hardcode the channel parameter here? When you pass 3000, InjectNoise's constructor initializes its channels parameter to 3000, so it will only work when channels=3000. We don't want that; it should be generalized. Kindly look at the parameters passed into the constructor of the MicroStyleGANGeneratorBlock class; it has the required parameter to pass to InjectNoise's constructor.

Hope you get the point; if not, feel free to post your queries.

Regards,
Nithin

Thank you, Nithin_Skantha_M (GANS Mentor)!

I know, of course, that it is bad to hardcode values. But I can't find any parameter that seems reasonable to pass, either in InjectNoise or MicroStyleGANGeneratorBlock.
Here are some of them, and none of them seems to be the correct one:
starting_size: 8
in_chan: 128
out_chan: 64
kernel_size: 3

For sure, one of these 4 parameters you stated is the answer (the parameter that you have to pass to InjectNoise's constructor).
Try to find the meaning of all the parameters (it is explained in the cell) and you will get it. Otherwise, if you still can't grasp it, take a quick recap of the lectures; it will help you get more conceptual clarity.

Regards,
Nithin


I have also tried test_noise_channels, which has the value 3000, but the error is still the same. I think that is strange.

self.inject_noise = InjectNoise(test_noise_channels)

I have also tried other variables.

No, please don’t try random variables which are not passed to the constructor.

Look at the highlighted parameters; it has got to be one of these, and there is nothing strange about it. Also, take a look at the description so that you get an idea of what these parameters refer to.
Noise injection boils down to image + self.weight * noise (take a look at the InjectNoise class). When you create an object InjectNoise(x), then by the class definition self.weight will have x channels. To be added to the image, it has to have the same number of channels as the image, so x has got to be the number of channels in the previous conv layer's output.
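To make that concrete, here is a minimal sketch of an InjectNoise-style module (the name and details are illustrative; the course implementation may differ slightly). Note that the channel count given to the constructor has to match the channel dimension of the image passed to forward:

import torch
from torch import nn

class InjectNoiseSketch(nn.Module):
    # Minimal sketch: one learned scale per channel, broadcast over (N, C, H, W)
    def __init__(self, channels):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(1, channels, 1, 1))

    def forward(self, image):
        # One noise value per pixel, shared across channels
        noise_shape = (image.shape[0], 1, image.shape[2], image.shape[3])
        noise = torch.randn(noise_shape, device=image.device)
        return image + self.weight * noise  # requires channels == image.shape[1]

x = torch.randn(1, 64, 8, 8)           # e.g. the output of a conv with out_chan=64
print(InjectNoiseSketch(64)(x).shape)  # torch.Size([1, 64, 8, 8])

With InjectNoiseSketch(3000) the addition would fail, because self.weight would carry 3000 channels while the image only has 64.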

Hope you get the idea.

Regards,
Nithin


I guess the error was related to a previous block.
Now I have:

        if self.use_upsample:
            self.upsample = nn.Upsample(size=(starting_size, starting_size), mode='bilinear')
        self.conv = nn.Conv2d(in_chan, out_chan, kernel_size, padding=1) # Padding is used to maintain the image size
        print('starting_size: ', starting_size)
        print('in_chan: ', in_chan)
        print('out_chan: ', out_chan)
        print('w_dim: ', w_dim)
        print('kernel_size: ', kernel_size)
        print('QQQ ', type(inject_noise))

        self.inject_noise = InjectNoise(out_chan)
        self.adain = AdaIN(out_chan, w_dim)
        self.activation = nn.LeakyReLU(0.2)

and the error becomes:

RuntimeError                              Traceback (most recent call last)
<ipython-input-69-4185a1a0f935> in <module>
     15 print('test_x: ', test_x.shape)
     16 print('test_w: ', test_w.shape)
---> 17 test_x = test_stylegan_block.adain(test_x, test_w)
     18 
     19 foo = test_stylegan_block(torch.ones(10, 128, 4, 4), torch.ones(10, 256))

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

<ipython-input-54-ef121e96d8a1> in forward(self, image, w)
     34         '''
     35         normalized_image = self.instance_norm(image)
---> 36         style_scale = self.style_scale_transform(w)[:, :, None, None]
     37         style_shift = self.style_shift_transform(w)[:, :, None, None]
     38 

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/linear.py in forward(self, input)
     85 
     86     def forward(self, input):
---> 87         return F.linear(input, self.weight, self.bias)
     88 
     89     def extra_repr(self):

/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py in linear(input, weight, bias)
   1608     if input.dim() == 2 and bias is not None:
   1609         # fused op is marginally faster
-> 1610         ret = torch.addmm(bias, input, weight.t())
   1611     else:
   1612         output = input.matmul(weight.t())

RuntimeError: size mismatch, m1: [1 x 256], m2: [3 x 2] at /pytorch/aten/src/TH/generic/THTensorMath.cpp:41

The sizes of test_w and test_x are:
test_x: torch.Size([20, 64, 8, 8])
test_w: torch.Size([1, 256])
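
(For reference, a quick shape check on what those sizes imply, assuming style_scale_transform inside AdaIN is a plain nn.Linear as the traceback suggests: with w_dim=256 and 64 channels it has to map 256 features to 64, whereas the reported m2: [3 x 2] corresponds to a layer built with in_features=3 and out_features=2.

import torch
from torch import nn

w_dim, channels = 256, 64        # from the test: test_w is [1, 256], test_x is [20, 64, 8, 8]
w = torch.randn(1, w_dim)

style_scale = nn.Linear(w_dim, channels)       # maps 256 -> 64, one scale per channel
print(style_scale(w)[:, :, None, None].shape)  # torch.Size([1, 64, 1, 1])

bad = nn.Linear(3, 2)            # weight.t() has shape [3, 2], matching "m2: [3 x 2]"
try:
    bad(w)
except RuntimeError as e:
    print(e)                     # size mismatch between the [1, 256] input and the [3, 2] weight

)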

I thought this was the correct setting: AdaIN(out_chan, w_dim)
Do you have any suggestion about this?