C1W4: "This will error if you didn't concatenate your labels to your image correctly"

I am having some trouble seeing what I need to do in #UNQ_C4.

My output (including some printout checks that I did) is:

fake_image_and_labels shape before disc() torch.Size([128, 11, 28, 28])
fake_image_and_labels dtype before disc() torch.float64
real_image_and_labels shape before disc() torch.Size([128, 11, 28, 28])
real_image_and_labels dtype before disc() torch.float64
fake_image_and_labels shape in Update generator before disc() torch.Size([128, 11, 28, 28])
fake_image_and_labels dtype in Update generator before disc() torch.float64

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Input In [13], in <cell line: 16>()
     95 print("fake_image_and_labels dtype in Update generator before disc()", fake_image_and_labels.dtype)
     96 # This will error if you didn't concatenate your labels to your image correctly
---> 97 disc_fake_pred = disc(fake_image_and_labels)
     98 gen_loss = criterion(disc_fake_pred, torch.ones_like(disc_fake_pred))
     99 gen_loss.backward()

File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs)
   1190 # If we don't have any hooks, we want to skip the rest of the logic in
   1191 # this function, and just call forward.
   1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1193         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194     return forward_call(*input, **kwargs)
   1195 # Do not call functions when jit is used
   1196 full_backward_hooks, non_full_backward_hooks = [], []

Input In [3], in Discriminator.forward(self, image)
     40 def forward(self, image):
     41     '''
     42     Function for completing a forward pass of the discriminator: Given an image tensor, 
     43     returns a 1-dimension tensor representing fake/real.
     44     Parameters:
     45         image: a flattened image tensor with dimension (im_chan)
     46     '''
---> 47     disc_pred = self.disc(image)
     48     return disc_pred.view(len(disc_pred), -1)

File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs)
   1190 # If we don't have any hooks, we want to skip the rest of the logic in
   1191 # this function, and just call forward.
   1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1193         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194     return forward_call(*input, **kwargs)
   1195 # Do not call functions when jit is used
   1196 full_backward_hooks, non_full_backward_hooks = [], []

File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/container.py:204, in Sequential.forward(self, input)
    202 def forward(self, input):
    203     for module in self:
--> 204         input = module(input)
    205     return input

File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs)
   1190 # If we don't have any hooks, we want to skip the rest of the logic in
   1191 # this function, and just call forward.
   1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1193         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194     return forward_call(*input, **kwargs)
   1195 # Do not call functions when jit is used
   1196 full_backward_hooks, non_full_backward_hooks = [], []

File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/container.py:204, in Sequential.forward(self, input)
    202 def forward(self, input):
    203     for module in self:
--> 204         input = module(input)
    205     return input

File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs)
   1190 # If we don't have any hooks, we want to skip the rest of the logic in
   1191 # this function, and just call forward.
   1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1193         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194     return forward_call(*input, **kwargs)
   1195 # Do not call functions when jit is used
   1196 full_backward_hooks, non_full_backward_hooks = [], []

File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py:463, in Conv2d.forward(self, input)
    462 def forward(self, input: Tensor) -> Tensor:
--> 463     return self._conv_forward(input, self.weight, self.bias)

File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py:459, in Conv2d._conv_forward(self, input, weight, bias)
    455 if self.padding_mode != 'zeros':
    456     return F.conv2d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),
    457                     weight, bias, self.stride,
    458                     _pair(0), self.dilation, self.groups)
--> 459 return F.conv2d(input, weight, bias, self.stride,
    460                 self.padding, self.dilation, self.groups)

RuntimeError: Input type (double) and bias type (float) should be the same

I could use a clue. Or maybe posting this will be the magic incantation that makes me see what I’m supposed to do. :laughing:

Hi @logos_masters

This error indicates that your model’s input tensor has dtype double (float64), while the bias (and weights) inside the model are float (float32). Try using the same data type for both, as instructed in the notebook.
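
For a concrete illustration, here is a minimal standalone snippet (not the assignment code) that reproduces the same RuntimeError with a default nn.Conv2d and shows how casting the input back to float32 resolves it:

```python
import torch
from torch import nn

# Conv2d parameters default to float32, so a float64 (double) input
# triggers "Input type (double) and bias type (float) should be the same".
conv = nn.Conv2d(11, 16, kernel_size=3)

bad_input = torch.randn(128, 11, 28, 28).double()  # float64 input
try:
    conv(bad_input)
except RuntimeError as e:
    print(e)

good_input = bad_input.float()  # cast back to float32
out = conv(good_input)          # now the forward pass works
print(out.shape)
```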

Hope it helps!

@logos_masters, the size of your fake_image_and_labels looks fine, but it’s suspicious that the dtype is float64. In my implementation, the type is float32 (the default float type). I suspect there’s something in your implementation of combine_vectors() that is throwing things off. Try going back and looking at that function. Check out the hints for the function - you should be able to implement it in just one line. Also make sure to use .float() for the conversion, rather than casting to float64.
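
As a rough sketch of the idea (the exact signature and dimension are per the notebook hints, so treat this as an assumption, not the official solution): concatenate the two tensors along dimension 1 and cast with .float() so everything stays float32:

```python
import torch

def combine_vectors(x, y):
    # Sketch only: concatenate along dim 1 (channels/features),
    # casting with .float() so the result is float32, not float64.
    return torch.cat((x.float(), y.float()), dim=1)

# e.g. stacking 10 one-hot label channels onto a 1-channel image batch
images = torch.randn(128, 1, 28, 28)
label_channels = torch.zeros(128, 10, 28, 28)
combined = combine_vectors(images, label_channels)
print(combined.shape, combined.dtype)  # torch.Size([128, 11, 28, 28]) torch.float32
```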


Yes, that was exactly it, thanks. I had hit the same double-float mismatch when trying to make the combine function work, and I "solved" it by casting to float64. After going back and changing it to plain old .float(), it works fine now.
