C1W4A UNQ_C4 error with data types

I am getting an error when trying to call the generator to create the fake image:

RuntimeError: Input type (torch.cuda.DoubleTensor) and weight type (torch.cuda.FloatTensor) should be the same

I created noise_and_labels using my combine_vectors() function, which passed its unit test, including the check that the result is a float type. I am also printing the shape and data type:

    print(noise_and_labels.shape, type(noise_and_labels[0][0].item()))

before calling the generator with this as input, and it shows:

torch.Size([128, 74]) <class 'float'>

I am confused: where is it getting the DoubleTensor data type from?


Hi, @subha.

I can confirm that print(noise_and_labels.shape, type(noise_and_labels[0][0].item())) returns torch.Size([128, 74]) <class 'float'> for me as well. I suspect there's something wrong in the Generator class code. Please allow me some time to try to reproduce your error. I will get back to you asap.

Hi, @subha.

Could you confirm that you're casting the tensors to float in combine_vectors as x.type(torch.float)? Could you also post the full error call stack for better understanding? I'm unable to reproduce the error on my end. Many thanks.

I've reproduced your error by casting noise_and_labels to torch.cuda.DoubleTensor:

    fake = gen(noise_and_labels.type(torch.cuda.DoubleTensor))

As a workaround, I suggest casting noise_and_labels to torch.cuda.FloatTensor:

    fake = gen(noise_and_labels.type(torch.cuda.FloatTensor))

Many thanks.
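A quick way to diagnose this kind of mismatch is to compare the dtype of the generator's weights with the dtype of the input right before the call. This is just a diagnostic sketch using the notebook's gen and noise_and_labels variables, not part of the assignment code:

    print(next(gen.parameters()).dtype)  # typically torch.float32 (FloatTensor weights)
    print(noise_and_labels.dtype)        # must also be torch.float32, otherwise the forward pass raises this error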


@Dmitriy_Khvan thank you so much for the hint! I was doing this in combine_vectors:

    combined = torch.concat([x, y], dim=1).to(float)

After changing the type to torch.float, it works!
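That explains the DoubleTensor: Python's built-in float corresponds to torch.float64 (double precision), while the generator's weights are float32. A quick standalone check, separate from the assignment code:

    import torch

    x = torch.randn(2, 3)
    print(x.to(float).dtype)          # torch.float64 -> DoubleTensor on CUDA
    print(x.type(torch.float).dtype)  # torch.float32 -> FloatTensor, matching the generator's weights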

Despite passing the tests, I got a grade of 0 upon submission. After some additional attempts based on the error messages, I realized that
(1) the grader uses a PyTorch version that does not have torch.concat, so I had to switch to torch.cat instead, and
(2) in the grader's tests the combine function can receive inputs of different types, so the inputs must be converted before concatenation, not the combined output.
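For anyone hitting the same grader errors, this is roughly what the function ends up looking like. It's a minimal sketch of the idea rather than the official solution, and .float() here is just shorthand for .type(torch.float):

    import torch

    def combine_vectors(x, y):
        # Cast each input to float32 before concatenating, since the grader
        # may pass tensors with different dtypes; torch.cat exists in older
        # PyTorch versions, unlike torch.concat.
        return torch.cat((x.float(), y.float()), dim=1)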

@subha good observations, thanks for sharing. So did you manage to complete the assignment?

Yes, I worked around the issues and completed the assignment. Thanks again for your guidance.

I too ran into this problem. Even though my combine_vectors passed the tests with

    torch.tensor(torch.cat((x, y), dim=1), dtype=torch.float64)

I had to cast the output as suggested above, using

    gen(noise_and_labels.type(torch.cuda.FloatTensor))
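For what it's worth, torch.float64 is PyTorch's double-precision type, so on the GPU those tensors are exactly the torch.cuda.DoubleTensor the error message complains about, which is why the cast was still needed. A quick standalone check, separate from the assignment:

    import torch

    t = torch.zeros(1, dtype=torch.float64)
    print(t.type())  # 'torch.DoubleTensor' -- float64 is the double type
    if torch.cuda.is_available():
        print(t.cuda().type())  # 'torch.cuda.DoubleTensor', the type in the original error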