C1W4 Build a Conditional GAN: def combine_vectors(x, y):

For some reason I can’t figure this out.

def combine_vectors(x, y):

If I combine them without casting, I silently fail the assertion that tests that they’re floats.

If I cast them as floats, I silently fail an unmentioned assertion that tests whether they’re on cuda.

If I push them to cuda, I get the runtime error "expected device cuda:0 but got device cpu", even though the tensor I'm returning prints as

tensor([[1., 2., 5., 6.],
        [3., 4., 7., 8.]], device='cuda:0')

EDIT:

I just figured it out. Why did none of the below work?

torch.cat((x.type('torch.FloatTensor'), y.type('torch.FloatTensor'), 1)
torch.cat((x, y), 1).type('torch.FloatTensor')
torch.cat((x, y), 1).float()

I tried several variations that seemed correct, but the grader didn’t like any of them.

@Alexander_Valarus,
It looks like there's a missing right paren in your first example, but I'm assuming that's just a typo. In any case, the test cases in the cell after the combine_vectors() cell should give you a good idea of the issues with these three options. As a general rule, pay careful attention to which line fails in the stack trace, and also to the comment above the test, which should describe what the test is checking for.

For example, with the three lines you posted:

You should see that the first two fail the test with the comment:
# Check that it doesn't break with cuda
The arrow in the stack trace should point to the assert line that checks whether the device starts with cuda.
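
For illustration, that check boils down to something like this (a hypothetical sketch, not the grader's exact code):

import torch

if torch.cuda.is_available():
    # Concatenate two cuda tensors and confirm the result stays on cuda
    x = torch.tensor([[1, 2], [3, 4]], device='cuda')
    y = torch.tensor([[5, 6], [7, 8]], device='cuda')
    combined = combine_vectors(x, y)
    assert str(combined.device).startswith('cuda')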

The device property can be a bit of a pain in PyTorch, tbh. PyTorch's default when creating a new tensor is to put it on device = cpu, which is what happens if you use .type(torch.FloatTensor). Fortunately, you can generally steer clear of device issues by using the functions that preserve the device, such as ones_like(), zeros_like(), and .float(). (.float() qualifies because it is equivalent to .to(torch.float32), which preserves the device. You can check this for yourself by looking at the float() documentation and then following its link to .to(): torch.Tensor.float — PyTorch 1.13 documentation.)
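
As a quick sanity check of that difference (a sketch that assumes a CUDA-capable machine):

import torch

if torch.cuda.is_available():
    t = torch.ones(2, 2, device='cuda')
    print(t.type(torch.FloatTensor).device)  # cpu: .type() converts to a CPU tensor class
    print(t.float().device)                  # cuda:0: .float() preserves the device
    print(torch.ones_like(t).device)         # cuda:0: *_like() preserves the device too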

For your 3rd example, torch.cat((x, y), 1).float(), you should see an error in the test case with the comment:
# Check that the float transformation doesn't happen after the inputs are concatenated
In other words, the test case is specifically checking to make sure you don't do this. The reason is that torch.cat() expects the tensors to have the same dtype, which matters here since one of our vectors comes from one-hot-encoded values, which could easily be integers.
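
Putting it together, the pattern the tests are steering you toward is to cast each input before concatenating. A minimal sketch (assuming the assignment's combine_vectors signature, and not necessarily the only accepted solution):

import torch

def combine_vectors(x, y):
    # Cast each input to float first, then concatenate along dim 1.
    # .float() preserves the device, so cuda inputs stay on cuda,
    # and both tensors reach torch.cat() with the same dtype.
    return torch.cat((x.float(), y.float()), 1)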