@Alexander_Valarus,
It looks like there’s a missing right paren in your first example, but I’m assuming that’s just a typo. In any case, the test cases in the cell after the combine_vectors() cell should give you a good idea of the issues with these three options. As a general rule, pay careful attention to which line fails in the stack trace, and also to the comment above the test, which should describe what the test is checking for.
For example, with the three lines you posted:
You should see that the first two fail the test with the comment:
# Check that it doesn't break with cuda
The arrow in the stack trace should point to the assert line that checks whether the device starts with cuda.
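If you want to see that failure in isolation, here’s a minimal sketch (not the assignment’s actual test, just the same kind of check, with made-up shapes; it only does anything on a machine with a GPU):

```python
import torch

if torch.cuda.is_available():
    x = torch.randn(2, 10, device="cuda")
    out = x.type(torch.FloatTensor)              # comes back on the cpu
    print(out.device)                            # cpu
    print(str(out.device).startswith("cuda"))    # False -> this is what the assert trips on
```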
The device property can be a bit of a pain in PyTorch, tbh. PyTorch’s default when creating a new tensor is to put it on device='cpu', which is what happens if you use .type(torch.FloatTensor). Fortunately, you can generally steer clear of worrying too much about device issues by using the functions that preserve the device, such as ones_like(), zeros_like(), and .float(). (.float(), because it is equivalent to .to(torch.float32), which preserves the device. You can check this for yourself by looking at the float() documentation and then following the link to .to(): torch.Tensor.float — PyTorch 1.13 documentation.)
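A quick sketch to convince yourself (the shapes and the integer tensor are made up for illustration):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randint(0, 2, (2, 10), device=device)   # an integer tensor, like one-hot values

print(torch.ones_like(x).device)            # same device as x
print(torch.zeros_like(x).device)           # same device as x
print(x.float().device, x.float().dtype)    # device preserved, dtype becomes float32
print(x.type(torch.FloatTensor).device)     # always cpu, whatever x's device was
```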
For your 3rd example, torch.cat((x, y), 1).float(), you should see an error in the test case with the comment:
# Check that the float transformation doesn't happen after the inputs are concatenated
In other words, the test case is specifically checking to make sure you don’t do this. The reason is that torch.cat() expects the tensors to be of the same type. This matters here, since one of our vectors comes from one-hot-encoded values, which could easily be integers.
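To make that concrete, here’s a small sketch (the names, shapes, and one-hot construction are invented for illustration, not the assignment’s actual inputs):

```python
import torch
import torch.nn.functional as F

noise = torch.randn(4, 8)                                        # float32
one_hot = F.one_hot(torch.tensor([0, 1, 2, 3]), num_classes=5)   # int64

# Cast each input *before* concatenating so both tensors share a dtype
# (.float() also keeps whatever device the inputs are already on):
combined = torch.cat((noise.float(), one_hot.float()), dim=1)
print(combined.dtype, combined.shape)   # torch.float32 torch.Size([4, 13])
```

Casting after the cat is exactly the pattern the test is trying to catch, so keep the .float() on the individual inputs.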