Code Cell UNQ_C1: Unexpected error (IndexError)

I am getting the following feedback from the grader “Code Cell UNQ_C1: Unexpected error (IndexError(‘The shape of the mask [0] at index 0 does not match the shape of the indexed tensor [9999, 1] at index 0’,)) occurred during function check…”
The check in the notebook runs successfully. I also tried combining two tensors of shape (samples, a, b) (with a, b integers), (samples, a, b, c), and even (samples, a, b, c, d). All of these work without error.

Do you have any hints for me how to debug my function?

Just as a matter of general principle: if you pass the tests in the notebook, but fail the grader, the first thing to look for is some type of “hard-coding” or referencing global variables instead of the parameters actually passed into your function. The point is your code is somehow not “general” and the grader’s test case or the grader’s environment is sufficiently different that it causes your code to fail in that context.

I realize that may be too general a statement to be really useful in this case. I don’t normally answer questions about GANs, but I was an alpha tester for the courses. If you can remind me the name of this exercise (C3 W1), I can go back and take a more detailed look at the functions in question.

I went back and refreshed my memory on this assignment. This is Data Augmentation, right? And you’re dealing with the combine_sample function. My guess is that you are making this way more complicated than it needs to be. Did you check the “Optional Hints” they offer? Also if you check the test cases in the following cell, you’ll notice that they use examples in which the input tensors have different numbers of dimensions in the different test cases, but the first dimension is always the “samples” dimension and that’s the only one you need to worry about.

The idea is to create a boolean “mask” tensor whose length matches the samples dimension of the input tensors. In PyTorch, the “len()” function gives you the number of elements in the first dimension. Then, if you index a tensor with a 1-dimensional boolean mask, you’ll be indexing it along the first dimension (dimension 0), which means you don’t have to worry about how many additional dimensions there are.
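Here’s a tiny illustration with a made-up tensor (not the assignment’s data), just to show what indexing with a 1D boolean mask does:

# Play cell: indexing with a 1D boolean mask selects along dimension 0
import torch
x = torch.arange(24, dtype=torch.float32).reshape(4, 2, 3)   # 4 "samples", each of shape (2, 3)
mask = torch.tensor([True, False, True, False])               # one entry per sample
print(f"x[mask].shape {x[mask].shape}")   # torch.Size([2, 2, 3]): samples 0 and 2 selected
x[mask] = 0.                              # assignment through the mask also works per sample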

The len() function doesn’t exist for tensors, as far as I know. I do the following:

  1. Generate a copy of one of the input tensors, call it “first”.
  2. Compute the length of the first dimension via first.size()[0].
  3. Compute a boolean tensor of that length (True/False entries) via replace = (torch.rand…).
  4. Replace elements in the copy from step one by indexing with that boolean tensor: copy[replace] = …

The tests always turn out fine.

Interesting. That all sounds correct to me. Well, I didn’t have a problem with len(), but your formulation is also perfectly correct. Here’s a little test cell I added to my notebook:

# Play cell
foo = torch.ones([3,4,5,6])
print(f"foo.shape {foo.shape}")
print(f"len(foo) {len(foo)}")
print(f"foo.size()[0] {foo.size()[0]}")
bar = foo[0]
print(f"bar.shape {bar.shape}")
print(f"len(bar) {len(bar)}")

Running that gives this result:

foo.shape torch.Size([3, 4, 5, 6])
len(foo) 3
foo.size()[0] 3
bar.shape torch.Size([4, 5, 6])
len(bar) 4

Are you sure you don’t reference any global variables, only the arguments to the function? Beyond that, I’m out of theories.

Well, with \epsilon more thought, I added a print statement to the function to check the shape of the computed mask. Here’s what I see when I run the test cell:

fake_mask.shape torch.Size([9999])
target_images.shape torch.Size([9999, 1])
fake_mask.shape torch.Size([9999])
target_images.shape torch.Size([9999, 10, 10])
foosum.size() torch.Size([9999])
fake_mask.shape torch.Size([9999])
target_images.shape torch.Size([9999, 1])
fake_mask.shape torch.Size([9999])
target_images.shape torch.Size([9999, 1])
fake_mask.shape torch.Size([9999])
target_images.shape torch.Size([9999, 10, 10])
Success!

How does that compare to what you get?

My interpretation of the grader error message is that it’s saying that the mask is not that shape. Can you think of any reason why that would happen?

Maybe one other possibility is that the graders here do not do an automatic “Save” for you. I have not tested that hypothesis here. Over in DLS, some of the graders do the save and some don’t. In the cases where they don’t, it’s possible that the grader is running an old version of your code. In other words, WYS (What You See) may not be WTGG (What the Grader Gets).

Thanks for your help!

I found out what was wrong. I computed n_samples = …size()[0]. I guess that variable name was already used elsewhere as a global. After I removed it, my code passed.


Whew! You had me worried there for a while. So it was a “global variables” problem, but not of the sort that I had in mind. It’s great to hear that you found the solution. Onward! :nerd_face:


Hi…I was reading your solution. Could you explain why my code is not working? Thank you.

{moderator edit - solution code removed}

That’s a nice elegant way to write the code and it should work, but I think the problem is that you have reversed the meaning of the selection criterion. My understanding is that if the random value is > p_real, you want a fake image. Your code gives a real image in that case, right?

Hi, I made the correction to get the real image if the random value is > p_real. Even now, I get the following error. Please help. Thank you.


AssertionError Traceback (most recent call last)
<ipython-input-…> in <module>
6 )
7 # Check that the shape is right
----> 8 assert tuple(test_combination.shape) == (n_test_samples, 1)
9 # Check that the ratio is right
10 assert torch.abs(test_combination.mean() - 0.3) < 0.05

AssertionError:

I hope that you just typed something different than what you were actually thinking. The point is that when the random value is > p_real, then the result should be fake, not real. That’s not what you actually typed …

Oh, wait, I tried implementing your method and I get the same error. If I print out the shape of the input real and the output fake_images with that code, here’s what I see:

real.shape torch.Size([9999, 1])
target_images.shape torch.Size([9999, 9999])

So the output shape is clearly wrong, but more research is required to understand why. My understanding of how torch.where works must be wrong. Stay tuned!

Ok, the fundamental problem is that torch.where operates “elementwise” not “row-wise”, which is what we really need there. It turns out that because the second dimension is 1 in the first test case, it at least doesn’t throw an error, but it ends up interpreting it in a weird way and “broadcasting” to end up with a square result.

Here’s a test cell which demonstrates how this works with a small example:

# Play cell to understand torch.where
real = torch.ones(3,1) * 0.5
fake = torch.ones(3,1) * -0.75
fake_mask = torch.rand(len(real)) > 0.5
print(f"fake_mask.shape {fake_mask.shape}")
print(f"fake_mask = {fake_mask}")
target_images = torch.where(fake_mask, fake, real)
print(f"target_images.shape {target_images.shape}")
print(f"target_images {target_images}")
# Now try it with a 2D tensor of the correct shape
fake_mask = torch.reshape(fake_mask, (3,1))
print(f"fake_mask.shape {fake_mask.shape}")
print(f"fake_mask = {fake_mask}")
target_images = torch.where(fake_mask, fake, real)
print(f"target_images.shape {target_images.shape}")
print(f"target_images {target_images}")
# Now try non-singleton second dimension
real = torch.ones(3,5) * 0.5
fake = torch.ones(3,5) * -0.75
# This now throws an error if you use the 1D mask
# target_images = torch.where(fake_mask, fake, real)
# print(f"target_images.shape {target_images.shape}")
# print(f"target_images {target_images}")
# Now try it with a 2D tensor of the correct shape
fake_mask = torch.reshape(fake_mask, (3,1))
print(f"fake_mask.shape {fake_mask.shape}")
print(f"fake_mask = {fake_mask}")
target_images = torch.where(fake_mask, fake, real)
print(f"target_images.shape {target_images.shape}")
print(f"target_images {target_images}")

Running the above gives this result:

fake_mask.shape torch.Size([3])
fake_mask = tensor([False,  True, False])
target_images.shape torch.Size([3, 3])
target_images tensor([[ 0.5000, -0.7500,  0.5000],
        [ 0.5000, -0.7500,  0.5000],
        [ 0.5000, -0.7500,  0.5000]])
fake_mask.shape torch.Size([3, 1])
fake_mask = tensor([[False],
        [ True],
        [False]])
target_images.shape torch.Size([3, 1])
target_images tensor([[ 0.5000],
        [-0.7500],
        [ 0.5000]])
fake_mask.shape torch.Size([3, 1])
fake_mask = tensor([[False],
        [ True],
        [False]])
target_images.shape torch.Size([3, 5])
target_images tensor([[ 0.5000,  0.5000,  0.5000,  0.5000,  0.5000],
        [-0.7500, -0.7500, -0.7500, -0.7500, -0.7500],
        [ 0.5000,  0.5000,  0.5000,  0.5000,  0.5000]])

So what that last section shows is that you can get this to work with torch.where, provided that you first “reshape” the computed mask to a 2D column tensor. Then broadcasting will give you the correct result, even in the case that the input does not have a trivial second dimension. If you just use the default 1D Boolean mask that you get, then it fails.
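If you did want to pursue the torch.where approach in the general case, here’s a sketch (with made-up tensors, not the grader’s data) of one way to reshape the mask so it broadcasts against inputs with any number of trailing dimensions. This is just to illustrate the broadcasting point, not the only way to solve the exercise:

# Play cell: make the mask broadcastable for inputs with extra dimensions
import torch
real = torch.ones(4, 10, 10) * 0.5
fake = torch.ones(4, 10, 10) * -0.75
mask = torch.rand(len(real)) > 0.5                          # shape (4,)
mask = mask.reshape(len(real), *([1] * (real.dim() - 1)))   # shape (4, 1, 1)
out = torch.where(mask, fake, real)
print(f"out.shape {out.shape}")   # torch.Size([4, 10, 10]); each sample is all real or all fake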

Hi Paul,

Yes, you’re right… “… when the random value is > p_real , then the result should be fake , not real…”

Thank you for the hint on reshape; however, neither of the options below worked for me.

fake_mask = torch.reshape(fake_mask,real.shape)
fake_mask = torch.reshape(fake_mask,real.shape[0])

How do I work this out, please? Thank you.

I get the below error.


RuntimeError Traceback (most recent call last)
<ipython-input-…> in <module>
15 torch.ones(n_test_samples, 10, 10),
16 torch.zeros(n_test_samples, 10, 10),
---> 17 0.8
18 )
19 # Check that the shape is right

<ipython-input-…> in combine_sample(real, fake, p_real)
13 print(real.shape)
14 fake_mask = torch.rand(len(real)) > p_real
---> 15 fake_mask = torch.reshape(fake_mask,real.shape)
16
17 target_images = torch.where(fake_mask, fake.clone(), real.clone())

RuntimeError: shape '[9999, 10, 10]' is invalid for input of size 9999

Note that neither of your reshape lines does what I suggested: for the first test case, you need the mask to be a column vector. The point is you are selecting “by row”, right? But it turns out they have multiple test cases with differing numbers of dimensions, so this is not so easy to implement.

In order to get this to work using torch.where, you need the mask to have a first dimension of 9999 and the rest of the dimensions to be size 1, while having the same number of dimensions as the inputs, so that “broadcasting” will work. Given that they have test cases with different numbers of dimensions, maybe this is not really the best strategy. If you drop torch.where and the reshaping and just use the mask directly, it worked for me (I’m not supposed to actually write the code for you, but I’ll describe it in words):

Copy the cloned real array to the output.
Then use the mask to replace only the “true” entries with fake inputs.

That would be analogous to coding something like this:

A[A < 0.] = 0.

to implement ReLU. In MATLAB they call this “logical indexing” and it is supported in Python as well. It worked for me in this case with the mask as a 1D boolean vector: indexing with it correctly selects along the first dimension of the input.
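For reference, here is that ReLU analogy as a runnable snippet (the tensor A is just made up for illustration). It shows the elementwise form of logical indexing; the same bracket syntax with a 1D boolean vector selects whole samples along the first dimension instead, as described above:

# Play cell: "logical indexing" in PyTorch
import torch
A = torch.tensor([[-1.0, 2.0], [-3.0, 4.0]])
A[A < 0.] = 0.                    # elementwise mask: a one-line ReLU
print(f"A {A}")                   # tensor([[0., 2.], [0., 4.]])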

Hi Paul,

I am getting a new error. Could you please advise? Exactly what order needs to be maintained? Thank you.


AssertionError Traceback (most recent call last)
<ipython-input-…> in <module>
35 test_combination = combine_sample(test_reals, test_fakes, 0.3)
36 # Make sure that the order is maintained
---> 37 assert torch.abs(test_combination - test_reals).sum() < 1e-4
38 if torch.cuda.is_available():
39 # Check that the solution matches the input device

AssertionError:

My guess is you are doing the selection in an “elementwise” way, instead of the intended “row-wise” manner. Test your code with some sample vectors like the ones I showed in my examples above and see if you get outputs that have a mix of values in each entry instead of each “sample” being consistently real or fake.

Here’s a test:

# Play cell to test combine_sample
my_real = torch.ones(7,5) * 0.5
my_fake = torch.ones(7,5) * -0.75
my_output = combine_sample(my_real, my_fake, 0.4)
print(f"my_output = {my_output}")

When I run that, here’s what I get with code that passes the tests in the notebook:

real.shape torch.Size([7, 5])
fake.shape torch.Size([7, 5])
fake_mask.shape before torch.Size([7])
target_images.shape torch.Size([7, 5])
my_output = tensor([[ 0.5000,  0.5000,  0.5000,  0.5000,  0.5000],
        [-0.7500, -0.7500, -0.7500, -0.7500, -0.7500],
        [ 0.5000,  0.5000,  0.5000,  0.5000,  0.5000],
        [-0.7500, -0.7500, -0.7500, -0.7500, -0.7500],
        [-0.7500, -0.7500, -0.7500, -0.7500, -0.7500],
        [ 0.5000,  0.5000,  0.5000,  0.5000,  0.5000],
        [ 0.5000,  0.5000,  0.5000,  0.5000,  0.5000]])

Notice that every row is either all real or all fake. What do you see when you run that test with your code?

Hi Paul, I get the below output upon running:

my_real = torch.ones(7,5) * 0.5
my_fake = torch.ones(7,5) * -0.75
my_output = combine_sample(my_real, my_fake, 0.4)
print(f"my_output = {my_output}")

my_output = tensor([[ 0.5000,  0.5000,  0.5000,  0.5000,  0.5000],
        [ 0.5000,  0.5000,  0.5000,  0.5000,  0.5000],
        [ 0.5000,  0.5000,  0.5000,  0.5000,  0.5000],
        [ 0.5000,  0.5000,  0.5000,  0.5000,  0.5000],
        [-0.7500, -0.7500, -0.7500, -0.7500, -0.7500],
        [-0.7500, -0.7500, -0.7500, -0.7500, -0.7500],
        [-0.7500, -0.7500, -0.7500, -0.7500, -0.7500]])
Thank you.

Best regards

Your output looks correct in the sense that you are selecting “real” or “fake” by full samples (rows). But it’s a little odd that your method of doing the random selection doesn’t really appear random: all the real values are followed by all the fake values. Notice that they are intermixed in my implementation. It’s all (pseudo) random, so your result could still come from the same logic as mine. Is it repeatable?

Do the test cases and the grader accept your solution?

Hi Paul, thank you for your note. Yes, even I am not able to figure that out. I am getting the below error on the test cases, and I am not passing the grader. Thank you.

    filt = torch.rand(len(real)) > p_real
    reals = real.clone()[~filt]
    fakes = fake.clone()[filt]
    target_images = torch.cat((reals , fakes), 0).reshape(real.shape)
 

my_real = torch.ones(7,5) * 0.5
my_fake = torch.ones(7,5) * -0.75
my_output = combine_sample(my_real, my_fake, 0.4)

print(f"my_output = {my_output}")
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-10-8c6f755161fb> in <module>
     35 test_combination = combine_sample(test_reals, test_fakes, 0.3)
     36 # Make sure that the order is maintained
---> 37 assert torch.abs(test_combination - test_reals).sum() < 1e-4
     38 if torch.cuda.is_available():
     39     # Check that the solution matches the input device

AssertionError: