C1W1 test_disc_reasonable

It looks like the output of get_noise is a tuple.
I just return torch.randn(n_samples, z_dim, device=device) when I define get_noise. How can I revise it?

TypeError                                 Traceback (most recent call last)
Input In [15], in <cell line: 73>()
     70 if num_steps >= max_tests:
     71     break
---> 73 test_disc_reasonable()
     74 test_disc_loss()
     75 print("Success!")

Input In [15], in test_disc_reasonable(num_images)
     11 criterion = torch.mul # Multiply
     12 real = torch.ones(num_images, z_dim)
---> 13 disc_loss = get_disc_loss(gen, disc, criterion, real, num_images, z_dim, 'cpu')
     14 assert torch.all(torch.abs(disc_loss.mean() - 0.5) < 1e-5)
     16 gen = torch.ones_like

Input In [14], in get_disc_loss(gen, disc, criterion, real, num_images, z_dim, device)
     20 # These are the steps you will need to complete:
     21 #   1) Create noise vectors and generate a batch (num_images) of fake images.
     22 #      Make sure to pass the device argument to the noise.
     31 # Important: You should NOT write your own loss function here - use criterion(pred, true)!
     32 #### START CODE HERE ####
     33 noise = get_noise(num_images, z_dim, device=device),
---> 34 fake_data = gen(noise),
     35 prediction_fake = disc(fake_data.detach()),
     36 loss_fake = criterion(prediction_fake, torch.zeros_like(prediction_fake)),

TypeError: zeros_like(): argument 'input' (position 1) must be Tensor, not tuple


Oh! That's a subtle bug! You're right, it does look like get_noise is returning a tuple, and what you describe returning from get_noise seems correct. But look very closely at your noise = get_noise(…) line. Notice the comma at the end. That trailing comma tells Python to build a tuple whose first (and only) element is the result of get_noise(…).

Try removing the comma and see what happens. I don't know if you tried adding print statements to print the type of noise both when you called it from within get_disc_loss() and when you called it from within test_get_noise(). That would have shown you that something in get_disc_loss() was turning the result into a tuple. But even then, it would be very easy to overlook that extra comma. I definitely didn't notice it at first.
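Here is a minimal standalone demonstration of the bug (the get_noise definition below is just the one the original poster described, not the official assignment code):

```python
import torch

def get_noise(n_samples, z_dim, device="cpu"):
    # Assumed definition, matching what the original poster described.
    return torch.randn(n_samples, z_dim, device=device)

noise_buggy = get_noise(4, 8),   # trailing comma -> a 1-element tuple
noise_fixed = get_noise(4, 8)    # no comma -> a plain Tensor

print(type(noise_buggy))  # <class 'tuple'>
print(type(noise_fixed))  # <class 'torch.Tensor'>
```

In Python it is the comma, not the parentheses, that creates a tuple, which is why a single stray comma at the end of an assignment silently changes the type.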


Thank you for your quick reply! I realized that and corrected all the following cells. However, I have one more question.

I get this error when testing get_gen_loss, but I don't know why it happens… Could you help me?

AssertionError                            Traceback (most recent call last)
Input In [38], in <cell line: 45>()
     41 assert not torch.all(torch.eq(old_weight, new_weight))
     44 test_gen_reasonable(10)
---> 45 test_gen_loss(18)
     46 print("Success!")

Input In [38], in test_gen_loss(num_images)
     39 gen_opt.step()
     40 new_weight = gen.gen[0][0].weight
---> 41 assert not torch.all(torch.eq(old_weight, new_weight))



@iamjiyoung, the test that's failing checks that old_weight differs from new_weight. It looks like the weight is not being updated. My suggestion would be to check that you are using detach() everywhere you should (and not in places you shouldn't). Read the instructions about detach() carefully and check your code.
This is a common thing to get tripped up by, so if you don't see it, you can try searching for detach in previous posts for this assignment; you will probably find just what you need.
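Detaching a tensor removes it from the autograd graph, which is why a stray detach() can leave the generator's weights frozen. A tiny illustration in plain PyTorch (not assignment code):

```python
import torch

# A stand-in for a generator's output, connected to the autograd graph.
gen_out = torch.ones(3, requires_grad=True) * 2

loss_attached = gen_out.sum()           # still in the graph; backward() reaches gen_out
loss_detached = gen_out.detach().sum()  # cut from the graph: no gradients flow upstream

print(loss_attached.requires_grad)   # True
print(loss_detached.requires_grad)   # False
```

If the loss you call backward() on was built from a detached tensor, the optimizer step is a no-op for everything upstream of the detach.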

Thank you so much! I passed. But one thing: in UNQ_C7, why don't we need to use detach()? In my understanding, we need to detach the generator's output before passing it to the discriminator. Isn't that right?

The generator's goal is for the discriminator to be fooled by the fake images the generator creates, so the generator's loss is based on the discriminator's results on the fake images. If we detached, gradients could never flow back through the discriminator's predictions to the generator, so the generator's weights would never change and it wouldn't learn anything.
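To make that concrete, here is a hedged sketch of what a generator-loss step might look like. The helper names (gen, disc, criterion) and the signature are assumptions modeled on the discriminator-loss code earlier in the thread, not the official solution:

```python
import torch

def get_gen_loss(gen, disc, criterion, num_images, z_dim, device):
    # Hypothetical sketch, assuming the same helper signatures as above.
    noise = torch.randn(num_images, z_dim, device=device)
    fake = gen(noise)   # NO detach() here: gradients must flow back to gen
    pred = disc(fake)
    # The generator wants the discriminator to label fakes as real (ones).
    return criterion(pred, torch.ones_like(pred))
```

Because fake is never detached, calling backward() on this loss populates gradients for the generator's parameters, which is exactly what the failing test was checking for.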

Here's a good discussion that goes into more detail.