Hi everyone, I am trying to fix a bug in my assignment, but I have made so many changes that I lost the last working version of the code. Is it possible to reset the workspace, or can someone explain how? The whole project is fine, but in the last part I am getting this error:
RuntimeError: Given groups=1, weight of size [64, 1, 4, 4], expected input[128, 11, 28, 28] to have 1 channels, but got 11 channels instead. This is in the unit test code for the assignment. I am stuck and I can't find the solution; can someone give me some tips? Thank you.
Thank you, I was able to reset my notebook, but I still have an issue with the last part of the assignment. I don't understand why the dimensions of the discriminator are not the same as the argument that I am passing. This is what I pass into the discriminator:
And I am getting this error: RuntimeError: Given groups=1, weight of size [64, 10, 4, 4], expected input[128, 11, 28, 28] to have 10 channels, but got 11 channels instead.
But I don't know how to adjust the dimensions to match the discriminator's forward pass.
The Conditional GAN and Controllable Generation are two separate assignments, right? Please be a bit more specific about which assignment and which function is throwing the error. One helpful thing would be to show us the actual exception trace you are getting: the full trace, not just the error message.
It looks like the discriminator dimensions for the forward pass are incorrect. I am not sure whether I need to change the discriminator or how else to solve the issue.
The unit tests for the other parts of the code pass and give the "Success!" message. On the other hand, the code in UNQ_C4 (UNIQUE CELL IDENTIFIER, DO NOT EDIT) GRADED CELL is giving an error and I am not sure how to solve it.
Interesting. Your dimensions for the input to the discriminator agree with mine, so the error must be in how the Discriminator is instantiated. They gave you that block of code right before UNQ_C4, but it depends on your get_input_dimensions function being correct. I added a print statement to that "instantiation" cell:
generator_input_dim, discriminator_im_chan = get_input_dimensions(z_dim, mnist_shape, n_classes)
print(f"gen_dim {generator_input_dim} disc_dim {discriminator_im_chan}")
gen = Generator(input_dim=generator_input_dim).to(device)
gen_opt = torch.optim.Adam(gen.parameters(), lr=lr)
disc = Discriminator(im_chan=discriminator_im_chan).to(device)
disc_opt = torch.optim.Adam(disc.parameters(), lr=lr)
def weights_init(m):
    if isinstance(m, nn.Conv2d) or isinstance(m, nn.ConvTranspose2d):
        torch.nn.init.normal_(m.weight, 0.0, 0.02)
    if isinstance(m, nn.BatchNorm2d):
        torch.nn.init.normal_(m.weight, 0.0, 0.02)
        torch.nn.init.constant_(m.bias, 0)

gen = gen.apply(weights_init)
disc = disc.apply(weights_init)
I don't obtain the same result; however, I would like to share one particular issue:
discriminator_im_chan = mnist_shape[0] * n_classes  # this is correct, but it does not pass the unit test when disc_dim == 21 unless I set discriminator_im_chan = 1, and if I do that then I get an error later in the code.
Your code in get_input_dimensions is incorrect. Here's the relevant sentence from the instructions:
For the discriminator, you need to add a channel for every class.
Of course multiplication can be interpreted as repeated addition, but it is a mistake to use * as the operator when computing the discriminator dimension there. Now the question is why the test case for the function still passes. Oh, I see: you modified the test case to conform to your incorrect result.
There is a lesson to be learned here: that never ends well. When a test fails, the solution is not to change the test; it is to figure out why your code fails it. As a point of information, these courses have been in operation for (I think) about four years now, so any bugs in the tests would have been reported and corrected long ago.
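To make the addition-vs-multiplication point concrete, here is a minimal sketch of how the dimensions work out; the function name and call match the snippet above, but the body is my illustration, not the official solution:

```python
def get_input_dimensions(z_dim, mnist_shape, n_classes):
    # Generator input: noise vector concatenated with a one-hot class vector
    generator_input_dim = z_dim + n_classes
    # Discriminator input: image channels plus one extra channel per class
    # (addition, not multiplication, per the instructions)
    discriminator_im_chan = mnist_shape[0] + n_classes
    return generator_input_dim, discriminator_im_chan

# MNIST: 1 channel, 28x28 images, 10 classes; z_dim = 64 as in the course
gen_dim, disc_dim = get_input_dimensions(64, (1, 28, 28), 10)
print(gen_dim, disc_dim)  # 74 11
```

Note that 11 is exactly the channel count in the error message (input[128, 11, 28, 28]), so the input was built correctly; it is the discriminator's im_chan that was wrong.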
The code compiled without error, but after I submitted the code, I received an error message on the grade page:
Cell #UNQ_C4. Can't compile the student's code. Error: RuntimeError("Expected object of scalar type Float but got scalar type Long for sequence element 1 in sequence argument at position #1 'tensors'")
Yes, that is a known problem caused by the fact that the grader apparently uses an older version of PyTorch than the notebooks do. Here's a thread which explains the issue and how to solve it.
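For reference, that error means tensors of different dtypes were passed to torch.cat: F.one_hot returns Long tensors, and older PyTorch versions refuse to concatenate Long with Float. A minimal sketch of the kind of fix (the shapes and the .float() cast are my assumptions about the context, not the exact course code):

```python
import torch
import torch.nn.functional as F

x = torch.randn(128, 1, 28, 28)               # float image batch
labels = torch.randint(0, 10, (128,))         # integer class labels
one_hot = F.one_hot(labels, num_classes=10)   # dtype is int64 (Long)

# Broadcast the one-hot vectors into per-class image channels and cast
# to float, so torch.cat sees a single dtype on every PyTorch version.
one_hot_image = one_hot[:, :, None, None].repeat(1, 1, 28, 28).float()
combined = torch.cat([x, one_hot_image], dim=1)
print(combined.shape)  # torch.Size([128, 11, 28, 28])
```

The explicit cast is harmless on newer PyTorch (which would promote the dtypes itself) and keeps the older grader happy.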
Note that there is quite a lot of useful history on the forum. Since things have been running for a while, there's a pretty high chance that other people have stepped on the same landmine. For example, if you enter the exact error string as a search term (in quotes), you'll find 10 or 20 threads about this issue.
Mind you, I'm happy to help with this kind of thing, but I mention the above because it can save you some time if none of the mentors respond promptly. My timezone is UTC-8 this time of year, and mentor support for GANs is a lot sparser than for the more popular courses like MLS and DLS.