C2W3 - UNQ_C3 and UNQ_C4 errors

Hello guys

Today I'd like to ask for your help with my mistakes :smiley:

UNQ_C3
In the InjectNoise class, my nn.Parameter is:

{moderator edit - solution code removed}

And in the forward method:

{moderator edit - solution code removed}

But I get the error:

AssertionError                            Traceback (most recent call last)
<ipython-input-204-8b78854b35cd> in <module>
     14 # Check that the change is per-channel
     15 assert torch.abs((inject_noise(fake_images) - fake_images).std(0)).mean() > 1e-4
---> 16 assert torch.abs((inject_noise(fake_images) - fake_images).std(1)).mean() < 1e-4
     17 assert torch.abs((inject_noise(fake_images) - fake_images).std(2)).mean() > 1e-4
     18 assert torch.abs((inject_noise(fake_images) - fake_images).std(3)).mean() > 1e-4
AssertionError: 

UNQ_C4
In the AdaIN class, the init code:

{moderator edit - solution code removed}

And in the forward method:

{moderator edit - solution code removed}

But I get the error:

AssertionError                            Traceback (most recent call last)
<ipython-input-203-30b30eb31fdf> in <module>
     23 test_w = torch.ones(n_test, w_channels)
     24 test_output = adain(test_input, test_w)
---> 25 assert(torch.abs(test_output[0, 0, 0, 0] - 3 / 5 + torch.sqrt(torch.tensor(9 / 8))) < 1e-4)
     26 assert(torch.abs(test_output[0, 0, 1, 0] - 3 / 5 - torch.sqrt(torch.tensor(9 / 32))) < 1e-4)
     27 print("Success!")
AssertionError: 

It feels strange, because I am pretty sure I have understood everything, but apparently not :frowning:

Do you have any clues for me?

Best regards, Samir!!

For the InjectNoise function, your parameter looks correct, although you could have written it more simply by using torch.randn. But your noise_shape is wrong for the next part. It’s the “mirror image” of the shape in the parameters section, although maybe that’s too mysterious a way to say it. How about this: one of the dimensions needs to be 1. Guess which one?
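If it helps to see the broadcasting concretely once you’ve made your guess, here is a minimal generic sketch (not the assignment code; the shapes are just my reading of the “mirror image” idea):

```python
import torch

# Hypothetical shapes, "channels first" as in PyTorch: (N, C, H, W).
# The per-channel weight keeps a real size only in the channel slot;
# the noise keeps real sizes everywhere *except* the channel slot.
n_samples, channels, height, width = 4, 3, 8, 8

weight = torch.randn(1, channels, 1, 1)           # one scalar per channel
noise = torch.randn(n_samples, 1, height, width)  # one value per pixel, shared by all channels

# Broadcasting expands both operands to a full (N, C, H, W) tensor
change = weight * noise
print(change.shape)  # torch.Size([4, 3, 8, 8])
```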

For the AdaIN section, you got the init part correct as far as I can see. But your code for the transformed_image is quite a bit different from mine. What is going on with the torch.mean divided by torch.std for the normalized image? I thought it was already normalized. They gave you that code, right?
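For reference, here is a quick standalone check (again, not the assignment code) showing that PyTorch’s nn.InstanceNorm2d output is already normalized per sample and per channel, so re-normalizing with torch.mean and torch.std shouldn’t be necessary:

```python
import torch
import torch.nn as nn

x = torch.randn(2, 3, 16, 16) * 5 + 7  # arbitrary scale and shift
norm = nn.InstanceNorm2d(3)            # instance normalization, no learned affine by default
y = norm(x)

# Every (sample, channel) slice already has ~zero mean and ~unit std:
print(y.mean(dim=(2, 3)))  # all close to 0
print(y.std(dim=(2, 3)))   # all close to 1
```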

Hello Paul
I really appreciate your answers and the time you have put into them. I will try to adjust my code, and I hope everything will be fine. Thank you.
See you next time!!!

Everything is fine and everything is done (thanks to you). UNQ_C3 was a mistake of mine, because I was tired, I suppose.

For UNQ_C4, to be honest I am not very good at math, but with some logic we succeeded… thank you again!!

Thanks for this hint. It got my UNQ_C3 cell to succeed, but I don’t understand why it works. Can you explain why this mirror image idea is correct? I don’t understand how it makes the change per-channel. I guess I’m having trouble thinking in 4 dimensions. (I had originally put the 1 in the n_sample parameter position.)

When dealing with tensors for image data in PyTorch or TensorFlow, there are two choices for the orientation of the dimensions: “channels last” and “channels first”. All the DLS courses, which use TensorFlow, use the “channels last” orientation, which is the default in TF. That means if you have a batch of m samples, it is a 4D tensor with the following dimensions:

samples x height x width x channels

So if you have 100 samples, each of which is a 1024 x 2048 pixel RGB image, then the dimensions will be:

(100, 1024, 2048, 3)

The last dimension is the 3 channels giving the color value for each pixel.

But in PyTorch, the default is the “channels first” orientation, so the dimensions of a batch of samples will be:

samples x channels x height x width

So if all the characteristics of the batch of samples were the same as above, you’d have dimensions:

(100, 3, 1024, 2048)
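To make the 4D picture concrete, here is a small sketch (shapes shrunk from the example above). Note that the assertion in your earlier traceback demands that the change be constant across dim 1, which can only hold if the weight is constant during that check, so this demo uses an all-ones weight:

```python
import torch

images = torch.randn(10, 3, 16, 16)  # samples x channels x height x width
noise = torch.randn(10, 1, 16, 16)   # the 1 sits in the channel slot (dim 1)
weight = torch.ones(1, 3, 1, 1)      # a constant per-channel weight, just for this demo

change = (images + weight * noise) - images

# The change varies over samples (dim 0), height (dim 2), and width (dim 3),
# but is identical across channels (dim 1):
print(change.std(0).mean())  # > 1e-4
print(change.std(1).mean())  # ~0, so < 1e-4
print(change.std(2).mean())  # > 1e-4
print(change.std(3).mean())  # > 1e-4
```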