C3W3_Assignment - UNQ_C5 - get_gen_loss()

Please, help…

I passed UNQ_C2, UNQ_C3 and UNQ_C4, but now I’m struggling with UNQ_C5.

My results are:

adversarial_loss_A: tensor(25272160.)
adversarial_loss_B: tensor(24123432.)
total_adversarial_loss: tensor(49395592.)

identity_loss_A: tensor(473360)
identity_loss_B: tensor(456036)
total_identity_loss: tensor(54834364.) [torch.mul(identity_loss_A + identity_loss_B, float(lambda_identity))]

cycle_loss_A: tensor(38803686)
cycle_loss_B: tensor(35603382)
cycle_loss: tensor(3943574784.) [torch.mul(cycle_loss_A + cycle_loss_B, float(lambda_cycle))]

gen_loss: tensor(4047804672.)

I don’t know if I have to detach something, or use ‘.to(device)’ somewhere… I’d appreciate some help here, thanks.

Hi @MarcosMM,
I haven’t dug into this one, but try the debugging technique I suggested for your previous question about get_disc_loss: look at what the unit test expects and compare that with your results to see if you can figure this one out on your own.

If you’re still stuck after trying that, write back here and share what error message you’re getting from the unit tests and I’ll take a closer look at the values you shared to see if I notice any hints about what might be wrong.


Hi @Wendy,

I took your advice and tried to debug my results again. It turns out my values were correct, but I noticed that the adversarial loss results were floats instead of ints. If I summed the tensors by extracting their values with ‘.item()’, the resulting sum matched the expected value.

Also, I saw this post, but I didn’t like the idea of casting anything…

My ‘bug’ was that in the adversarial loss function I was defining the labels as torch.ones(tensor_X.shape) instead of using torch.ones_like(tensor_X), which solves the ‘typing’ issue here. Now it works and the model trains. A silly mistake, but it was breaking my head…
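For anyone hitting the same issue, here is a minimal sketch of the fix (`disc_fake_pred` is a hypothetical stand-in for the discriminator’s output on generated images, and an MSE criterion is assumed for the adversarial loss):

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-in for the discriminator's prediction on fake images.
# On a real training run this tensor may live on the GPU and/or use a
# non-default dtype, which is exactly when the two constructions diverge.
disc_fake_pred = torch.rand(4, 1)

# Shape only: dtype and device fall back to PyTorch's defaults
# (torch.float32, CPU), regardless of what disc_fake_pred uses.
labels_shape_only = torch.ones(disc_fake_pred.shape)

# Shape, dtype, AND device are all copied from disc_fake_pred.
labels = torch.ones_like(disc_fake_pred)

# Adversarial loss: safe because labels match the prediction's
# dtype and device.
adversarial_loss = F.mse_loss(disc_fake_pred, labels)
```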

Thanks Wendy!


Those _like functions can be very handy!


Hi. Thanks. I faced a similar issue, and this helped me resolve it. I’m curious, though: why do torch.ones(disc_pred.shape) and torch.ones_like(disc_pred) produce quite different results? Thanks again.
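One likely explanation, sketched below with a float64 tensor standing in for `disc_pred` so the difference is visible on CPU: torch.ones(shape) builds the labels with PyTorch’s default dtype (float32) on the CPU, while torch.ones_like(disc_pred) copies the dtype and device of `disc_pred`. On a GPU run, or with any non-default dtype, the mismatched labels can change the loss values or raise device errors:

```python
import torch

# disc_pred normally comes from the discriminator; a float64 tensor
# stands in here to make the dtype difference visible without a GPU.
disc_pred = torch.rand(3, 1, dtype=torch.float64)

a = torch.ones(disc_pred.shape)  # default dtype: torch.float32, on CPU
b = torch.ones_like(disc_pred)   # copies dtype/device: torch.float64

print(a.dtype)  # torch.float32
print(b.dtype)  # torch.float64
```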