“You’ll need to fill in find_optimal to find these parameters to solve this part!”
“You can hard-code/brute-force these numbers if you would like, but you are encouraged to try to solve this problem in a more general way”
Hello, I wonder if it is possible to use the autograd functionality to find the optimal probability p_real.
Something like this (I know it is wrong, but does it go in the correct direction?):
lrF = 0.0002
n_epochsF = 20
# p_real must be a tensor with requires_grad=True for the optimizer to update it
p_real = torch.tensor(0.5, requires_grad=True)
# Adam expects an iterable of parameters, not a bare value
opt = torch.optim.Adam([p_real], lr=lrF)
for epoch in range(n_epochsF):
    print("main epoch ", epoch)
    print("p_real ", p_real)
    opt.zero_grad()
    loss = 1 - eval_augmentation(p_real, gen_names[0], n_test=20)
    loss.backward(retain_graph=True)
    opt.step()
Thank you very much for your answer
Hi Francesc_Folch!
Hope you are doing well.
The eval_augmentation() function just returns a score value.
Also, the loss you have shown, 1 - eval_augmentation(), is effectively a constant with respect to p_real: the call returns a plain score that is not connected to the autograd graph, so constant (1) minus constant (the score) is still a constant. Its derivative is always zero, and minimizing a constant is something we can’t do.
Hence, the approach you are taking leads nowhere, so it isn’t in the correct direction. But it is good to experiment with such things, and there is nothing wrong with making mistakes; we always learn something new. Keep going.
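Here is a minimal sketch of why the gradient vanishes, assuming eval_augmentation computes its score in evaluation mode, i.e. outside the autograd graph (eval_augmentation_stub below is a hypothetical stand-in, not the actual assignment function):

import torch

# Hypothetical stand-in for eval_augmentation: evaluation code typically
# runs under torch.no_grad(), so the returned score carries no grad_fn.
def eval_augmentation_stub(p_real):
    with torch.no_grad():
        return torch.rand(())  # some score, detached from the graph

p_real = torch.tensor(0.5, requires_grad=True)
loss = 1 - eval_augmentation_stub(p_real)

print(loss.requires_grad)  # False -- the loss is a constant w.r.t. p_real
# loss.backward() would raise:
# "element 0 of tensors does not require grad and does not have a grad_fn"

Since no gradient path connects the score back to p_real, tuning it requires a gradient-free method, which is why the assignment text allows brute-forcing the numbers.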
Regards,
Nithin
Thanks for your answer.
So, which would be the more general way to solve the problem, instead of brute force?
Thanks in advance