C1W1 test_gen_loss error: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

I get the stack trace below.

It looks like test_gen_loss() calls backward() on a tensor, not on the generator object.

The docstring for get_gen_loss says to return a scalar tensor?

Any idea how to work around this issue?

RuntimeError                              Traceback (most recent call last)
Input In [17], in <cell line: 46>()
     43     assert not torch.all(torch.eq(old_weight, new_weight))
     45 test_gen_reasonable(10)
---> 46 test_gen_loss(18)
     47 print("Success!")

Input In [17], in test_gen_loss(num_images)
     35 # Check that the loss is reasonable
     36 assert (gen_loss - 0.7).abs() < 0.1
---> 37 gen_loss.backward()
     38 old_weight = gen.gen[0][0].weight.clone()
     39 print(old_weight)


File /usr/local/lib/python3.8/dist-packages/torch/autograd/__init__.py:197, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
    192     retain_graph = create_graph
    194 # The reason we repeat same the comment below is that
    195 # some Python versions print out the first line of a multi-line function
    196 # calls in the traceback and some print out the last line
--> 197 Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
    198     tensors, grad_tensors_, retain_graph, create_graph, inputs,
    199     allow_unreachable=True, accumulate_grad=True)

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

Kind regards

Andy

These error messages are always a bit hard to interpret. The error occurs during gradient processing, and it is telling you that gradient computation was not enabled on the relevant tensors.

The situation is fundamentally asymmetric: when you compute the discriminator's loss, you can detach the generator's output, because you don't need the generator's gradients there. But when you train the generator, you can't do that: you need the gradients of both the generator and the discriminator, because the generator's cost is computed from the discriminator's output. You don't apply the gradients to the discriminator in that case, but you still need them to flow through it. Here's another thread from a while ago that discusses these points.

So my guess is that you did something like detaching the generator's output in this case.
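Here's a minimal sketch of the pattern (stand-in modules, not the assignment's actual get_gen_loss/get_disc_loss code): the discriminator loss detaches the fake images, while the generator loss must keep the graph connected through the discriminator so backward() can reach the generator's weights.

# Minimal sketch with stand-in modules (names and shapes are assumptions,
# not the assignment's code) showing where detach() belongs.
import torch
import torch.nn as nn

z_dim, im_dim, batch = 8, 16, 4
gen = nn.Sequential(nn.Linear(z_dim, im_dim))   # stand-in generator
disc = nn.Sequential(nn.Linear(im_dim, 1))      # stand-in discriminator
criterion = nn.BCEWithLogitsLoss()

noise = torch.randn(batch, z_dim)
fake = gen(noise)

# Discriminator loss: detach the fake images so the generator's graph
# is not pulled into the discriminator's backward pass.
disc_fake_pred = disc(fake.detach())
disc_loss = criterion(disc_fake_pred, torch.zeros_like(disc_fake_pred))
disc_loss.backward()   # gradients reach only the discriminator

# Generator loss: do NOT detach. The loss must stay connected through the
# discriminator back to the generator; detaching here produces exactly
# "element 0 of tensors does not require grad and does not have a grad_fn".
gen_fake_pred = disc(gen(noise))
gen_loss = criterion(gen_fake_pred, torch.ones_like(gen_fake_pred))
gen_loss.backward()    # gradients flow through disc into gen's weights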

Thank you

You nailed it! I removed detach() and it worked. I figured it out by just skipping the test; when I implemented the training loop, I noticed it was not learning.