I get the stack trace below.
It looks like test_gen_loss() calls backward() on a tensor, not on the generator object.
The docstring for get_gen_loss says it should return a scalar tensor, doesn't it?
Any idea how to work around this issue? I've added a minimal standalone repro of the same error after the traceback below.
RuntimeError Traceback (most recent call last)
Input In [17], in <cell line: 46>()
43 assert not torch.all(torch.eq(old_weight, new_weight))
45 test_gen_reasonable(10)
---> 46 test_gen_loss(18)
47 print("Success!")
Input In [17], in test_gen_loss(num_images)
35 # Check that the loss is reasonable
36 assert (gen_loss - 0.7).abs() < 0.1
---> 37 gen_loss.backward()
38 old_weight = gen.gen[0][0].weight.clone()
39 print(old_weight)
File /usr/local/lib/python3.8/dist-packages/torch/autograd/__init__.py:197, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
192 retain_graph = create_graph
194 # The reason we repeat same the comment below is that
195 # some Python versions print out the first line of a multi-line function
196 # calls in the traceback and some print out the last line
--> 197 Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
198 tensors, grad_tensors_, retain_graph, create_graph, inputs,
199 allow_unreachable=True, accumulate_grad=True)
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
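For what it's worth, I can reproduce the same RuntimeError outside the assignment whenever the loss tensor is computed without a graph attached. This is only a sketch (criterion and fake_pred are made-up names, not my actual notebook code), but it shows the common ways a loss ends up with requires_grad=False:

import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()
fake_pred = torch.randn(10, 1, requires_grad=True)

# 1) Computing the loss under torch.no_grad() strips the graph.
with torch.no_grad():
    loss = criterion(fake_pred, torch.ones_like(fake_pred))
print(loss.requires_grad)  # False
# loss.backward()  # RuntimeError: element 0 of tensors does not require grad ...

# 2) Detaching the inputs (or rebuilding the result with torch.tensor(...)) does the same thing.
loss = criterion(fake_pred.detach(), torch.ones_like(fake_pred))
print(loss.requires_grad)  # False

# A loss built from graph-attached tensors keeps its grad_fn and backward() works.
loss = criterion(fake_pred, torch.ones_like(fake_pred))
print(loss.requires_grad)  # True
loss.backward()

So I suspect my get_gen_loss is returning a tensor that has been detached from the graph somewhere, but I can't see where.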
Kind regards
Andy