C4 W4 A2: Art_Generation_with_Neural_Style_Transfer Apply Gradients

Exercise 6 - train_step function, lines 29 and 31

grad = tape.gradient(J, generated_image)              # gradient of the cost w.r.t. the image
optimizer.apply_gradients([(grad, generated_image)])  # update generated_image using that gradient

Why do we pass both grad, which was already computed from the cost function J and generated_image, and then generated_image again when we use the apply_gradients method above?
From the TensorFlow documentation, the arguments are:

- grads_and_vars: List of (gradient, variable) pairs as returned by compute_gradients().
- global_step: Optional Variable to increment by one after the variables have been updated.
- name: Optional name for the returned operation. Default to the name passed to the Optimizer constructor.

In the lab code, does [(grad, generated_image)] correspond to the grads_and_vars argument? If so, why do we repeat generated_image? Or does generated_image correspond to a different argument?

https://www.tensorflow.org/api_docs/python/tf/compat/v1/train/Optimizer#apply_gradients

Yes.

When tape.gradient is called, J is the “target” and generated_image is the “sources”. The grad it returns is just a tensor of numbers, with no reference back to the variable it was computed for, so apply_gradients needs the explicit (gradient, variable) pairing to know which variable to update. In other words, [(grad, generated_image)] is the grads_and_vars argument; generated_image is not being passed a second time as a separate argument, it is the “variable” half of the pair.
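
Here is a minimal, self-contained sketch of the same pattern with a toy quadratic cost (the variable x, the SGD optimizer, and the learning rate are illustrative stand-ins, not from the assignment):

import tensorflow as tf

x = tf.Variable(3.0)                                   # stands in for generated_image
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

with tf.GradientTape() as tape:
    J = x ** 2                                         # target: the cost to differentiate

grad = tape.gradient(J, x)                             # sources: w.r.t. x; returns a plain tensor, dJ/dx = 6.0

# grad is just numbers and carries no link back to x, so apply_gradients
# takes (gradient, variable) pairs, i.e. the grads_and_vars argument.
optimizer.apply_gradients([(grad, x)])
print(x.numpy())                                       # 3.0 - 0.1 * 6.0 = 2.4

If you differentiate with respect to several variables, tape.gradient returns a list of gradients in matching order, and the usual idiom is optimizer.apply_gradients(zip(grads, variables)).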