I understand that the forward pass of a transpose convolution is analogous to the backward pass of a regular convolution, and vice versa, as far as computing Z[l] (forward) or dA[l-1] (backward) is concerned. I also know that the gradients of the weights and biases are computed during backpropagation through a regular convolution step. Further, I know that pooling has no trainable weights, but we still must compute the gradient with respect to A[l-1] for the benefit of earlier layers.
My question is: when backpropagating through a transpose convolution step, are there any weights and biases to be updated? I suspect that there are, but I can't find any information on what the formulas would be.
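For what it's worth, here is a minimal sketch (using PyTorch's `nn.ConvTranspose2d`, which is my own choice of framework, not something from any reference I've found) suggesting my suspicion is right: autograd does populate weight and bias gradients for a transpose convolution layer. What I'm missing is the explicit formulas behind this.

```python
import torch
import torch.nn as nn

# A small transpose convolution layer: 3 input channels -> 2 output channels
layer = nn.ConvTranspose2d(in_channels=3, out_channels=2, kernel_size=3)

x = torch.randn(1, 3, 8, 8)      # dummy input playing the role of A[l-1]
out = layer(x)                   # forward pass produces Z[l]
loss = out.sum()                 # scalar stand-in for a real loss
loss.backward()                  # backpropagate

# Both gradients get populated, so the layer does have trainable parameters
print(layer.weight.grad.shape)   # torch.Size([3, 2, 3, 3])
print(layer.bias.grad.shape)     # torch.Size([2])
```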