Transpose convolution backprop question

I understand that the forward operation of a transpose convolution is analogous to the backward operation of a regular convolution, and vice versa, as far as computing Z[l] (forward) or dA[l-1] (backward) is concerned. I also know that the gradients of the weights and biases are computed during backpropagation through a regular convolution step. Further, I know that pooling has no trainable weights, but we must still compute the gradient with respect to A[l-1] for the benefit of earlier layers.
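As a sanity check on that duality, here is a minimal numeric sketch (assuming PyTorch; the layer sizes are arbitrary) showing that applying a transpose convolution to dZ[l] with the same filters reproduces the dA[l-1] that backprop through a regular convolution computes:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8, requires_grad=True)  # A[l-1]
w = torch.randn(5, 3, 3, 3)                      # regular conv filters W
dz = torch.randn(1, 5, 8, 8)                     # upstream gradient dZ[l]

# Regular convolution forward, then backprop dz to obtain dA[l-1]
z = F.conv2d(x, w, padding=1)
z.backward(dz)

# Transpose convolution applied (forward) to dz with the same filters
da_via_transpose = F.conv_transpose2d(dz, w, padding=1)

print(torch.allclose(x.grad, da_via_transpose, atol=1e-5))  # True
```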

My question is: when performing backpropagation through a transpose convolution step, are there any weights and biases to be updated? I suspect there are, but I can’t find any information on what the formulas would be.


What do you mean by “transpose convolution”?
Regarding your question: yes, the parameters are updated during backprop with the following formulas:

$$W = W - \alpha\, dW$$
$$b = b - \alpha\, db$$
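In framework terms, a transpose-convolution layer’s W and b receive gradients like any other trainable layer and are updated with that same rule. A minimal sketch, assuming PyTorch (the layer sizes are made up for illustration):

```python
import torch

# A transpose-convolution layer has trainable weight and bias tensors
up = torch.nn.ConvTranspose2d(in_channels=8, out_channels=4,
                              kernel_size=2, stride=2)

x = torch.randn(1, 8, 16, 16)
loss = up(x).sum()          # dummy loss so that backprop produces dW and db
loss.backward()

alpha = 0.01
with torch.no_grad():
    up.weight -= alpha * up.weight.grad   # W = W - alpha * dW
    up.bias   -= alpha * up.bias.grad     # b = b - alpha * db
```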


Transpose convolution in a CNN, i.e. upsampling an image during the expansion phase of U-Net. I’m looking for a specific definition of dW and db.
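For what it’s worth, here is a hedged sketch of what dW and db look like in the simplest single-channel, stride-1, unpadded case (PyTorch is assumed only to check the result against autograd): db is the sum of dZ over the batch and spatial dimensions, and dW is a “valid” cross-correlation of dZ with A[l-1] used as the filter, i.e. the regular-convolution weight-gradient formula with the roles of A[l-1] and dZ swapped:

```python
import torch
import torch.nn.functional as F

A  = torch.randn(1, 1, 4, 4)                      # A[l-1]
W  = torch.randn(1, 1, 3, 3, requires_grad=True)  # transpose-conv filter
b  = torch.randn(1, requires_grad=True)           # bias
dZ = torch.randn(1, 1, 6, 6)                      # upstream gradient dZ[l]

Z = F.conv_transpose2d(A, W, bias=b)              # forward: shape (1, 1, 6, 6)
Z.backward(dZ)                                    # autograd computes dW and db

db_manual = dZ.sum(dim=(0, 2, 3))                 # sum over batch and spatial dims
dW_manual = F.conv2d(dZ, A)                       # cross-correlate dZ with A[l-1]

print(torch.allclose(b.grad, db_manual, atol=1e-5))  # True
print(torch.allclose(W.grad, dW_manual, atol=1e-4))  # True
```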


Alright. There are several threads on gradients. Check these threads: one, two, and three.
