Transposed Convolution over Volume

In the lecture video, transposed convolution was illustrated with a 2-D example. What do the computations involved in transposed convolution over volume look like? Are there any similarities with regular convolution over volume?


Prof Andrew Ng actually covers this in a later video:


I think that video was discussing convolutions over 3-D volumes, not transposed convolutions over 2-D inputs with multiple channels. Going from the Transposed Convolutions video in Week 3 below, how would the computation work when the filter and input have multiple channels?


Sorry, my mistake. I did not notice you were talking about 2D transposed convolutions 🙂

The multi-channel extension of transposed convolution is the same as for ordinary convolution. Each filter must have the same number of channels as the input, n_C_prev. Applying one filter collapses all input channels into a single 2D slice; stacking the outputs of n_C such filters gives a 3D output volume of depth n_C.
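Here is a minimal NumPy sketch of that computation, assuming the "scatter" view of transposed convolution with no output padding or cropping. The function name, argument order, and shape conventions are my own choices for illustration, not from the course:

```python
import numpy as np

def conv_transpose_volume(X, W, stride=2):
    """Naive transposed convolution over volume.

    X: input volume, shape (H, W_in, n_C_prev)
    W: filters, shape (f, f, n_C_prev, n_C)
    Returns an output of shape ((H-1)*stride + f, (W_in-1)*stride + f, n_C).
    """
    H, Wi, nCp = X.shape
    f, _, _, nC = W.shape
    Ho = (H - 1) * stride + f
    Wo = (Wi - 1) * stride + f
    out = np.zeros((Ho, Wo, nC))
    for i in range(H):
        for j in range(Wi):
            for c in range(nCp):
                # Each input value scatters a weighted copy of its channel's
                # 2D filter slice into the output; contributions from all
                # n_C_prev channels accumulate in the same output region.
                out[i*stride:i*stride+f, j*stride:j*stride+f, :] += \
                    X[i, j, c] * W[:, :, c, :]
    return out
```

With stride equal to the filter size the scattered patches do not overlap, which makes small cases easy to check by hand.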


It is not easy to find a good image, but maybe this one will help:

(image from "Exploring Efficient Acceleration Architecture for Winograd-Transformed Transposed Convolution of GANs on FPGAs", Electronics)


So when we have multiple channels, we apply what we did in the lecture videos to each channel separately, then sum the results along the channel axis to get a 2D matrix?

Yes, you can do that. Or you can process all the channels at once by applying the full 3D filter to the 3D input volume. You end up with the same result either way; it is an implementation detail.
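That equivalence is easy to check numerically. Below is a sketch with a single 3D filter and random data; all shapes and names are chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
f, s, H, nCp = 3, 2, 4, 3
X = rng.standard_normal((H, H, nCp))   # input volume, H x H x n_C_prev
W = rng.standard_normal((f, f, nCp))   # one 3D filter (single output channel)
Ho = (H - 1) * s + f

# Option A: transpose-convolve each channel in 2D, summing across channels.
out_a = np.zeros((Ho, Ho))
for c in range(nCp):
    for i in range(H):
        for j in range(H):
            out_a[i*s:i*s+f, j*s:j*s+f] += X[i, j, c] * W[:, :, c]

# Option B: scatter the full 3D filter at once; W @ X[i, j] contracts the
# channel axis, producing the same f x f patch as summing per-channel patches.
out_b = np.zeros((Ho, Ho))
for i in range(H):
    for j in range(H):
        out_b[i*s:i*s+f, j*s:j*s+f] += W @ X[i, j]

print(np.allclose(out_a, out_b))
```

Both loops accumulate identical patches, so the two options agree to floating-point precision.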
