Week 4 question about the padding of Conv2DTranspose

I read the question in the following link.

In the answer in that link, the mentor said that "same" padding is used so the volume doesn't shrink during upsampling. I understand that this is true for the Conv2D's padding in the upsampling block, but is it also true for the Conv2DTranspose's padding? I think that in a transposed convolution we can't get an output smaller than the input tensor, so why is "same" padding used in Conv2DTranspose?

Your understanding is correct: a Conv2DTranspose output can't be smaller than its input. As for the padding parameter, this link should help you understand how the output size is a scaled version of the input dimension: with padding="same" the output length is exactly input_length × stride, whereas with padding="valid" it can be slightly larger.
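To make that concrete, here is a small sketch of the output-length formula Keras applies to each spatial axis of a transposed convolution (assuming no explicit output_padding). The function name `deconv_output_length` is borrowed from the Keras utility of the same name, but this is a simplified re-implementation for illustration, not the library code:

```python
def deconv_output_length(input_length, kernel_size, stride, padding):
    """Output length along one axis of a transposed convolution.

    Mirrors the formula Keras uses when output_padding is None.
    """
    if padding == "same":
        # Output is exactly the input scaled by the stride.
        return input_length * stride
    elif padding == "valid":
        # Output is the scaled input plus any kernel overhang.
        return input_length * stride + max(kernel_size - stride, 0)
    raise ValueError(f"unsupported padding: {padding!r}")


# A 4x4 feature map upsampled with a 3x3 kernel and stride 2:
print(deconv_output_length(4, kernel_size=3, stride=2, padding="same"))   # 8
print(deconv_output_length(4, kernel_size=3, stride=2, padding="valid"))  # 9
```

Note that in both cases the output is at least as large as the input; padding="same" just guarantees the clean input_length × stride relationship, which keeps the shapes in the upsampling path aligned with the corresponding downsampling layers.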