I understand how transposed convolution is applied to an input, and that its purpose is upsampling, i.e. to increase the height and width of the input.

In the U-Net assignment, we use transposed convolution (Conv2DTranspose) with the padding='same' argument. The TensorFlow documentation says:

“same” results in padding with zeros evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input.

If our purpose is to upsample, why are we using the 'same' padding option?

Answering my own question. I did the following experiment:
Created a 3x3 tensor (X).
Applied a transposed convolution to X with a 2x2 kernel, strides of 2, and padding set to 'same'.
The output turns out to be 6x6.
So 'same' doesn't have the same meaning for transposed convolution as it does for regular convolution: instead of keeping the output the same size as the input, it makes each output dimension exactly input x stride (here 3 x 2 = 6), independent of the kernel size.

import tensorflow as tf

# 3x3 input
X = tf.Variable([[1, 3, 0], [4, 5, 7], [1, 0, 2]], dtype=tf.float32)
# Add batch and channel dimensions: (1, 3, 3, 1)
X = X[tf.newaxis, ..., tf.newaxis]
# Fix the kernel so the result is reproducible
custom_kernel = tf.constant_initializer([[0, 1], [2, 3]])
output = tf.keras.layers.Conv2DTranspose(
    filters=1, kernel_size=2, strides=(2, 2), padding='same',
    kernel_initializer=custom_kernel)(X)
print(X.shape)       # (1, 3, 3, 1)
print(output.shape)  # (1, 6, 6, 1)
print(output)
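For reference, the output-size arithmetic can be written down directly. This small helper is a sketch (the function name `conv2d_transpose_out` is my own, and the formulas mirror the per-axis shape rules described in TensorFlow's Conv2DTranspose documentation); it shows why 'same' gives 6 here, and what 'valid' would give instead:

```python
def conv2d_transpose_out(n, kernel, stride, padding):
    """Output height/width of a transposed conv along one spatial axis.

    Mirrors TensorFlow's shape rules (an assumption, per the docs):
      'same'  -> n * stride                         (kernel size irrelevant)
      'valid' -> n * stride + max(kernel - stride, 0)
    """
    if padding == 'same':
        return n * stride
    return n * stride + max(kernel - stride, 0)

# The experiment above: 3x3 input, 2x2 kernel, stride 2
print(conv2d_transpose_out(3, kernel=2, stride=2, padding='same'))   # 6
print(conv2d_transpose_out(3, kernel=2, stride=2, padding='valid'))  # 6
print(conv2d_transpose_out(3, kernel=3, stride=2, padding='valid'))  # 7
```

Note that with a 2x2 kernel and stride 2 the two modes happen to coincide; the padding mode only changes the output size once the kernel is larger than the stride.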