Why do we use padding='same' in a Conv2DTranspose layer? To my understanding, the idea of padding is to keep the output the same dimension as the input (as explained in the TensorFlow reference quoted below). That seems to be the opposite of what we are trying to do when upsampling with a transposed convolution.
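For example, here is a minimal sketch of the behavior I mean (using tf.keras, as in the docs quoted below): even with padding='same', strides=2 doubles the height and width.

```python
import tensorflow as tf

x = tf.random.normal((1, 8, 8, 3))  # (batch, height, width, channels)

# padding='same' with strides=2 still upsamples: output size = stride * input size
up = tf.keras.layers.Conv2DTranspose(filters=16, kernel_size=3,
                                     strides=2, padding='same')
print(up(x).shape)  # (1, 16, 16, 16)
```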
What is the difference between 'padding' and 'output_padding'?
Is my previous question related to this difference?
Could anybody please explain this difference in more detail?
Many thanks!
Taken from tf.keras.layers.Conv2DTranspose | TensorFlow Core v2.4.1:

padding: one of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input.
output_padding: An integer or tuple/list of 2 integers, specifying the amount of padding along the height and width of the output tensor. Can be a single integer to specify the same value for all spatial dimensions. The amount of output padding along a given dimension must be lower than the stride along that same dimension. If set to None (default), the output shape is inferred.
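To illustrate where output_padding seems to come in (again a sketch assuming tf.keras as quoted above, with kernel_size=3 and strides=2): a stride-2 convolution maps both a 13x13 and a 14x14 input to the same 7x7 output, so the transposed convolution cannot infer the original size on its own, and output_padding picks which one to reconstruct.

```python
import tensorflow as tf

# Forward direction: two different input sizes collapse to the same output size.
conv = tf.keras.layers.Conv2D(filters=3, kernel_size=3, strides=2, padding='same')
print(conv(tf.random.normal((1, 13, 13, 16))).shape)  # (1, 7, 7, 3)
print(conv(tf.random.normal((1, 14, 14, 16))).shape)  # (1, 7, 7, 3)

# Transposed direction: output_padding chooses which of those shapes to produce.
x = tf.random.normal((1, 7, 7, 3))
for op in (0, 1):  # must be lower than the stride, per the docs above
    up = tf.keras.layers.Conv2DTranspose(filters=16, kernel_size=3, strides=2,
                                         padding='same', output_padding=op)
    print(op, up(x).shape)  # 0 -> (1, 13, 13, 16); 1 -> (1, 14, 14, 16)
```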