Convolution and Padding

I understood from video 1 of week 1 of the Convolutional Neural Networks course that convolution addresses the following challenge: working with high-resolution images (e.g. 1000 x 1000 x 3) in a fully connected network would require a very large number of parameters, which in turn demands a large amount of data to prevent overfitting and leads to long training times. For this reason convolution is used.

Then wouldn’t ‘same’ padding recreate the original problem by preserving the size of the input image?

Dear Bashar,

Thank you very much for your question. If you refer to TensorFlow’s documentation on padding, you will find the following:
“With 'SAME' padding, padding is applied to each spatial dimension. When the strides are 1, the input is padded such that the output size is the same as the input size.”
So what you mentioned about the output size is correct when the strides are set to one. Moreover, the documentation points out that:
“In the 2D case, the output size is computed as:

out_height = ceil(in_height / stride_height)
out_width  = ceil(in_width / stride_width)”
So, in short, the output dimensions depend on how you set the hyperparameters of your neural network architecture :slight_smile:
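As a quick sanity check, the output-size formula quoted above can be verified with a few lines of plain Python (the 1000 x 1000 input and the stride values below are just illustrative choices):

```python
import math

def conv2d_output_shape(in_height, in_width, stride_height, stride_width):
    """Spatial output shape of a conv layer with 'SAME' padding.

    With 'SAME' padding the output size does not depend on the
    filter size, only on the input size and the strides.
    """
    out_height = math.ceil(in_height / stride_height)
    out_width = math.ceil(in_width / stride_width)
    return out_height, out_width

# Stride 1: the spatial size is preserved (the situation in the question)
print(conv2d_output_shape(1000, 1000, 1, 1))  # (1000, 1000)

# Stride 2: the spatial size is halved, even with 'SAME' padding
print(conv2d_output_shape(1000, 1000, 2, 2))  # (500, 500)
```

This illustrates the point above: ‘same’ padding only preserves the input size when the strides are 1; with larger strides the output still shrinks.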
If you want to read more about padding, please use this link.

All the best,
Kiavash
