Convolutions: why pad with 0s instead of "repeat"?

I didn’t see this answered anywhere, but why pad an image with all 0s around the border? Dr. Ng just said "we commonly pad by setting those inputs to 0."

It seems like that distorts the “intent” of the original image too much. For example, if your network learned an edge detector, it would detect an edge there, but there isn’t really one, just the artificial edge created by the zero border at the image boundary.

Why not use a “repeat” pattern instead, where the border just repeats the pixel value it lies against? It seems like that would be more representative for most images. Thanks.
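
To illustrate the concern, here is a rough NumPy sketch (my own, not from the course): an image with no real edges gets a strong "edge" response along its border under zero padding, but not under edge-replication ("repeat") padding.

```python
import numpy as np

# A featureless image: every pixel is 100, so there are no real edges.
img = np.full((4, 4), 100)

# Pad by one pixel in two different ways.
zero_pad = np.pad(img, 1, mode='constant')  # border of 0s
edge_pad = np.pad(img, 1, mode='edge')      # border repeats the nearest pixel

# A simple horizontal-gradient "edge detector" kernel.
kernel = np.array([[-1, 0, 1]])

def correlate_valid(x, k):
    """Plain cross-correlation over the valid region (no further padding)."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

print(correlate_valid(zero_pad, kernel))  # +/-100 responses along the left/right border
print(correlate_valid(edge_pad, kernel))  # all zeros: no spurious edge
```

So the zero border does create an artificial edge response that simply isn't in the original image, which is what prompted the question.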

I think if you pad each image with 0s (I believe he uses 0s to have minimal effect overall), then the padded region is the same for every image, like a shared background, and won’t become a distinctive feature that distinguishes one image from another.

If you instead extend whatever each image has at its borders, then different images end up with different borders, which increases the chance that the border itself becomes a distinctive feature the model learns from and that affects its output.
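
To make that concrete, here is a tiny NumPy sketch (my own illustration, not from the lectures): with zeros every image gets an identical border, while with edge-replication the border depends on each image's content.

```python
import numpy as np

img_a = np.array([[5, 5], [5, 5]])
img_b = np.array([[9, 1], [1, 9]])

# Zero padding: both images get an identical border, regardless of content.
print(np.pad(img_a, 1, mode='constant'))
print(np.pad(img_b, 1, mode='constant'))

# "Repeat" (edge) padding: the border copies each image's own pixels,
# so it varies from image to image and could itself be picked up as a feature.
print(np.pad(img_a, 1, mode='edge'))
print(np.pad(img_b, 1, mode='edge'))
```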


In addition to @gent.spah’s excellent reply:
Padding with zeros has some nice characteristics:

  • It’s quick to implement.
  • We don’t have to decide how many pixels to copy to create a repeating pattern.
  • The choice to pad with zeros seems to have very little impact, as there tends to be little useful information near the edges of an image. This is largely due to how images tend to be composed when they are created.
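
As an aside for anyone who wants to experiment: some frameworks let you pick the padding scheme directly. For example, PyTorch's nn.Conv2d accepts a padding_mode argument, so a rough sketch of comparing the two could look like this (not part of the course assignments):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 8, 8)  # one 3-channel 8x8 image

# Same layer configuration, two padding schemes.
conv_zeros = nn.Conv2d(3, 16, kernel_size=3, padding=1, padding_mode='zeros')
conv_repeat = nn.Conv2d(3, 16, kernel_size=3, padding=1, padding_mode='replicate')

print(conv_zeros(x).shape)   # torch.Size([1, 16, 8, 8])
print(conv_repeat(x).shape)  # torch.Size([1, 16, 8, 8])
```

In practice, you could train with each mode and compare; the difference is usually small, for the reasons listed above.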