The meaning of "same conv"?

I am watching the “Week 2 >> Classic Networks” lecture. The professor mentioned “same conv”. Maybe I missed something from the previous classes, so I want to double-check:

Does “same conv” just mean that the size of the output will be the same as the input (achieved by padding only)? In the picture below, the input is 27x27x96 and the output is 27x27x256, so both have the same spatial size of 27 (the number of filters doesn’t matter here). Am I correct? Thank you.

Yes, I agree with you on the meaning of ‘same convolution’. The output size is ⌊(n + 2p - f)/s⌋ + 1, so it also depends on the stride s. If you google 'what is same convolution', the first result has a bit of explanation and says that s is typically set to 1; if that is the case, then ‘by padding only’ makes sense.
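
To make that concrete, here is a minimal sketch of that formula in plain Python (the choice p = (f - 1)/2 is just the usual ‘same’ padding for an odd filter size, not something from the lecture slide):

```python
def conv_output_size(n, f, p, s):
    """Output height/width of a convolution: floor((n + 2p - f) / s) + 1."""
    return (n + 2 * p - f) // s + 1

# "Same" padding at stride 1: for an odd filter size f, choose p = (f - 1) / 2
# and the output size equals the input size.
n, f = 27, 5
p = (f - 1) // 2                         # p = 2 for a 5x5 filter
print(conv_output_size(n, f, p, s=1))    # 27 -> same as the input
print(conv_output_size(n, f, p, s=2))    # 14 -> stride 2 shrinks it even with padding
```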

Oh, allow me to rephrase then: so “same conv” means the output size will be the same as the input size via padding + striding, but the number of channels doesn’t count here. Is this right? Thank you.

Yes, I think the number of channels does not count; ‘same’ here only refers to the spatial size. The number of output channels depends solely on the number of filters.
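
If it helps, here is a quick sketch (assuming TensorFlow’s Keras Conv2D API, with the shapes from the example above) showing that h and w stay at 27 while the channel count is set by the number of filters:

```python
import tensorflow as tf

# One 27x27 input with 96 channels (batch dimension of 1).
x = tf.random.normal([1, 27, 27, 96])

# 256 filters of size 5x5, stride 1, "same" padding.
layer = tf.keras.layers.Conv2D(filters=256, kernel_size=5, strides=1, padding="same")

print(layer(x).shape)  # (1, 27, 27, 256): h and w unchanged, channels = number of filters
```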


Right, the channels are independent. But be sure not to miss the key point that Kezhen made here:

If you specify “same” padding in TensorFlow, the h and w dimensions actually stay the same only when the stride is 1. If the stride is > 1, “same” padding only guarantees an output size of ceil(n / s), so the h and w dimensions are still reduced. Here’s a thread which discusses this in a bit more detail.
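
As a quick illustration (again assuming tf.keras.layers.Conv2D), the same layer with stride 2 no longer preserves h and w:

```python
import tensorflow as tf

x = tf.random.normal([1, 27, 27, 96])

# Same filters and "same" padding, but stride 2: the output height/width
# becomes ceil(27 / 2) = 14 rather than staying at 27.
layer = tf.keras.layers.Conv2D(filters=256, kernel_size=5, strides=2, padding="same")

print(layer(x).shape)  # (1, 14, 14, 256)
```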


Here is another thread that might be interesting for you: How to Calculate the Convolution? - #2 by Christian_Simonis

Best
Christian
