How can the convnet always end up with a particular dimension?

A bit off-topic, but how can the convnet always end up with a particular dimension, say 3×3×8, for varying image dimensions?
Do we need an image preprocessing step in place before feeding images to the convnet?

A convolutional layer does not adapt to its input. To get a specific output dimension, you need to give it a specific input dimension. If your images vary in size, a preprocessing step can standardize them to the same required dimensions.
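
Here is a minimal sketch of that idea (assuming PyTorch, which this thread does not specify): the same conv layer gives different output shapes for different input sizes, but resizing everything to one fixed size first makes the output shape constant.

```python
import torch
import torch.nn as nn

# A single conv layer: 3 input channels -> 8 output channels, 3x3 kernel, stride 2.
conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, stride=2)

# Two "images" of different sizes: the output spatial dims differ with the input.
for h, w in [(64, 64), (128, 96)]:
    x = torch.randn(1, 3, h, w)
    print(conv(x).shape)  # e.g. (1, 8, 31, 31) vs (1, 8, 63, 47)

# Preprocessing step: resize everything to one fixed input size (64x64 here),
# so the network always ends up with the same output dimensions.
for h, w in [(64, 64), (128, 96)]:
    x = torch.randn(1, 3, h, w)
    x = nn.functional.interpolate(x, size=(64, 64))
    print(conv(x).shape)  # identical shapes now: (1, 8, 31, 31)
```

In practice you would do the resize (or crop) in your data-loading pipeline rather than inside the model, but the principle is the same.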

@Thala, I have split this into a new topic for you, but next time, please open a new thread for a different question.

Sure… Thanks!