Before I ask, I am grateful to have a platform like this with tonnes of passionate people and mentors. I'm not sure whether my question is good enough or valid to be posted.
So, suppose I'm using a hypothetical CNN for facial recognition: how do I decide on the filter size (kernel size) if the faces in my training set images are not really at the center? I mean, the faces are sometimes offset along the width or height from the image center. Or does it not matter whether the faces in the training images are centered or not?