Is pooling the reason for rotation invariance?

I am trying to make sense of how and why things work, just building a mental model. For rotation invariance, the NN needs to give a similar result for several rotated variants of the same input. Pooling removes the detail of the source location (where it spotted the feature), so a nose in the top left or the bottom left of a pooling window should produce the same pooled output.
This is a somewhat generic and at the same time quite specific question.
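The pooling intuition above can be sketched with plain NumPy. This is a toy example, assuming a 2×2 max pool with stride 2 and a single-channel feature map where one strong activation stands in for "the nose":

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 on a 2-D feature map."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# Strong activation ("the nose") in one corner of a pooling window...
a = np.zeros((4, 4))
a[0, 0] = 1.0

# ...and the same activation shifted one pixel, still inside that window.
b = np.zeros((4, 4))
b[1, 1] = 1.0

print(max_pool_2x2(a))
print(max_pool_2x2(b))
# Both print the same pooled map: the exact position inside the
# window is discarded, which is the (local) translation invariance.
```

Note the invariance only holds while the shift stays within one pooling window; a larger shift moves the activation into a different output cell.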

A NN is sensitive to rotation. Pooling gives you only local translation invariance, not rotation invariance.
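A toy sketch of why pooling alone does not absorb a rotation, again assuming a hypothetical 2×2 max pool on a single feature map:

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 on a 2-D feature map."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# One activation in the top-left pooling window.
x = np.zeros((4, 4))
x[0, 0] = 1.0

pooled = max_pool_2x2(x)
pooled_rot = max_pool_2x2(np.rot90(x))  # rotate the map 90 degrees

print(np.array_equal(pooled, pooled_rot))  # False
# The rotation moved the activation into a different pooling window,
# so the pooled outputs differ: the rotation survives pooling.
```

In a real CNN the sensitivity is even stronger, because the convolution filters themselves are orientation-specific and respond differently to a rotated pattern before pooling is ever applied.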

Thanks a lot! Is that still the case if the CNN is also trained on rotated images?
Google gave me pages that say both that it is and that it is not.
ChatGPT says: "While pooling can contribute to rotation invariance indirectly by capturing spatial relationships and generalizing local features, it is not specifically designed to address rotation invariance."

You’re welcome. Please see this link on data augmentation.
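For what it's worth, a minimal sketch of what rotation augmentation might look like. This is an assumption about the approach, using only 90-degree NumPy rotations; real pipelines (and likely the linked article) also apply small arbitrary angles with interpolation:

```python
import numpy as np

def augment_with_rotations(images):
    """Return the batch plus 90/180/270-degree rotations of each image.

    `images` is a hypothetical batch of shape (N, H, W). Training on the
    augmented batch teaches the network to tolerate these rotations; it
    does not make the architecture itself rotation-invariant.
    """
    rotated = [np.rot90(images, k=k, axes=(1, 2)) for k in range(4)]
    return np.concatenate(rotated, axis=0)

batch = np.random.rand(8, 32, 32)
augmented = augment_with_rotations(batch)
print(augmented.shape)  # (32, 32, 32): 4 rotations of 8 images
```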