When a model's predictions need to be explainable or interpretable, are there considerations to keep in mind when scaling or transforming features?

I’m curious whether there are known caveats and tradeoffs with complex feature engineering: it may produce a better-performing model, but also reduce the ability to give human-centered explanations.
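To make the concern concrete, here is a small illustrative sketch (my own example, not from any particular library's recommended workflow): standardizing features does not change what a linear model predicts, but it does change the coefficient values a human would read, so any explanation has to account for the transformation.

```python
import numpy as np

# Two features on very different scales (synthetic data for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) * np.array([1.0, 100.0])
y = X @ np.array([2.0, 0.05]) + rng.normal(scale=0.1, size=200)

def ols(A, b):
    # Ordinary least squares with an intercept; return slope coefficients only.
    A1 = np.column_stack([np.ones(len(A)), A])
    return np.linalg.lstsq(A1, b, rcond=None)[0][1:]

mu, sigma = X.mean(axis=0), X.std(axis=0)
coef_raw = ols(X, y)                    # per-unit effects in original units
coef_std = ols((X - mu) / sigma, y)     # effects per standard deviation

# Same model, different human-facing numbers: dividing the standardized
# coefficients by sigma recovers the raw per-unit coefficients.
print(coef_raw)
print(coef_std / sigma)
```

For an invertible transformation like standardization the explanation can be mapped back to original units, as above; the harder cases the question points at are non-invertible or heavily engineered features, where no such clean back-mapping exists.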