Hi @vasyl.delta
I guess you are referring to this:
Features learned by DL often lack interpretability and explainability, and frequently have no physical or direct domain meaning; embeddings in hidden layers are a typical example.
As a theoretical example for DL:
… the model could learn how edges and contours form a "paw", "whiskers", or other features that are important for identifying a cat. Low-level features like edges are hierarchically combined and refined into more advanced patterns, which finally form objects and contribute to the classification of whether the picture shows a cat or not.
see also this thread: New 1000 images after model development (train/dev/test), where to add? - #11 by Christian_Simonis
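To make this a bit more concrete, here is a minimal sketch (not from the linked thread, just an illustration assuming PyTorch and torchvision are installed) of how you could peek at the intermediate activations of a pretrained CNN. Early layers tend to respond to low-level patterns like edges, while later layers encode more abstract features that are much harder to assign a direct meaning to:

```python
# Minimal sketch: inspecting intermediate CNN activations via forward hooks.
# ResNet-18 is used purely as an example model; depending on your torchvision
# version you may need models.resnet18(pretrained=True) instead.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

activations = {}

def hook(name):
    def fn(module, inputs, output):
        # Store the feature maps produced by this layer.
        activations[name] = output.detach()
    return fn

# Early layer (low-level patterns such as edges) vs. late layer (more abstract features).
model.layer1.register_forward_hook(hook("layer1"))
model.layer4.register_forward_hook(hook("layer4"))

# Dummy input standing in for a cat image (batch of 1 RGB image, 224x224).
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    model(x)

for name, feat in activations.items():
    print(name, tuple(feat.shape))  # e.g. layer1 -> (1, 64, 56, 56), layer4 -> (1, 512, 7, 7)
```

Looking at these activations (or visualizing them) is one common starting point for trying to interpret what a network has learned, but it rarely gives you a clean, human-readable meaning per feature.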
In reality, things are not always that easy to interpret when it comes to DL; in general, the explainability of hand-crafted features with a physical meaning is usually considerably higher.
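For contrast, a classic hand-crafted feature such as a Sobel edge filter has a direct physical interpretation: it approximates the image intensity gradient, i.e. "edge strength". A minimal sketch, assuming NumPy and SciPy, with a random array standing in for a real image:

```python
# Hand-crafted feature with a direct physical meaning: Sobel edge strength.
import numpy as np
from scipy import ndimage

image = np.random.rand(128, 128)     # placeholder grayscale image
gx = ndimage.sobel(image, axis=0)    # intensity gradient in the vertical direction
gy = ndimage.sobel(image, axis=1)    # intensity gradient in the horizontal direction
edge_strength = np.hypot(gx, gy)     # gradient magnitude: directly interpretable as edge intensity

print(edge_strength.shape, edge_strength.max())
```

Here every number has a clear meaning you can explain to a domain expert, which is exactly what learned embeddings usually lack.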
Also, this thread might be relevant for you:
Best regards
Christian