Research Leads for an Idea

So, in the last video of this course Andrew says that it’s better to let the algorithm learn its own features to accomplish the task, rather than making it conform to hand-designed representations chosen by the programmer, like phonemes. In this context, I was wondering whether there has been any instance where people have examined the intermediate features learned by an algorithm to see if they could help or improve research in that domain.
To elaborate, take the phoneme example. Have researchers or linguists working on speech recognition trained an end-to-end model and then inspected the intermediate features it learned, to check whether those features describe speech/language better than phonemes do?
Looking forward to your replies, and if you have any related material, be sure to drop it down below. Thanks.

Yes, considerable research effort has gone into investigating the intermediate features learned by algorithms, in order to improve understanding and research in a variety of disciplines, including speech and language processing.

While these approaches have demonstrated success in learning meaningful representations, deep neural network interpretability and understanding the learned features remain active areas of research. Techniques such as feature visualization, saliency maps, and attention mechanisms are used to analyze which parts of the input data the models focus on across different tasks. Understanding intermediate representations can yield useful insights and potentially lead to better models and applications.
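To make the saliency-map idea concrete, here is a minimal sketch. It uses a toy linear stand-in for a trained network (an assumption for illustration; in practice you would backpropagate through a real model with an autograd framework, or use a library such as Captum): the saliency score of each input feature is the magnitude of the output's gradient with respect to that feature.

```python
import numpy as np

# Toy "model": y = w . x, standing in for a network's logit for one class.
# (Hypothetical weights chosen for illustration only.)
w = np.array([0.1, -2.0, 0.5, 3.0])

def model(x):
    return w @ x

def saliency(x, eps=1e-4):
    """Numerical gradient of the output w.r.t. each input feature;
    the absolute gradient is the classic saliency score."""
    grad = np.zeros_like(x)
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        grad[i] = (model(xp) - model(xm)) / (2 * eps)
    return np.abs(grad)

x = np.array([0.3, -1.2, 0.8, 0.5])
s = saliency(x)
print(s)  # for a linear model this is just |w|, so feature 3 dominates
```

For a linear model the gradient is exactly `w`, so the map recovers `|w|`; for a deep network the same recipe (via autograd rather than finite differences) highlights which inputs, e.g. which time-frequency regions of a spectrogram, most influence the prediction.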

That said, much of this work is still in progress, and I haven’t come across specific algorithms or papers to point you to.