Nope, I do not agree with that statement in general; see also the sources below, where I explain why.
In my opinion, the tendency is:
- the bigger the data
- the more unstructured the data (videos, images)
- the less domain knowledge you can encode in features
- the more freedom or capacity your model needs to abstract really complex patterns
→ the stronger the benefits of deep neural networks with modern architectures and advanced layers become compared to traditional or classic ML. In addition, when training deep neural networks you can leverage modern digital infrastructure like GPU clusters to accelerate training (see the sketch right below this list).
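To make this concrete, here is a minimal sketch of such a network. I am assuming PyTorch here; the layer sizes, input resolution, and number of classes are made up purely for illustration:

```python
import torch
import torch.nn as nn

# Hypothetical example: a small CNN for 32x32 RGB images, 10 classes.
# Layer sizes are illustrative only.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolutional layer
    nn.ReLU(),
    nn.MaxPool2d(2),                             # pooling layer -> 16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                             # -> 8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                   # classifier head
)

# Leveraging GPU infrastructure is a one-line change:
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

x = torch.randn(4, 3, 32, 32, device=device)  # dummy batch of 4 images
print(model(x).shape)                         # torch.Size([4, 10])
```

The key point is the last part: moving the same model (and data) onto a GPU is trivial, which is what makes training on very large datasets scale so well.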
Basically, you have two approaches:
- hand-crafting your features with your domain knowledge. This often works well with classic ML and < 20 features; a small sketch of this approach follows after the list. See also: W1_Quiz_Large NN Models vs Traditional Learning - #4 by Christian_Simonis
- using deep learning, which in a way automates feature engineering by learning from lots of data. This often suits big and unstructured data, see also this thread. With that much data the model can learn abstract patterns: deep learning models with advanced architectures (like transformers, but also architectures with convolutional and pooling layers) are designed to perform well on very large datasets and to process highly unstructured data like pictures or videos in a scalable way. Basically: the more data, the merrier! Compared to classic ML models, DNNs impose less structure and can learn more complex and abstract relationships given a sufficient amount of data, see also this thread: Why traditional Machine Learning algorithms fail to produce accurate predictions compared to Deep Learning algorithms? - #2 by Christian_Simonis
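To illustrate the first approach, here is a minimal, self-contained sketch. I am assuming scikit-learn; the synthetic data simply stands in for features you would engineer from domain knowledge:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical tabular problem: 12 hand-crafted features (< 20),
# e.g. ratios or aggregates derived from domain knowledge.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 12))
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)  # toy target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Classic ML on the engineered features: small model, no GPU needed.
clf = make_pipeline(StandardScaler(), GradientBoostingClassifier())
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

On small, well-engineered tabular data like this, such a classic model is typically hard to beat with a DNN; the picture flips once the data gets big and unstructured.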
Hope that helps, @Amit_Misra1. All the best!
Best regards
Christian