Welcome to the community. First of all, let me highlight some things about the points you raised.
Neural networks can also cope with small datasets (and the resulting risk of high variance) by using “transfer learning”: reusing a network pretrained on a large dataset and fine-tuning it on the small one. I don’t recall it being discussed in the course, but I thought I’d mention it for your information. You can read more about it here.
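As a toy illustration of the idea (not anything from the course), here is a minimal numpy sketch: a “pretrained” feature extractor is kept frozen, and only a new output head is trained on a small target dataset. All names, sizes, and data here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained feature extractor: a frozen projection that
# represents layers learned on a large source dataset (hypothetical weights).
W_frozen = rng.normal(size=(20, 8)) * 0.3

def extract_features(X):
    # Frozen layers: we only run a forward pass, never update W_frozen.
    return np.tanh(X @ W_frozen)

# Small target dataset (simulated): 30 samples, 20 raw features.
X_small = rng.normal(size=(30, 20))
y_small = (X_small[:, 0] + X_small[:, 1] > 0).astype(float)

# Transfer learning step: train ONLY the new output head (a logistic
# regression on the frozen features), leaving the extractor untouched.
feats = extract_features(X_small)
w_head = np.zeros(8)
b_head = 0.0
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w_head + b_head)))
    grad = p - y_small                          # cross-entropy gradient
    w_head -= lr * feats.T @ grad / len(y_small)
    b_head -= lr * grad.mean()

preds = 1.0 / (1.0 + np.exp(-(feats @ w_head + b_head))) > 0.5
acc = (preds == y_small.astype(bool)).mean()
print(f"training accuracy with frozen features: {acc:.2f}")
```

Because only the small head is trained, there are far fewer free parameters to overfit with, which is exactly why transfer learning helps in the low-data regime.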
I don’t think I would say so. The networks discussed in the course are fairly small, and only a few architectures are covered (the standard fully-connected NN and an intro to CNNs), so there are only a few hyper-parameter choices to make, which can make tuning seem like an easy task. In reality, when we train a large (and perhaps new) NN architecture on a large dataset, the space of hyper-parameter choices can be enormous, and tuning becomes a genuinely difficult task.
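To make the combinatorics concrete, here is a small sketch counting how many full training runs even a modest grid search implies. The hyper-parameter names and candidate values are purely illustrative, not from the course:

```python
# Hypothetical search space for a modest NN (values are illustrative only).
search_space = {
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
    "num_layers": [2, 4, 8, 16],
    "hidden_units": [64, 128, 256, 512],
    "batch_size": [32, 64, 128],
    "dropout": [0.0, 0.1, 0.3],
    "optimizer": ["sgd", "adam", "rmsprop"],
}

# Each configuration is one full training run of the network.
n_configs = 1
for values in search_space.values():
    n_configs *= len(values)
print(n_configs)  # 4 * 4 * 4 * 3 * 3 * 3 = 1728 training runs
```

Six hyper-parameters with a handful of values each already give 1728 configurations; every extra hyper-parameter multiplies that count, which is why tuning large NNs is so costly.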
That’s it for your points. Now, let’s look at why one might prefer a traditional ML model over a NN:
- The first reason is training time. As NNs scale up, their training time also increases, and for many tasks such long training runs are unacceptable, for reasons such as the cost of training in the cloud, or the need to retrain the model periodically to counter data drift. In such scenarios, a traditional ML model may be a more suitable choice.
- The second reason is, as you mentioned, inference time. Many traditional ML models offer faster inference than NNs, at the cost of some accuracy.
- The third reason is storage requirements. As the size of a NN grows, so does the storage needed to hold it for inference. This is a clear disadvantage if we want to use a model on edge devices, without relying on the cloud, for reasons such as connectivity, privacy, etc.
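The inference-time and storage points can be sketched with a toy comparison (numpy only; the model sizes are made-up stand-ins, not real benchmarks): count the parameters and time a forward pass for a linear model versus a small MLP.

```python
import time

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(2_000, 100))  # simulated batch of 2k samples

# Traditional linear model: 100 weights + 1 bias.
w_lin = rng.normal(size=100)
b_lin = 0.0

# Small MLP, 100 -> 512 -> 512 -> 1 (a toy stand-in for a neural network).
W1, W2, W3 = (rng.normal(size=s) for s in [(100, 512), (512, 512), (512, 1)])

def linear_predict(X):
    return X @ w_lin + b_lin

def mlp_predict(X):
    h = np.maximum(X @ W1, 0)   # ReLU hidden layer 1
    h = np.maximum(h @ W2, 0)   # ReLU hidden layer 2
    return h @ W3

# Storage: parameter counts (proportional to model size on disk).
n_lin = w_lin.size + 1
n_mlp = W1.size + W2.size + W3.size
print(f"parameters: linear={n_lin}, MLP={n_mlp}")

# Inference time: one forward pass over the batch for each model.
t0 = time.perf_counter(); linear_predict(X); t_lin = time.perf_counter() - t0
t0 = time.perf_counter(); mlp_predict(X); t_mlp = time.perf_counter() - t0
print(f"inference: linear={t_lin * 1e3:.2f} ms, MLP={t_mlp * 1e3:.2f} ms")
```

Even this small MLP has thousands of times more parameters than the linear model, and its forward pass is correspondingly slower; real deep networks widen that gap further.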
I guess a clear trend is that when the data involved is unstructured (images, video, text, geospatial data, etc.), NNs are much more common, and they have done wonders there, indeed. But when the data is structured (primarily tabular), traditional ML models are the usual choice. That’s perhaps one of the major reasons why a typical data scientist’s routine involves far more traditional ML models than deep learning models: a majority of them deal with tabular data.
But at the same time, I would also note that active research is ongoing to improve NNs in all of the aforementioned aspects, and work from the last decade on techniques such as pruning, quantization, and knowledge distillation has made NNs considerably smaller and faster.
I hope this helps.