Hi,
Is there any good rule of thumb regarding the number of training examples needed to “start considering” Deep Learning as a solution?
From the lecture, I understand that it is hard to give a general answer and that it depends on the complexity of the problem. But I was wondering if we have at least a rough number for simple tasks.
I think it has more to do with the data and the kind of labels or type of learning you want to achieve than with the raw number of training examples. Deep learning can fit very complex non-linear decision boundaries, so it can be a solution to many problems, but it is not always the best one in terms of model complexity and the compute it requires.
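Just to illustrate that last point (this is my own sketch, not something from the course): on a small non-linear toy problem like scikit-learn's make_moons, a plain logistic regression baseline often lands surprisingly close to a small neural network, which is why it's worth trying the simple model first before reaching for deep learning.

```python
# My own quick illustration, not from the lecture: compare a simple
# baseline to a small MLP on a non-linear toy dataset.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# 1000 examples with two interleaving half-moons (a non-linear boundary)
X, y = make_moons(n_samples=1000, noise=0.25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

baseline = LogisticRegression().fit(X_train, y_train)
mlp = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000,
                    random_state=0).fit(X_train, y_train)

print("Logistic regression accuracy:", baseline.score(X_test, y_test))
print("Small MLP accuracy:         ", mlp.score(X_test, y_test))
```

The MLP usually wins here because the boundary really is non-linear, but the gap (and the extra compute) may or may not be worth it for your problem.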
I am not a practitioner of the art and all I know is what I've heard Prof Ng say in these courses. Based on all that, my interpretation is that there are no "rules of thumb" which are really very useful here. The best you can say is "the more complex the problem, the more data you probably need in order to get a good solution". Nothing concrete. You'll only know when you try and see what works. Of course obtaining data is potentially expensive and complicated, so there has been a lot of research on how to get the most out of limited data. I don't have a concrete example at hand, but I remember Prof Ng talking about projects like that several times over the past couple of years in The Batch newsletter. You might have a look through the archives there and maybe find some of his comments on the subject.
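One cheap way to "try and see what works" is to plot a learning curve: train on increasing fractions of the data you already have and watch the validation score. If it is still climbing at the full training size, more data would probably help; if it has flattened out, collecting more examples is probably not the bottleneck. Here is a rough sketch with scikit-learn (my own illustration; the dataset and model sizes are arbitrary):

```python
# My own sketch, not from the course: estimate whether more data would help
# by checking cross-validated accuracy at increasing training-set sizes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import learning_curve
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for "the data you already have"
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

sizes, train_scores, val_scores = learning_curve(
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
    X, y,
    train_sizes=np.linspace(0.1, 1.0, 5),  # 10% up to 100% of the data
    cv=5,
)

for n, score in zip(sizes, val_scores.mean(axis=1)):
    print(f"{n:5d} training examples -> mean CV accuracy {score:.3f}")
```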