Big data - threshold

Hi there,

In the week 1 lecture “Why neural network is taking off”, Andrew said that when we have a large data set (that is, when “m” is large), larger NNs perform better; when the data set is not so big, the ordering of methods is not known, and an SVM might outperform a NN, for example.

My question is: what is a rough threshold for this “m”? When do you consider a data set “large”?

Thank you for your help.

I would say m on the order of a couple of thousand is large.
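To make this concrete, here is a minimal sketch (assuming scikit-learn is installed) that trains an SVM and a small neural net on synthetic data at a few values of m and prints their test accuracy. The dataset, architecture, and hyperparameters are illustrative choices, not from the lecture, and the crossover point will vary a lot by task, so treat any single run as anecdotal rather than a definitive threshold.

```python
# Hedged sketch: compare an SVM and a small NN as the training-set
# size m grows. All settings here (make_classification parameters,
# hidden layer sizes, max_iter) are arbitrary illustrative choices.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC


def scores_at(m, seed=0):
    """Return (svm_accuracy, nn_accuracy) on a held-out test set
    after training each model on m examples."""
    X, y = make_classification(n_samples=m + 1000, n_features=20,
                               n_informative=10, random_state=seed)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=m, test_size=1000, random_state=seed)
    svm = SVC().fit(X_tr, y_tr)
    nn = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                       random_state=seed).fit(X_tr, y_tr)
    return svm.score(X_te, y_te), nn.score(X_te, y_te)


if __name__ == "__main__":
    for m in (100, 1000, 5000):
        svm_acc, nn_acc = scores_at(m)
        print(f"m={m:5d}  SVM={svm_acc:.3f}  NN={nn_acc:.3f}")
```

On real problems the gap Andrew describes usually only opens up once m is well past the low thousands, which matches the rough threshold above.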