In the video on “Why Deep Learning is Taking Off?,” there is a Performance vs. Labeled Data plot. Do the trends still hold today, or have very large NNs trained on large data hit a plateau? What is the latest and greatest scoop on this topic?
Yes, the trends still hold. More relevant data is always helpful for training a bigger neural network, and the continued demand for better models together with the growing availability of data indicates that there is no plateau at this point.
Progress is happening in two directions:
- Transfer learning: adapting an existing neural network, trained on a large dataset, to your particular problem (see the first sketch below).
- Knowledge distillation: building a smaller model using a larger model. This is useful when a model has to run on a device with limited capabilities (e.g., mobile) but we still need the metrics of interest to remain strong (see the second sketch below).
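
Here is a minimal transfer-learning sketch in PyTorch, assuming torchvision is installed. The frozen ResNet-18 backbone is standard; the 5-class head and learning rate are purely illustrative, not from the original answer:

```python
import torch
import torchvision

# Load a ResNet-18 pretrained on ImageNet via torchvision's weights API.
weights = torchvision.models.ResNet18_Weights.DEFAULT
model = torchvision.models.resnet18(weights=weights)

# Freeze the pretrained feature extractor so its weights stay fixed.
for p in model.parameters():
    p.requires_grad = False

# Replace the final classification layer for a hypothetical 5-class task;
# only this new layer will be trained on our (smaller) dataset.
model.fc = torch.nn.Linear(model.fc.in_features, 5)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

With few labeled examples, training only the new head is usually enough; with more data, you can later unfreeze some top layers and fine-tune at a lower learning rate.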
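
And here is a minimal distillation loss sketch, also in PyTorch. The `teacher`/`student` names, temperature `T`, and blend weight `alpha` are illustrative assumptions; the technique itself (soft targets from the larger model guide the smaller one) is the standard Hinton-style recipe:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T=4.0, alpha=0.5):
    """Blend the usual hard-label loss with a soft-target loss that pushes
    the student's temperature-softened output toward the teacher's."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # T^2 keeps the soft-loss gradient scale comparable
    return alpha * hard + (1 - alpha) * soft

# One training step (teacher frozen, student learns):
# with torch.no_grad():
#     teacher_logits = teacher(x)
# loss = distillation_loss(student(x), teacher_logits, y)
# loss.backward(); optimizer.step()
```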