What caused ML to be so successful in recent years?

In the first week, Andrew Ng presented a slide where he concluded that the recent success of ML is due to:

  • the large amounts of data that are available and

  • the size of the neural networks that can be trained

I’d like to know whether researchers were on the right track, say, 20 years ago, but didn’t see as much success simply because they couldn’t build big enough networks.
Or is it also due to some new understanding of neural networks?

As far as I know, neural network theory was already in place by the 90s, so it seems data and computing power were the limiting factors.

Hello @Bharat_Purohit, welcome to the DeepLearning.AI community. Thanks for your post.

As for the limiting factors at that time, I think the available resources simply weren’t powerful enough. Nowadays we have powerful GPUs that can train neural networks much faster. The reach of information and resources available over the internet was also far smaller than it is now, and data collection and documentation were not nearly as prevalent. As for new understanding: ML is a very fast-growing field, and every day we see new results where ML beats ML itself; that cycle will keep going.

Happy learning!

Regards,
@Amit_Shukla


I’d like to add a bit to this thread.

AI generated a lot of hype in the 1950s, and governments allocated funds to grow the field. However, due to the lack of success (measured against the expectations of that time), funding was largely cut by the early 1970s.

It was in 1998 that AI regained broader interest, when Yann LeCun published the famous LeNet-5, building on the convolutional neural network idea he had introduced in 1989. This could be called the rebirth of deep learning. And as Prof. Ng mentioned, the explosion of data and computing power was instrumental in this rebirth of AI.
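For anyone curious what LeNet-5 actually looks like, here is a minimal sketch. It assumes PyTorch (not mentioned in this thread) and swaps the 1998 paper’s tanh activations and average pooling for the ReLU and max pooling common today; the layer dimensions follow LeCun et al. (1998). It is an illustration, not the original implementation.

```python
import torch
import torch.nn as nn

class LeNet5(nn.Module):
    """A LeNet-5-style CNN for 32x32 grayscale digit images.

    Sketch only: layer sizes follow LeCun et al. (1998), but ReLU and
    max pooling replace the paper's tanh and average pooling.
    """
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 1x32x32 -> 6x28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                  # -> 6x14x14
            nn.Conv2d(6, 16, kernel_size=5),  # -> 16x10x10
            nn.ReLU(),
            nn.MaxPool2d(2),                  # -> 16x5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                     # -> 400 features
            nn.Linear(16 * 5 * 5, 120),
            nn.ReLU(),
            nn.Linear(120, 84),
            nn.ReLU(),
            nn.Linear(84, num_classes),       # 10 digit classes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Quick shape check on a dummy batch of 32x32 grayscale images.
if __name__ == "__main__":
    model = LeNet5()
    out = model(torch.randn(4, 1, 32, 32))
    print(out.shape)  # torch.Size([4, 10])
```

At roughly 60k parameters, a network like this was near the limit of what 1990s hardware could train, which ties back to the original question: the ideas were there, but scale was the bottleneck.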
