As per the course, Traditional AI and Modern AI differ in their use of Artificial Neural Networks. I would like to know: is this the only difference? The course explained that Artificial Intelligence as of today comprises tools related to Machine Learning, Deep Learning (neural networks) and intersects with Data Science tools. So, what were the tools that contributed to Traditional AI? Based on my limited search, I was able to read that it was more about if-then-else rules and fuzzy logic.
Often things are not absolutely black or white, but let me try to answer your question:
Classic ML tooling often involves libraries like scikit-learn (a Python library) or Julia for physics-informed AI; see also this thread: https://community.deeplearning.ai/t/non-linear-regression/313343/3
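To make the classic-ML workflow concrete, here is a minimal sketch using scikit-learn. The data and the hand-crafted feature are invented for illustration: domain knowledge (braking distance grows roughly with speed squared) is encoded as a feature, and a simple model is fit on top of it.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy example: predict braking distance from speed.
# Domain knowledge says distance grows roughly with speed squared,
# so we hand-craft that feature instead of letting a model discover it.
speed = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
distance = 0.05 * speed**2            # synthetic ground truth

X = (speed**2).reshape(-1, 1)         # engineered feature: speed^2
model = LinearRegression().fit(X, distance)

pred = model.predict(np.array([[60.0**2]]))[0]
print(round(pred, 1))                 # close to 0.05 * 3600 = 180.0
```

The point is that the model itself stays simple and data-efficient because the hard work went into the feature.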
I also agree with you that some classic control-theoretic models, such as observer approaches (Luenberger observers, Kalman filters, particle filters), as well as fuzzy logic and recommender systems, are often associated with classic AI. Control-theoretic modeling and robotics in particular often involve, or originate from, MATLAB/Simulink or C/C++.
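To give a feel for the observer idea, here is a minimal 1-D Kalman filter in plain NumPy that estimates a constant value from noisy measurements. All values, noise levels and the constant-state model are illustrative assumptions for this sketch, not from the course.

```python
import numpy as np

def kalman_1d(measurements, process_var=1e-5, meas_var=0.01):
    """Scalar Kalman filter tracking a (nearly) constant state."""
    x_est, p_est = 0.0, 1.0                  # initial estimate and its variance
    estimates = []
    for z in measurements:
        p_pred = p_est + process_var         # predict: uncertainty grows
        k = p_pred / (p_pred + meas_var)     # Kalman gain: trust in measurement
        x_est = x_est + k * (z - x_est)      # update: blend prediction and z
        p_est = (1.0 - k) * p_pred
        estimates.append(x_est)
    return estimates

rng = np.random.default_rng(42)
true_value = 1.25
noisy = true_value + 0.1 * rng.standard_normal(50)
estimates = kalman_1d(noisy)
```

After a few dozen noisy measurements, `estimates[-1]` sits much closer to the true value than any single measurement typically does.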
Deep learning frameworks are often Python-based and leverage libraries like Keras, JAX, PyTorch etc., together with big data. If such big data is available, it can be combined with cloud-native approaches to manage it well without having to invest too much in infrastructure (CapEx) upfront. For example, Databricks is a popular platform for utilizing Spark for distributed data processing.
Here you can find a thread that could be interesting for you: https://community.deeplearning.ai/t/deep-learning-is-a-small-part-of-ai/163383/6
So in conclusion, you basically have two approaches:
- classic ML: hand-crafting your features with your domain knowledge. This often works well with limited data and if you have domain knowledge that you can distill into < 20 features, see also: W1_Quiz_Large NN Models vs Traditional Learning - #4 by Christian_Simonis
- deep learning, which more or less automates feature engineering using lots of data. This often suits big and unstructured data, see also this thread. With tons of data, the model can learn abstract patterns: deep learning models with advanced architectures (like transformers leveraging multi-head attention) are designed to perform well on very large datasets and to process highly unstructured data like text / natural language, pictures or videos in a scalable way. Basically: the more data, the merrier! Compared to classic ML models, DNNs possess less imposed structure and can learn more complex and abstract relationships given a sufficient amount of data, see also this thread: Why traditional Machine Learning algorithms fail to produce accurate predictions compared to Deep Learning algorithms? - #2 by Christian_Simonis
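The "learned features" point of the second bullet can be sketched in plain NumPy: a tiny two-layer network learns XOR, a relation that no single linear feature captures, so the hidden layer has to discover a useful representation on its own. Architecture and hyperparameters here are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR is not linearly separable, so the hidden layer must *learn*
# a useful feature representation rather than being given one.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)              # hidden "features", learned from data
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backpropagation for the squared-error loss
    d_out = (out - y) * out * (1 - out)
    d_h = d_out @ W2.T * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)
```

The training loss drops over the iterations: nobody told the network which feature combinations matter, which is exactly the automation of feature engineering described above, just at toy scale.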
Both approaches are possible with neural networks in general, but the architectures as well as the amount of data and the number of learnable parameters are completely different.
Hope that helps!