There is some truth to that statement, e.g. when it comes to tree-based models: you can think of a random forest as a collection of many if-else statements.
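To make that concrete, here is a purely illustrative Python sketch (the feature names and thresholds are made up): a single decision tree's prediction really is just a nested if-else cascade like this, and a random forest simply combines the votes of many such trees trained on different subsets of the data.

```python
# Purely illustrative: hypothetical features and thresholds,
# mimicking what one tree in a random forest effectively does.
def tree_predict(petal_length: float, petal_width: float) -> str:
    if petal_length < 2.5:
        return "setosa"
    else:
        if petal_width < 1.75:
            return "versicolor"
        else:
            return "virginica"

print(tree_predict(petal_length=4.7, petal_width=1.4))  # -> "versicolor"
```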
I would suggest taking a look at this thread:
Machine Learning (ML) is a subfield of AI that teaches machines / computers to learn from data without being explicitly programmed (e.g. with hand-written rules). This becomes especially powerful on platforms with network effects, where more and more data emerges: new knowledge and information can be incorporated into the ML model through training, which often means the model keeps getting more powerful over time.
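To illustrate that "learning from data instead of being programmed with rules", here is a tiny sketch with made-up streaming data: no rule is coded by hand, the model is only shown examples and keeps incorporating new batches over time via partial_fit.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Made-up streaming scenario: no explicit rules are coded, the model
# just keeps updating its parameters as new batches of data arrive.
rng = np.random.default_rng(42)
model = SGDClassifier()

for step in range(10):
    X_batch = rng.normal(size=(50, 3))                   # new data arriving over time
    y_batch = (X_batch[:, 0] > 0).astype(int)            # pattern the model has to discover
    model.partial_fit(X_batch, y_batch, classes=[0, 1])  # incorporate the new information

print("learned coefficients:", model.coef_)
```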
To stay with our picture: the learning algorithm helps the model find the right thresholds and rules for the if-else statements in the random forest. Of course there are also more advanced ML models, like Gaussian processes, that are not just a collection of if-else statements but can also incorporate prior information for probabilistic modeling!
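If you want to see those learned thresholds yourself, here is a small sketch with scikit-learn (using the built-in iris data just as an example): after fitting, export_text prints the if-else rules the training algorithm has learned for one tree of the forest.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import export_text

# Small example on the iris dataset: the training algorithm learns
# the split features and thresholds, i.e. the if-else rules.
X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=10, max_depth=3, random_state=0)
forest.fit(X, y)

# Print the learned rules of the first tree in the forest.
feature_names = load_iris().feature_names
print(export_text(forest.estimators_[0], feature_names=feature_names))
```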
Deep Learning models with advanced architectures (like transformers, but also architectures with convolutional / pooling layers) are designed to perform well on very large datasets and to process highly unstructured data like images or videos in a scalable way: basically, the more data, the merrier!
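As a rough illustration of such an architecture, here is a minimal Keras sketch with convolutional and pooling layers; the input shape (28x28 grayscale) and the 10 output classes are just assumptions for the example.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical input shape (28x28 grayscale images) and 10 classes,
# just to illustrate the conv -> pooling -> dense pattern.
model = tf.keras.Sequential([
    layers.Conv2D(16, kernel_size=3, activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(pool_size=2),        # pooling reduces spatial resolution
    layers.Conv2D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),  # class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```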
Compared to classic ML models, DNNs impose less structure and can learn more complex and abstract relationships given a sufficient amount of data. However, if you have domain knowledge that you can model in a handful of features, classic ML can be very powerful too, especially if you only have a rather limited amount of data (which can typically be represented in structured tables); see also this thread: Why traditional Machine Learning algorithms fail to produce accurate predictions compared to Deep Learning algorithms? - #2 by Christian_Simonis
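And as a counterpart, a small sketch of the "classic ML on a handful of engineered features" situation: the data here is synthetic (the features and the hidden relationship are made up), just to illustrate that a gradient-boosting model can already do well on a small structured table.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a small structured table with a handful of
# domain-driven features (data and "hidden rule" are made up).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                  # 200 samples, 5 engineered features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # simple relationship hidden in the features

clf = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```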
Hope that helps, @Anu_Singh!
Best regards
Christian