How does deep learning address the bias-variance trade-off in a way that other machine learning models do not?
I’m not sure I understand the premise of the question. For starters, what do you mean by “other machine learning models”? There are many kinds of systems besides neural networks: anomaly detection systems, various forms of unsupervised learning (e.g. clustering algorithms), reinforcement learning, recommender systems, and so forth. In each of those cases you also have hyperparameter and algorithm tuning choices to make, as well as choices about how to select and prepare your training data, and those choices can have a big effect on how well your solution works (see the sketch below).
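Here is a minimal sketch of that point, assuming scikit-learn (not mentioned in the original answer) and a synthetic dataset: a decision tree’s `max_depth` is a hyperparameter that controls model complexity in much the same way an architecture choice does for a neural network, and a poor choice hurts cross-validated performance.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification data, just for illustration.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Sweep the tree depth: very shallow trees underfit, unconstrained trees
# can overfit, and cross-validation exposes both failure modes.
for depth in (1, 3, 10, None):  # None lets the tree grow until leaves are pure
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    scores = cross_val_score(tree, X, y, cv=5)
    print(f"max_depth={depth}: mean CV accuracy = {scores.mean():.3f}")
```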
There may be some things that are fairly specific to neural networks, e.g. the fact that the architecture of the network is so open to variation: you have nearly complete freedom to choose the complexity of the function you are learning. If you choose a model that is too complex, you can end up with variance (overfitting) problems; if you choose one that is too simplistic, you will have bias (underfitting) problems. But there are analogous ways to get bad results with other algorithms through poor hyperparameter choices, so this is not unique to deep learning.
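To make the neural-network version of that trade-off concrete, here is a minimal sketch, again assuming scikit-learn and toy data: a network that is much smaller than the problem needs underfits (high bias), while one that is far larger can overfit (high variance), which shows up as a gap between training and held-out accuracy.

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Noisy two-class toy problem; the noise makes overfitting possible.
X, y = make_moons(n_samples=300, noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Compare a tiny, a moderate, and an over-sized network.
for hidden in [(2,), (16,), (256, 256)]:
    net = MLPClassifier(hidden_layer_sizes=hidden, max_iter=2000, random_state=0)
    net.fit(X_train, y_train)
    print(f"hidden={hidden}: train={net.score(X_train, y_train):.3f}, "
          f"test={net.score(X_test, y_test):.3f}")
```

The specific layer sizes are illustrative, not a recipe; the point is only that the complexity knob exists for every model class, and the symptoms of turning it too far in either direction look the same.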