Is orthogonalization for all ML projects, or just deep learning projects?

This course is titled “Structuring Machine Learning Projects”, and one of the first things we learn is orthogonalization. But I wonder whether orthogonalization applies to all machine learning projects or just deep learning projects.

On the one hand, the concept of orthogonalization seems quite general: it is easier to optimize a model when there are different sets of actions we can take to change different aspects of the model, and the effects of these actions do not interfere with each other.

On the other hand, I remember Andrew saying that one of the characteristics that sets deep learning apart from other ML methods is that deep learning can avoid the bias-variance tradeoff (was it by training a bigger network to reduce bias and using more data to reduce variance?). If we are using ML methods that cannot avoid the bias-variance tradeoff, then we can’t improve bias or variance independently, so orthogonalization would not be possible.
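To make the idea concrete, here is a minimal sketch of the orthogonalized tuning loop as I understand it from the course: each diagnosis (high bias vs. high variance) maps to its own separate knob. The function name, error thresholds, and suggested actions are all illustrative assumptions, not anything from the course materials.

```python
def next_action(train_error: float, dev_error: float,
                target_error: float = 0.02, gap_tol: float = 0.01) -> str:
    """Orthogonalized tuning: each diagnosis maps to a distinct knob,
    so pulling one lever does not disturb the other.
    (Hypothetical helper; thresholds are illustrative only.)"""
    if train_error - target_error > gap_tol:
        # High bias: the model underfits even the training set.
        return "reduce bias: try a bigger network or train longer"
    if dev_error - train_error > gap_tol:
        # High variance: the model fails to generalize to the dev set.
        return "reduce variance: try more data or regularization"
    return "done: bias and variance both acceptable"
```

For example, `next_action(0.10, 0.11)` diagnoses high bias, while `next_action(0.02, 0.10)` diagnoses high variance; the point is that the two recommendations are independent levers.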

To my understanding, we may see orthogonalization as simply a way to reduce complexity, meaning it may be useful not only in traditional ML and deep learning, but anywhere you can separate a cause from its effect.
