I just completed week 4, specifically regarding decision trees, and appreciate the material. One question I have: how are decision trees a good choice for regression problems? I can see how they perform well for classification problems, but I just don't understand how a decision tree would come down to a floating-point prediction in a regression problem. Thank you.
As far as I know, they are for classification and tabular data!
Here is an example:
Thanks Tom, I appreciate the link. Is it fair to say that neural networks are better for regression problems in general?
General rules are seldom reliable. It depends on what you mean by “better”.
I missed the “regression” word here, thanks for clarifying!
Personally, I have had really good experience using decision trees for regression tasks, especially bagged ones, in particular random forests. After all, they are quite easy to interpret. Explainability is also good, and with structured data (that is not super huge) and a limited number of features, into which you can encode your domain knowledge, they are a really good alternative or benchmark to Gaussian processes in my opinion.
Boosted trees also often work well.
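To make the original question concrete: a regression tree produces a floating-point prediction by averaging the training targets that fall into each leaf, and a random forest averages those leaf means over many trees. A minimal sketch, assuming scikit-learn and NumPy are installed (the toy data and model settings here are illustrative, not from the course):

```python
# A regression tree predicts a float: the mean target of the leaf
# the input lands in. A random forest averages over many such trees.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))           # one tabular feature
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 200)   # noisy continuous target

tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

x_new = np.array([[2.5]])
print(tree.predict(x_new))    # a float: mean target in the matching leaf
print(forest.predict(x_new))  # a float: averaged across all trees
```

The splits are chosen to reduce the variance of the targets within each leaf (rather than class impurity, as in classification), which is why the same tree machinery handles continuous targets.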
I believe these threads might be relevant:
- Can decision tree algo used for regression? - #2 by Christian_Simonis
- Regression Trees ensemble - #3 by Christian_Simonis
Happy learning!
Best regards
Christian
Nope, I do not agree with that statement in general, see also these sources, where I explain why:
The tendency in my opinion is:
- the bigger the data
- the more unstructured the data (videos, images)
- the less domain knowledge you can encode in features
- the more freedom or capacity your model needs to abstract really complex patterns
→ the stronger the benefits of deep neural networks with modern architectures and advanced layers become compared to traditional or classic ML. Also, when training deep neural networks, you can leverage modern digital infrastructure like GPU clusters to accelerate the training.
Basically you have two approaches:
- hand-crafting your features with your domain knowledge. This often works well with classic ML and < 20 features, see also: W1_Quiz_Large NN Models vs Traditional Learning - #4 by Christian_Simonis
- using deep learning, which in a way automates feature engineering using lots of data. This often suits big and unstructured data, see also this thread. With tons of data, the model can learn abstract patterns: deep learning models with advanced architectures (like transformers, but also architectures with convolutional and pooling layers) are designed to perform well on very large datasets and to process highly unstructured data like pictures or videos in a scalable way: basically, the more data, the merrier! Compared to classic ML models, DNNs possess less built-in structure and can learn more complex and abstract relationships given a sufficient amount of data, see also this thread: Why traditional Machine Learning algorithms fail to produce accurate predictions compared to Deep Learning algorithms? - #2 by Christian_Simonis
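For completeness on the follow-up question: neural networks handle regression simply by ending in a linear output unit, so they also emit a float. A minimal sketch, assuming scikit-learn's `MLPRegressor` (the layer sizes and toy data are my own illustrative choices):

```python
# A small neural network for regression: hidden layers learn the
# feature representation, and a single linear output unit yields a float.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 500)

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X, y)
print(net.predict([[2.5]]))  # a float, just like the tree-based models
```

On a small, structured, one-feature problem like this, the network offers no obvious advantage over a tree ensemble, which illustrates the tendency described above: deep learning's benefits show up with large, unstructured data.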
Hope that helps, @Amit_Misra1. All the best!
Best regards
Christian