About the tree ensembles algorithm

In week 4, Andrew introduces the tree ensembles algorithm as follows:
Given training set of size π‘š
For 𝑏 = 1 to 𝐡:
Use sampling with replacement to create a new training set of size π‘š
Train a decision tree on the new dataset
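The steps above can be sketched in Python. This is a minimal illustration, not the course's reference implementation; the function name `bagged_trees` is mine, and I'm assuming scikit-learn's `DecisionTreeClassifier` for the per-tree training step:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagged_trees(X, y, B, seed=0):
    """Train B decision trees, each on a bootstrap sample of size m."""
    rng = np.random.default_rng(seed)
    m = len(X)
    trees = []
    for _ in range(B):
        # Sampling with replacement: draw m indices, duplicates allowed
        idx = rng.integers(0, m, size=m)
        tree = DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx])
        trees.append(tree)
    return trees
```

Each tree sees a slightly different dataset, which is what makes the ensemble's trees diverse.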

May I ask: in practice, should we restrict B to be an odd number? Otherwise, if the number of trees is even, there could be ties in voting.

Please tell me whether my intuition is right or wrong.

Hey @empheart,

Your intuition is on the right track: one common practice is indeed to use an odd number of trees in an ensemble, precisely to break ties in the voting process when making predictions.

With an odd number of trees and two classes, a tie cannot occur, since one class must receive a majority. For example, if you have three trees and they vote (Class A, Class B, Class A), then the majority is Class A. However, if you have an even number of trees, such as four, and they vote (Class A, Class B, Class A, Class B), then there’s a tie, and it becomes necessary to have additional mechanisms (like weighted voting or other tie-breaking strategies) to handle such situations.
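To make the tie concrete, here is a small sketch of hard majority voting; the `None` return for a tie is a hypothetical choice of tie handling, just to show where an even ensemble can get stuck:

```python
from collections import Counter

def majority_vote(votes):
    """Return the winning class, or None on a tie."""
    counts = Counter(votes).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # tie: possible with an even number of voters
    return counts[0][0]

print(majority_vote(["A", "B", "A"]))       # 3 trees -> "A"
print(majority_vote(["A", "B", "A", "B"]))  # 4 trees -> None (tie)
```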


Hello @empheart,

That is an interesting thought! Some decision tree packages may implement it that way, but in many commonly used packages, the votes accumulated from the trees are not integer counts but fractional values. Instead of counting how many trees vote "yes", the ensemble averages a fractional, probability-like value from each tree. The averaged value is then used to make the final yes/no prediction, so exact ties are unlikely.
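A toy example of this averaging, with made-up per-tree probabilities (for instance, the fraction of "yes" training labels in the leaf each tree lands in):

```python
import numpy as np

# Hypothetical probability of "yes" from each of 4 trees for one example.
# By hard votes this would be a 2-2 tie (two trees above 0.5, two below),
# but averaging the probabilities still yields a clear decision.
tree_probs = np.array([0.80, 0.40, 0.55, 0.35])

avg = tree_probs.mean()                      # 0.525
prediction = "yes" if avg > 0.5 else "no"    # -> "yes"
print(avg, prediction)
```

Because the averaged value is a real number, it lands exactly on 0.5 only in rare coincidences, so an even number of trees is not a problem in practice.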