Regression Trees leaf node output

Hi,

In Regression Trees, the professor mentioned that, when predicting the weight for a new test example, we take the average of the training examples that end up at the same leaf node. This means that many new examples will tend to get the same predicted weight, namely the average of all the training examples at that leaf, right? But is averaging a realistic technique?
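
For context, here is a minimal sketch of that prediction rule. Using scikit-learn's DecisionTreeRegressor is my assumption (the course builds trees by hand), and the toy weights below are made up so the leaf average comes out cleanly:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy training set: features are [ear shape (0 = pointy, 1 = floppy),
#                                 face shape (0 = not round, 1 = round)]
X_train = np.array([[1, 1], [1, 1], [1, 0],
                    [0, 1], [0, 1], [0, 0]])
y_train = np.array([18.0, 17.4, 16.0, 8.2, 8.5, 9.0])  # made-up weights

tree = DecisionTreeRegressor(max_depth=2).fit(X_train, y_train)

# A new floppy-eared, round-faced animal lands in the leaf that contains the
# training weights 18.0 and 17.4, so its prediction is their mean:
print(tree.predict([[1, 1]]))  # -> [17.7], i.e. (18.0 + 17.4) / 2
```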

Thanks

Averaging gives equal importance to all the examples at the leaf, so it avoids biasing the prediction toward any single training example; it should be better than giving priority to one example over the others.
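
One way to make that precise: the leaf average is not an arbitrary choice; it is the constant prediction that minimizes the squared error over the training examples at the leaf. A standard derivation (textbook calculus, not from the lecture):

$$
c^* = \arg\min_{c}\sum_{i=1}^{m}(y_i - c)^2, \qquad
\frac{d}{dc}\sum_{i=1}^{m}(y_i - c)^2 = -2\sum_{i=1}^{m}(y_i - c) = 0
\;\Rightarrow\;
c^* = \frac{1}{m}\sum_{i=1}^{m} y_i ,
$$

where $y_1, \dots, y_m$ are the weights of the $m$ training examples at that leaf.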

Yes, I understand that averaging helps avoid bias. But with this kind of prediction in regression trees, most of the new test examples will tend to get the same average weight, right?

For example, suppose we have 5 test examples:

    Ear shape   Face shape
1   pointy      round
2   floppy      round
3   floppy      round
4   pointy      not round
5   pointy      round

Now if we predict using this decision tree, we can see that examples 2 and 3 will get the same predicted weight of 17.7 (the average of the training weights at their leaf).

And examples 1 and 5 will both get the same predicted weight of 8.35.
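
To make the mechanics concrete, here is a hand-written sketch of a tree consistent with those numbers. The split order (ear shape first, then face shape) and the 9.2 value for the pointy / not-round leaf are my assumptions; only 17.7 and 8.35 come from the example above:

```python
def predict_weight(ear_shape: str, face_shape: str) -> float:
    """Leaf-average prediction from an assumed tree structure."""
    if ear_shape == "floppy":
        return 17.7   # average weight of the floppy-eared training examples
    if face_shape == "round":
        return 8.35   # average weight of the pointy / round training examples
    return 9.2        # hypothetical average for the pointy / not-round leaf

test_examples = [("pointy", "round"),      # 1
                 ("floppy", "round"),      # 2
                 ("floppy", "round"),      # 3
                 ("pointy", "not round"),  # 4
                 ("pointy", "round")]      # 5

for i, (ear, face) in enumerate(test_examples, start=1):
    print(i, predict_weight(ear, face))
# prints: 1 8.35, 2 17.7, 3 17.7, 4 9.2, 5 8.35
```

Test examples with identical feature values always land in the same leaf, so they always receive the identical prediction; that is exactly the behavior being asked about here.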

But in the real world, this cannot be the case. Each individual cat or dog will not have almost exactly the same weight, right?