Decision Tree - Recursive Splitting

When creating a post, please add:

  • Week # must be added in the tags option of the post.

  • Link to the classroom item you are referring to:
    https://www.coursera.org/learn/advanced-learning-algorithms/lecture/a51O3/putting-it-together

  • Description:
    In the example, “Ear Shape” is taken as the root node of the tree, and on the left side of the tree, “Face Shape” is chosen as the next feature (the feature with the highest information gain).
    After that, we move to the right side of the tree, where the first feature is “Whiskers”.
    Question: Why don’t we pick “Face Shape” again instead of “Whiskers”?

                Thanks, Kind regards 
    

Hi @Javier_Porras ,

Question: Why don’t we pick “Face Shape” again instead of “Whiskers”?
I’m not sure I follow your question; could you clarify?

That said, based on how the splitting works: if we take the same feature again for splitting, the information gain will be close to 0 (for a binary feature it is exactly 0, since every example in that branch already has the same value for it), which is not useful. On the other hand, if we choose a different feature, the information gain should be higher.
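To make that concrete, here is a minimal sketch (with hypothetical toy data and feature names, not the course's actual dataset) that computes information gain with entropy. Inside the branch produced by splitting on “ear_shape”, splitting on “ear_shape” again yields a gain of 0, while splitting on another feature can still give positive gain:

```python
import math

def entropy(labels):
    """Entropy of a list of binary labels (1 = cat, 0 = not cat)."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def information_gain(rows, labels, feature):
    """Gain from splitting `rows` on a binary feature: parent entropy
    minus the weighted entropy of the two child branches."""
    left = [y for x, y in zip(rows, labels) if x[feature] == 1]
    right = [y for x, y in zip(rows, labels) if x[feature] == 0]
    w_left = len(left) / len(labels)
    w_right = len(right) / len(labels)
    return entropy(labels) - (w_left * entropy(left) + w_right * entropy(right))

# Hypothetical toy data: ear_shape (1 = pointy), face_shape (1 = round)
rows = [
    {"ear_shape": 1, "face_shape": 1},
    {"ear_shape": 1, "face_shape": 0},
    {"ear_shape": 0, "face_shape": 1},
    {"ear_shape": 0, "face_shape": 0},
]
labels = [1, 0, 1, 0]

# Take the branch after the root split on ear_shape (ear_shape == 1 only)
branch_rows = [r for r in rows if r["ear_shape"] == 1]
branch_labels = [y for r, y in zip(rows, labels) if r["ear_shape"] == 1]

# Re-splitting on ear_shape: every example already has the same value,
# so nothing separates and the gain is exactly 0.
print(information_gain(branch_rows, branch_labels, "ear_shape"))  # 0.0

# Splitting on a different feature can still separate the labels.
print(information_gain(branch_rows, branch_labels, "face_shape"))  # 1.0
```

This is why the recursive splitting in each branch re-evaluates all features: a feature already used on the path to the node contributes nothing further, so another feature wins on information gain.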