How does a decision tree create splits from continuous features?

To split on a continuous feature, the algorithm first sorts the observations by that feature's values and computes the midpoint between each pair of adjacent distinct values; each midpoint is a candidate threshold. For every candidate, it partitions the data into two groups (values at or below the threshold versus values above it) and evaluates the chosen impurity measure (entropy, Gini index, etc.) as a weighted average over the two child nodes. It then keeps the threshold that yields the lowest weighted impurity, or equivalently the largest impurity reduction, among all candidate splits for that feature. This thresholding is itself a form of discretization, and explicitly binning continuous attributes in the same spirit is a useful feature-engineering technique that can sometimes improve model performance.
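Below is a minimal NumPy sketch of this procedure for a single feature and binary labels, using Gini impurity; the function names (`gini`, `best_threshold`) and the toy data are illustrative, not from any particular library.

```python
import numpy as np

def gini(labels):
    """Gini impurity of a set of class labels."""
    if len(labels) == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_threshold(feature, labels):
    """Find the threshold on one continuous feature that minimizes
    the weighted Gini impurity of the two child nodes."""
    order = np.argsort(feature)
    x, y = feature[order], labels[order]

    # Candidate thresholds: midpoints between adjacent distinct values.
    distinct = np.unique(x)
    candidates = (distinct[:-1] + distinct[1:]) / 2.0

    best_t, best_impurity = None, np.inf
    n = len(y)
    for t in candidates:
        left, right = y[x <= t], y[x > t]
        weighted = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
        if weighted < best_impurity:
            best_t, best_impurity = t, weighted
    return best_t, best_impurity

# Toy example: the classes separate cleanly around 2.5.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0, 0, 1, 1, 1])
print(best_threshold(x, y))  # -> (2.5, 0.0)
```

Note that this sketch re-scans the data for every candidate; production implementations such as scikit-learn's CART instead sweep the sorted values once, updating class counts incrementally, so each candidate is evaluated in constant time after the initial sort.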
