What is the difference between Decision Trees, Bagging and Random Forest?

A decision tree is the building block of most bagging and boosting algorithms; it is grown by repeatedly choosing the split that maximizes information gain (equivalently, minimizes impurity). Bagging (bootstrap aggregating) builds an ensemble of decision trees by training each tree on a bootstrap sample of the original data, then aggregating the trees' outputs into a single prediction: majority vote for classification, averaging for regression. Random Forest is a specific bagging method that adds one extra source of randomness: at each split, only a random subset of the features is considered. This decorrelates the trees and typically reduces the variance of the aggregated prediction beyond what plain bagging achieves.
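The distinction can be seen directly in code. The following is a minimal sketch using scikit-learn (assumed installed); the dataset is synthetic and the hyperparameters are illustrative, not a recommendation:

```python
# Contrast a single decision tree, plain bagging, and a random forest
# on the same synthetic classification task.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One tree: each split is chosen to maximize information gain
# (Gini impurity by default in scikit-learn).
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Bagging: 100 trees (BaggingClassifier defaults to decision trees as the
# base estimator), each trained on a bootstrap sample of the training data;
# every feature is considered at every split.
bagging = BaggingClassifier(n_estimators=100, random_state=0)
bagging.fit(X_train, y_train)

# Random Forest: bootstrap samples PLUS a random subset of features
# (sqrt(n_features) by default) at each split, which decorrelates the trees.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)

for name, model in [("tree", tree), ("bagging", bagging), ("forest", forest)]:
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```

On most datasets the two ensembles outperform the single tree, and the forest's feature subsampling usually gives it an edge over plain bagging when features are correlated.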
