Unlocking Random Forest in Machine Learning
 With a Python sklearn example

Lianne & Justin


In this tutorial, we’ll explain the random forest algorithm in machine learning.

Random forests are powerful, popular, and easy to use algorithms for predictive modeling. As the name suggests, the model is an ensemble of many decision trees, with better performance than an individual tree alone. The algorithm can be used for both supervised classification and regression problems.

Following this tutorial, you’ll learn:

  • What are ensembling and bagging?
  • What is a random forest in machine learning?
  • How to apply random forests using Python sklearn?
  • And more!

To make your decision tree more powerful, let’s explore the forest!



Ensembling Decision Trees into a Random Forest

Random Forest is an ensemble of de-correlated decision trees based on bagging. It produces more accurate predictions than single decision trees.

  • What is an ensemble in machine learning?
  • What is bagging?
  • Why is it called the “random” forest?

Before learning details about random forests, let’s look at these basic questions, one-by-one.

What is Ensemble Learning?

Ensembling is a technique to build a predictive model by combining the strengths of multiple simpler base models. The goal of the method is to produce better predictions than any of the individual models alone.

So the ensembling process involves developing a group of base models from the training data and then combining them in a certain way. We can either:

  • build the base models independently and then average their results to get the final prediction.
    In this case, the ensembled predictor often performs better because of reduced variance.
    or,
  • build the base models sequentially, so that the combined model evolves step by step.
    In this case, the ensembled predictor usually performs better because of reduced bias.

We won’t go into detail on all the possible methodologies. But random forests use the averaging ensembling method mentioned above, or, more specifically, the Bootstrap AGGregatING (bagging) method. And it’s often the first ensembling algorithm machine learning beginners master.

Next, let’s dig more into bagging.

What is Bagging?

Bagging consists of repeatedly taking bootstrap samples (random sampling with replacement) of the training dataset and using them to fit machine learning models.

The aggregated prediction from bagging is the average (for regression) or the majority vote (for classification) of the predictions of all the trained models.

Let’s see an example.

For a training dataset D with 6 data points [0, 1, 2, 3, 4, 5], we can draw three random samples with replacement of size 6:

  • [0, 1, 2, 4, 4, 5]
  • [1, 2, 3, 3, 5, 5]
  • [0, 1, 3, 5, 5, 5]

Then we can fit three models using these random samples.

For a new observation, we use all three models to make predictions. If this is:

  • a regression problem and the three models give predictions of 5.2, 7.7, and 6.9.
    The final prediction of the bagging method would be their average, i.e., (5.2 + 7.7 + 6.9)/3 = 6.6.
  • a classification problem with target 1 or 0 and two out of the three models predict 1.
    We would conclude the prediction is 1 based on bagging.
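
To make this concrete, here is a tiny sketch (not from the original tutorial) that draws a bootstrap sample with numpy and aggregates the three hypothetical predictions from the example above; the seed value is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed for reproducibility
D = np.array([0, 1, 2, 3, 4, 5])

# Bootstrap sample: draw with replacement, same size as the original dataset.
bootstrap_sample = rng.choice(D, size=len(D), replace=True)
print(bootstrap_sample)  # e.g. something like [0 1 2 4 4 5]

# Regression: average the three models' predictions.
regression_preds = np.array([5.2, 7.7, 6.9])
print(regression_preds.mean())  # (5.2 + 7.7 + 6.9) / 3 = 6.6

# Classification: take the majority vote of the three models' predicted classes.
classification_preds = np.array([1, 0, 1])
print(np.bincount(classification_preds).argmax())  # 1
```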

Bagging improves predictions, and it works especially well when the base models have low bias and high variance. In this situation, the average of the predictions is still low-biased, while the variance of the errors from the aggregated model is smaller than that of each individual model.

This is exactly the case for the random forest.

Decision trees, when grown to sufficient depth, tend to have low bias but high variance (i.e., they overfit). The random forest, as a bagging ensemble of trees, therefore keeps the low bias while reducing the variance compared to each individual tree.

Related Reading: Decision Tree Model in Machine Learning: Practical Tutorial with Python.

Two Sources of Randomness

So far, we’ve seen the first main source of randomness in the forest, which comes from the bagging method (random sampling).

To further lower the variance of the random forest, extra “randomness” is introduced as well. Instead of considering the whole set of features when fitting the decision trees, the random forest algorithm randomly selects a subset of features as candidates at each split.

This randomness helps make the trees less correlated and more diverse, so that they can cancel out each other’s errors. So the final aggregated model can predict better with lower variance.

Further Reading: If you are interested in learning more about how the variance is calculated, check out formula (15.1) in The Elements of Statistical Learning.

That’s a lot of explanation!

Now we are ready to piece this information together.

What is Random Forest in Machine Learning?

To summarize, we can build random forests based on the general procedures below.

Step #1: From the training dataset of N observations and M features, draw a random sample with replacement of size n (n <= N).

Step #2: Grow a decision tree using the sample by:

  • randomly selecting m (m <= M) of the M features as candidates for splitting at each node.
  • picking the best split based on these m features.

Repeat the splitting until the stopping conditions (if there are any) are met.

Step #3: Repeat the previous two steps and grow many different decision trees to form a forest.

Now we have a random forest, which is an ensemble of trees!

Step #4: To predict a new observation, we use:

  • the average of the predictions from all the trees for regression problems.
  • the majority vote of the predicted classes from the trees for classification problems.
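
The four steps translate almost directly into code. The sketch below is an illustrative from-scratch version for a binary classification problem, built on sklearn’s DecisionTreeClassifier; the function names and default values (n_trees, max_features="sqrt", and so on) are assumptions for the sketch, and in practice you would simply use sklearn’s RandomForestClassifier, as shown in the next section.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_forest(X, y, n_trees=100, max_features="sqrt", random_state=0):
    """Steps #1-#3: bootstrap the data and grow one tree per sample.
    Assumes X and y are numpy arrays and y contains 0/1 labels."""
    rng = np.random.default_rng(random_state)
    n = len(X)
    trees = []
    for _ in range(n_trees):
        idx = rng.integers(0, n, size=n)  # Step #1: sample n rows with replacement
        tree = DecisionTreeClassifier(max_features=max_features)  # Step #2: m random features per split
        tree.fit(X[idx], y[idx])
        trees.append(tree)  # Step #3: collect the trees into a forest
    return trees

def predict_forest(trees, X_new):
    """Step #4: majority vote across the trees (classification)."""
    votes = np.stack([tree.predict(X_new) for tree in trees])  # shape: (n_trees, n_samples)
    return (votes.mean(axis=0) >= 0.5).astype(int)
```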

With the theoretical understanding, it’s time to apply the random forest with Python.

Python Example with sklearn

Build a Random Forest

In this last section, we’ll fit both a decision tree and a random forest using Python scikit-learn (sklearn), and compare their results.

First, we import the necessary libraries for model building, dataset, and model evaluation.
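
A minimal set of imports for this walkthrough could look like the following (the exact list in the original post may differ slightly):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score, roc_curve
import matplotlib.pyplot as plt
import pandas as pd
```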

We’ll use the breast cancer dataset with a binary target (benign or malignant).

Next, we split the dataset into training and test datasets.
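
For example, holding out 30% of the data as a test set (the split ratio and random_state below are arbitrary choices, not necessarily the original ones):

```python
# Load the breast cancer dataset; the target is binary (malignant vs. benign).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Hold out a test set for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)
```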

We can fit a decision tree using the training dataset and calculate its confusion matrix using the test dataset.
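
For instance, with default hyperparameters (a sketch; the original tree settings are not shown here):

```python
# Fit a single decision tree on the training data.
tree = DecisionTreeClassifier(random_state=42)
tree.fit(X_train, y_train)

# Confusion matrix on the test set.
tree_pred = tree.predict(X_test)
print(confusion_matrix(y_test, tree_pred))
```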

[Confusion matrix of the decision tree on the test set]

We can also fit a random forest and print out its confusion matrix in the same way. In this example, we set the forest to contain 500 trees, but you may tune this hyperparameter to find its optimal value.
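
In sklearn, the number of trees is the n_estimators hyperparameter; the other settings below are left at their defaults:

```python
# Fit a random forest with 500 trees.
rf = RandomForestClassifier(n_estimators=500, random_state=42)
rf.fit(X_train, y_train)

# Confusion matrix on the test set.
rf_pred = rf.predict(X_test)
print(confusion_matrix(y_test, rf_pred))
```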

[Confusion matrix of the random forest on the test set]

As you can see, the random forest predicts better than the decision tree in terms of both true positives and true negatives.

Further Reading: If you are not familiar with these evaluation metrics, read 8 popular Evaluation Metrics for Machine Learning Models.

Let’s also look at AUC and the ROC curve.
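
One way to compare the two models is to score the predicted probabilities of the positive class and plot both ROC curves (a sketch; your exact numbers will depend on the split and random_state chosen above):

```python
# Predicted probabilities of the positive class for each model.
tree_proba = tree.predict_proba(X_test)[:, 1]
rf_proba = rf.predict_proba(X_test)[:, 1]

print("Decision tree AUC:", roc_auc_score(y_test, tree_proba))
print("Random forest AUC:", roc_auc_score(y_test, rf_proba))

# Plot the ROC curves for a visual comparison.
for name, proba in [("Decision tree", tree_proba), ("Random forest", rf_proba)]:
    fpr, tpr, _ = roc_curve(y_test, proba)
    plt.plot(fpr, tpr, label=name)
plt.plot([0, 1], [0, 1], linestyle="--", color="grey")  # chance level
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```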

We can see the improvement of the random forest over the decision tree here as well.

[ROC curves and AUC: random forest vs. decision tree]

That said, we can easily visualize a single decision tree and interpret its results, but we can’t do the same for a random forest due to its increased complexity.

Feature Importance/Selection

Besides predicting, the random forest is also useful to rank the importance of features.

sklearn provides an impurity-based feature importance calculation for random forests. A feature’s importance is the (normalized) total reduction of the impurity criterion brought by that feature. The higher the value, the more important the feature.
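
For the forest fitted above, these values are stored in the feature_importances_ attribute. A minimal sketch to rank and plot them (assuming the DataFrame features loaded earlier) is:

```python
# Rank the impurity-based importances of the random forest's features.
importances = pd.Series(rf.feature_importances_, index=X_train.columns)
importances.sort_values().plot.barh(figsize=(8, 10))
plt.xlabel("Impurity-based feature importance")
plt.show()
```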

From the chart below, we can see the features ranked by importance from highest to lowest.

One drawback of this method is that high-cardinality categorical features (those with many unique categories) can produce misleading results, because random forests (and decision trees) are biased in favor of variables with more levels.

[Bar chart: random forest feature importances, ranked from highest to lowest]

Another way of calculating feature importance is permutation_importance, which avoids the high-cardinality problem. But it requires adjustments when the features are highly correlated, which is the case here, so we are not covering it in this tutorial.

That’s it!


To summarize, you’ve learned what a random forest is and how to use it for machine learning modeling.

Try to apply it to your next data science project!

Leave a comment for any questions you may have or anything else.
