
Random Forest Classifier using Scikit-learn

Last Updated : 11 Mar, 2025

Random Forest is a method that combines the predictions of multiple decision trees to produce a more accurate and stable result. It can be used for both classification and regression tasks.

In classification tasks, a Random Forest Classifier predicts categorical outcomes from the input data. It uses multiple decision trees and outputs the label that receives the maximum votes among the individual tree predictions. In this article we will learn more about how it works and how to implement it.

Working of Random Forest Classifier

Random Forest Classification works by creating multiple decision trees, each trained on a random subset of the data. The process begins with Bootstrap Sampling, where random rows of data are selected with replacement to form a different training dataset for each tree.

Next comes Feature Sampling, where only a random subset of features is considered when building each tree, ensuring diversity across the models.

During the training phase each tree is built by recursively partitioning its data based on the features. At each split the algorithm selects the best feature from the random subset, optimizing for information gain or Gini impurity. The process continues until a predefined stopping criterion is met, such as reaching a maximum depth or having a minimum number of samples in each leaf node. After the trees are trained, each tree makes a prediction, and the final prediction for classification tasks is determined by majority voting, as the short sketch after the figure below illustrates.

[Figure: Random Forest Classifier]
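
To make the mechanics concrete, here is a minimal, hand-rolled sketch of bootstrap sampling and majority voting built from plain decision trees. It is illustrative only (the tree count and variable names are our own choices); scikit-learn's RandomForestClassifier performs bootstrap sampling, per-split feature sampling and voting internally and is what you would use in practice.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(42)

# Bootstrap Sampling: each tree is trained on rows drawn with replacement
trees = []
for _ in range(10):
    idx = rng.integers(0, len(X), size=len(X))
    tree = DecisionTreeClassifier(max_features='sqrt', random_state=0)
    tree.fit(X[idx], y[idx])
    trees.append(tree)

# Majority voting: the most common label across trees wins
all_preds = np.stack([t.predict(X) for t in trees])  # shape (n_trees, n_samples)
majority = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, all_preds)
print('Ensemble training accuracy:', (majority == y).mean())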

Benefits of Random Forest Classification:

  • Random Forest can handle large datasets and high-dimensional data.
  • By combining predictions from many decision trees it reduces the risk of overfitting compared to a single decision tree.
  • It is robust to noisy data and works well with categorical data.

Implementing Random Forest Classification in Python

Before implementing a Random Forest classifier in Python, let's first understand its parameters (a constructor sketch follows the list).

  • n_estimators: Number of trees in the forest.
  • max_depth: Maximum depth of each tree.
  • max_features: Number of features considered for splitting at each node.
  • criterion: Function used to measure split quality (‘gini’ or ‘entropy’).
  • min_samples_split: Minimum samples required to split a node.
  • min_samples_leaf: Minimum samples required to be at a leaf node.
  • bootstrap: Whether to use bootstrap sampling when building trees (True or False).
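
As a quick illustration, the parameters above map directly onto the constructor. The values below are arbitrary placeholders, not tuned recommendations:

from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier(
    n_estimators=100,      # number of trees in the forest
    max_depth=5,           # maximum depth of each tree
    max_features='sqrt',   # features considered at each split
    criterion='gini',      # split-quality measure ('gini' or 'entropy')
    min_samples_split=2,   # minimum samples required to split a node
    min_samples_leaf=1,    # minimum samples required at a leaf node
    bootstrap=True         # use bootstrap sampling when building trees
)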

Now that we know its parameters, we can start building the model in Python.

1. Import Required Libraries

We will import pandas, Matplotlib, Seaborn and scikit-learn to build and evaluate the model.

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris

2. Import Dataset

For this we’ll use the Iris dataset, which is available within scikit-learn. It contains measurements for three species of Iris flowers across four features (sepal length, sepal width, petal length and petal width).

iris = load_iris()
df = pd.DataFrame(data=iris.data, columns=iris.feature_names)
df['target'] = iris.target

df

Output:

[Table: the Iris dataframe with the four feature columns and the target column]

3. Data Preparation

Here we will separate the features (X) and the target variable (y).

X = df.iloc[:, :-1].values
y = df.iloc[:, -1].values

4. Splitting the Dataset

We’ll split the dataset into training and testing sets so we can train the model on one part and evaluate it on another.

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

5. Feature Scaling

Feature scaling ensures that all the features are on a similar scale, which is important for some machine learning models. Random Forest is not highly sensitive to feature scaling because its splits are based on thresholds rather than distances, but it is good practice to scale when combining it with other models.

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

6. Building Random Forest Classifier

We will create the Random Forest Classifier model, train it on the training data and make predictions on the test data.

classifier = RandomForestClassifier(n_estimators=100, random_state=42)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)

7. Evaluation of the Model

We will evaluate the model using the accuracy score and confusion matrix.

accuracy = accuracy_score(y_test, y_pred)
print(f'Accuracy: {accuracy * 100:.2f}%')

conf_matrix = confusion_matrix(y_test, y_pred)

plt.figure(figsize=(8, 6))
sns.heatmap(conf_matrix, annot=True, fmt='g', cmap='Blues', cbar=False, 
            xticklabels=iris.target_names, yticklabels=iris.target_names)

plt.title('Confusion Matrix Heatmap')
plt.xlabel('Predicted Labels')
plt.ylabel('True Labels')
plt.show()

Output:

Accuracy: 100.00%

[Figure: Confusion Matrix heatmap]

The perfect accuracy, along with the confusion matrix, shows that the Random Forest Classifier has learned to classify all the test instances correctly. However, it is essential to note that the Iris dataset used here is relatively simple and well-known in the machine learning community, so such flawless results should not be expected on harder, real-world data.

8. Feature Importance

Random Forest Classifiers also provide insight into which features were the most important in making predictions. We can plot the feature importance.

feature_importances = classifier.feature_importances_

plt.barh(iris.feature_names, feature_importances)
plt.xlabel('Feature Importance')
plt.title('Feature Importance in Random Forest Classifier')
plt.show()

Output:

[Figure: Feature importance bar chart]

From the graph we can see that petal width (cm) is the most important feature followed closely by petal length (cm). The sepal width (cm) and sepal length (cm) have lower importance in determining the model’s predictions. This indicates that the classifier relies more on the petal measurements to make predictions about the flower species.

Random Forest Classifiers are useful for classification tasks offering high accuracy and robustness. They are easy to use, provide insights into feature importance and can handle complex datasets.

Random Forest can also be used for regression problems; see Random Forest Regression in Python.

Frequently Asked Questions (FAQs)

What is the random forest classifier?

Random Forest Classifier is an ensemble learning method using multiple decision trees for classification tasks, improving accuracy. It excels in handling complex data, mitigating overfitting, and providing robust predictions with feature importance.

Can random forest be used for regression?

Random Forest can be used for both regression and classification tasks, making it a versatile machine learning algorithm.
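
As a minimal sketch (using scikit-learn's built-in diabetes dataset purely for illustration), the regression variant averages the trees' predictions instead of voting:

from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Same ensemble idea, but each tree outputs a number and the forest averages them
reg = RandomForestRegressor(n_estimators=100, random_state=42)
reg.fit(X_train, y_train)
print('R^2 on the test split:', reg.score(X_test, y_test))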

What is the principle of random forest?

Random Forest builds multiple decision trees using random subsets of the dataset and combines their outputs for improved accuracy.


