

Random Forest: An Easy Explanation of the Forest

With the advent of the 21st century, human civilization has seen exponential growth in the computational resources at its disposal. These resources allow extremely computation-intensive tasks to be performed with relative ease. One of the algorithms that benefits from such high-end computational resources is “Random Forest”, which is discussed in this blog.

Random forest is an evolved version of decision trees and is used to perform classification as well as regression. It is a supervised learning algorithm that employs a specialized technique known as ensemble learning.


Defining Ensemble learning:

In the domain of supervised learning, ensemble learning is a method of combining multiple learning algorithms to obtain better results than any single algorithm could achieve alone. Instead of training a single algorithm, several models are trained, and a poll-based design is used to conclude the final output.

As is well known, a supervised learning algorithm performs its task by searching through its hypothesis space to find a hypothesis that suits the input. An ensemble model creates and evaluates multiple hypotheses, thus theoretically producing a model that is better suited to a problem than any single-hypothesis solution. This holds especially for cases where it is not possible to construct one good hypothesis for the problem under consideration.

Ensemble models also use a greater amount of computational resources than a single model: compensating for the weaknesses of individual hypotheses requires training many models, which considerably increases the resources needed to produce the final output.
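The poll-based design described above can be sketched in a few lines of plain Python. This is an illustrative toy only: the three "models" below are hard-coded threshold rules, not trained learners.

```python
from collections import Counter

def majority_vote(predictions):
    """Return the class label predicted by the most models."""
    return Counter(predictions).most_common(1)[0][0]

# Three hypothetical "models", each a simple rule mapping a value to a class.
model_a = lambda x: 1 if x > 0.5 else 0
model_b = lambda x: 1 if x > 0.4 else 0
model_c = lambda x: 1 if x > 0.7 else 0   # this one disagrees on x = 0.6

# The ensemble polls all three models and keeps the majority answer.
ensemble = lambda x: majority_vote([model_a(x), model_b(x), model_c(x)])

print(ensemble(0.6))  # two of the three models vote for class 1, so the poll returns 1
```

Even though `model_c` gets `x = 0.6` wrong, the poll overrules it, which is exactly the behaviour the ensemble relies on.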

Revisiting Decision Trees:


Decision trees are the building blocks of a random forest. A decision tree is a tree-like graph in which each node corresponds to a decision being taken. This type of design is extremely helpful when the internals of an algorithm consist only of control statements.

In other words, a decision tree is a flowchart-like design whose internal nodes represent conditions that decide the next step taken by the machine. Each branch coming out of a node defines a particular path the machine can take until it reaches a leaf node, which holds the final result of the query. The path from the root to a leaf represents the rules applied.

The tree-based design is considered one of the best for any working model, as it can be visualized with ease; it is mostly used in supervised learning. Tree-based models provide high accuracy and ease of interpretation to a predictive design. Compared to linear models, they map non-linear relationships in the data very well, and they can be used to solve classification as well as regression problems with ease.
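The flowchart idea can be made concrete by writing a toy decision tree as nested control statements. The feature names and thresholds here are invented purely for illustration (they loosely resemble the classic iris flower measurements), not learned from data:

```python
def toy_decision_tree(petal_length, petal_width):
    """A hand-written decision tree: each `if` is an internal node,
    each returned string is a leaf holding the final class."""
    if petal_length < 2.5:            # root node condition
        return "setosa"               # leaf reached via the left branch
    else:
        if petal_width < 1.8:         # internal node on the right branch
            return "versicolor"       # leaf
        else:
            return "virginica"        # leaf

print(toy_decision_tree(1.4, 0.2))  # the left branch leads straight to a leaf
```

Reading off the path from the root to the returned leaf gives exactly the rule chain described above.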

Further details can be found at the Janbask decision tree blog.

The random forest classifier:

Just as a forest comprises a number of trees, a random forest comprises a number of decision trees, each addressing the same classification or regression problem. Because it comprises several decision trees, a random forest is an ensemble of models. Every entity of the forest, i.e. each decision tree, spits out its own class prediction, and these predictions are then put to a vote. The class with the most votes becomes the final output of the random forest.

Fig. 1 Visualization of a Random Forest Model Making a Prediction

The underlying concept that makes random forests so powerful is quite simple: it is known as the wisdom of the crowd. In the technical language of data science, the reason is stated as:


“A large number of models working on uncorrelated hypotheses to solve the same problem will outperform any of the individual models under consideration.”

The key to the random forest is a low correlation between the hypotheses being solved. It is the same principle as the low correlation between the yields of stocks and bonds, which is used to build a portfolio that is greater than the sum of its parts. Similarly, uncorrelated models produce ensemble predictions that are more precise than any individual prediction. This happens because a few trees may produce errors, but once the outputs of all the trees are put to a vote, those errors are negated by the output of the majority of the trees. Thus, the following are the prerequisites for a random forest to work:

a. There should be some signal in the features, so that models built using these features outperform random guessing. 
b. The correlation between the predictions (and errors) of the individual trees should be as low as possible.
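The error-cancelling effect of voting can be quantified under the idealised assumption that the trees err independently. If each tree is right with probability p, the probability that a majority of n trees is right is a binomial tail sum, which the short sketch below computes:

```python
from math import comb

def majority_accuracy(n_trees, p_correct):
    """Probability that more than half of n_trees independent,
    equally accurate trees vote for the correct class."""
    majority = n_trees // 2 + 1
    return sum(comb(n_trees, k) * p_correct**k * (1 - p_correct)**(n_trees - k)
               for k in range(majority, n_trees + 1))

print(round(majority_accuracy(1, 0.6), 3))    # a single 60%-accurate tree: 0.6
print(round(majority_accuracy(101, 0.6), 3))  # 101 uncorrelated trees: well above 0.95
```

This is why the low-correlation prerequisite matters: the calculation assumes independent errors, and the benefit shrinks as the trees' mistakes become correlated.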

Random forest in action:

Training a random forest is just like training a decision tree, except that there is more than one tree to train. For this example, a random dataset will be generated using the dataset utilities that ship with scikit-learn.

The first step is to import the required libraries in the working memory:

from sklearn.ensemble import RandomForestClassifier  # imports the random forest algorithm
from sklearn.datasets import make_classification     # imports the random dataset generator

Once the libraries are imported into the working memory, the next step is to make a dataset if none is available to the user. The following command should do the trick:

input, labels = make_classification(n_samples=1000, n_features=6,
                                    n_informative=3, n_redundant=0,
                                    random_state=0, shuffle=False)

The above command generates a feature space with six features, two classes, and a total of 1000 data points.

Now, let’s train the classifier:

model = RandomForestClassifier(max_depth=3, random_state=0)
model.fit(input, labels)

Let's check the importance of each feature in the forest generated:

print(model.feature_importances_)

Output: [0.02384207 0.95184049 0.00756492 0.00214856 0.00845867 0.00614528]

The array shows the importance of each feature under consideration.

Now, let’s query this model. Note that the query must supply the same six features the model was trained on:

print(model.predict([[0, 0, 0, 0, 0, 0]]))
Output: [1]
Thus, as per the voting pattern, the final output of the forest is class label 1.
Note: The output can vary from one example to another if the dataset generation parameters are changed.
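To see the ensemble advantage in practice, the same generated dataset can be split into train and test portions and a single decision tree compared against the forest. This is a sketch using the same dataset parameters as the example above; the split size and tree count are illustrative choices:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Regenerate the same synthetic dataset used above.
features, labels = make_classification(n_samples=1000, n_features=6,
                                       n_informative=3, n_redundant=0,
                                       random_state=0, shuffle=False)

# Hold out a quarter of the data for testing.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print("single tree accuracy:", tree.score(X_test, y_test))
print("random forest accuracy:", forest.score(X_test, y_test))
```

On held-out data the voting ensemble typically matches or beats the single tree, which is the overfitting-reduction benefit discussed later in this blog.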

Features of Random forest:

  • In terms of accuracy, this algorithm outperforms other single-hypothesis-based algorithms.
  • It is extremely efficient over large datasets.
  • Variable deletion is not required in a random forest-based model.
  • The model can report the importance of each feature used in training.
  • It produces an internal, unbiased estimate of the generalization error.
  • Random forest generates proximities between input vectors, giving an insightful view of the data.
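The internal error estimate mentioned above is exposed in scikit-learn as the out-of-bag (OOB) score: each tree is evaluated on the samples left out of its bootstrap sample, so no separate validation set is needed. A minimal sketch, reusing the generated dataset from the worked example:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

features, labels = make_classification(n_samples=1000, n_features=6,
                                       n_informative=3, n_redundant=0,
                                       random_state=0, shuffle=False)

# oob_score=True asks each tree to be scored on the samples it never saw
# during its own bootstrap draw, yielding a built-in generalization estimate.
forest = RandomForestClassifier(n_estimators=100, oob_score=True,
                                random_state=0).fit(features, labels)

print("out-of-bag accuracy estimate:", forest.oob_score_)
```

Because the OOB estimate comes for free during training, it is a convenient sanity check when data is too scarce to hold out a test set.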

Advantages and Disadvantages of a random forest:

The random forest has numerous advantages over single-model approaches:

  • The random forest can overcome the problem of overfitting by putting the results of its different trees to a vote.
  • Random forest outperforms a single decision tree when a large dataset is provided.
  • In comparison to decision trees, random forests have less variance.
  • Random forest is extremely flexible and provides outputs with high certainty.
  • No scaling of the dataset is required.
  • It provides high accuracy even when a few data points are missing.

Random forest does not have only advantages; it also suffers from a few drawbacks:

  • The random forest is very complex in nature.
  • Training a random forest is far more time-consuming than training a decision tree.
  • A high amount of computational resources is required to build a random forest-based model.
  • Since numerous decision trees are trained in a random forest, querying the forest is quite time-consuming.


Random forest is a promising ensemble technique that utilizes majority voting to generate a very powerful model. The random forest can be effectively utilized in places where the wisdom of the crowd plays a role, such as stock markets. In this blog, the random forest algorithm has been discussed as an improvement over decision trees, and a working example of a random forest has also been provided.

Please like and leave your comments in the comment section.

