Group 7 - Decision Tree Analysis


BUSINESS STATISTICS

MBAIBCC107

DECISION TREE ANALYSIS

GROUP 7
Ankur Mittal
Nishant Bhandari
Richardson Debbarama
Shivanshu Srivastava
Sholack Stanly
Shubham Jain
Shubham Saini
DECISION TREE IN DECISION ANALYSIS

A decision tree can be used as a model for a sequential decision problem under uncertainty.
A decision tree describes graphically the decisions to be made, the events that may occur,
and the outcomes associated with combinations of decisions and events. Probabilities are
assigned to the events, and values are determined for each outcome. A major goal of the
analysis is to determine the best decisions.
DECISION TREE STRUCTURE
Decision tree models include such concepts as nodes, branches, terminal values, strategy,
payoff distribution, certain equivalent, and the rollback method.
Nodes and Branches:
Decision trees have three kinds of nodes and two kinds of branches. A
decision node is a point where a choice must be made; it is shown as a square. The branches
extending from a decision node are decision branches, each branch representing one of the
possible alternatives or courses of action available at that point. The set of alternatives must
be mutually exclusive (if one is chosen, the others cannot be chosen) and collectively
exhaustive (all possible alternatives must be included in the set). For example, in a
contract-bidding problem there may be two major decisions: first, the company must decide
whether or not to prepare a proposal; second, if it prepares a proposal and is awarded the
contract, it must decide which of three approaches to try in order to satisfy the contract. An
event node is a point where
uncertainty is resolved (a point where the decision maker learns about the occurrence of an
event). An event node, sometimes called a "chance node," is shown as a circle. The event set
consists of the event branches extending from an event node, each branch representing one
of the possible events that may occur at that point. The set of events must be mutually
exclusive (if one occurs, the others cannot occur) and collectively exhaustive (all possible
events must be included in the set). Each event is assigned a subjective probability; the sum
of probabilities for the events in a set must equal one.
In general, decision nodes and branches represent the controllable factors in a decision
problem; event nodes and branches represent uncontrollable factors. Decision nodes and
event nodes are arranged in order of subjective chronology. For example, the position of an
event node corresponds to the time when the decision maker learns the outcome of the event
(not necessarily when the event occurs). The third kind of node is a terminal node,
representing the final result of a combination of decisions and events. Terminal nodes are the
endpoints of a decision tree, shown as the end of a branch on hand-drawn diagrams and as a
triangle or vertical bar on computer-generated diagrams.
For representing a sequential decision problem, the tree diagram is usually better than the
written description. In some decision problems, the choice may be obvious by looking at the
diagram. That is, the decision maker may know enough about the desirability of the outcomes
(endpoints in the tree) and how likely they are. But usually the next step in the analysis after
documenting the structure is to assign values to the endpoints.

DECISION TREE TERMINAL VALUES:

Each terminal node has an associated terminal value, sometimes called a payoff value,
outcome value, or endpoint value. Each
terminal value measures the result of a scenario: the sequence of decisions and events on a
unique path leading from the initial decision node to a specific terminal node. To determine
the terminal value, one approach assigns a cash flow value to each decision branch and event
branch and then sums the cash flow values on the branches leading to a terminal node to
determine the terminal value. Some problems require a more elaborate value model to
determine the terminal values.
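For example (with illustrative figures not taken from the text), if the path to a terminal node
passes through a decision branch with a cash flow of −₹100000 (a cost) and an event branch
with a cash flow of ₹250000 (a revenue), the terminal value at that node is
−₹100000 + ₹250000 = ₹150000.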

ROLLBACK METHOD:

If we have a method for determining certain equivalents (expected values for a risk-neutral
decision maker), we don't need to examine every possible
strategy explicitly. Instead, the method known as rollback determines the single best strategy.
The rollback algorithm, sometimes called backward induction or "average out and fold back,"
starts at the terminal nodes of the tree and works backward to the initial decision node,
determining the certain equivalent rollback values for each node. Rollback values are
determined as follows:
• At a terminal node, the rollback value equals the terminal value.
• At an event node, the rollback value for a risk-neutral decision maker is determined
using expected value (the probability-weighted average); each branch probability is
multiplied by its successor's rollback value, and the products are summed.
• At a decision node, the rollback value is set equal to the highest rollback value of the
immediate successor nodes. In TreePlan tree diagrams, the rollback values are located
to the left of and below each decision, event, and terminal node.
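The rollback rules above can be expressed compactly in code. The following is a minimal
Python sketch, not taken from the original material; the dictionary-based node layout, the
function name, and the example figures (which reuse the expansion alternative from the
example later in this document) are illustrative only.

def rollback(node):
    """Return the rollback value of a node for a risk-neutral decision maker."""
    kind = node["type"]
    if kind == "terminal":
        # At a terminal node, the rollback value equals the terminal value.
        return node["value"]
    if kind == "event":
        # At an event node, take the probability-weighted average (expected value).
        return sum(p * rollback(child) for p, child in node["branches"])
    if kind == "decision":
        # At a decision node, take the highest successor rollback value.
        return max(rollback(child) for _, child in node["branches"])
    raise ValueError("unknown node type: " + kind)

# Illustrative tree: decide whether to expand, with an uncertain market response.
tree = {
    "type": "decision",
    "branches": [
        ("expand", {
            "type": "event",
            "branches": [
                (0.4, {"type": "terminal", "value": 2500000}),
                (0.6, {"type": "terminal", "value": -800000}),
            ],
        }),
        ("do nothing", {"type": "terminal", "value": 0}),
    ],
}

print(rollback(tree))  # 520000.0, so "expand" is preferred to "do nothing" here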

Types of Decision Trees:
There are two main types of decision trees that are based on the target variable, i.e.,
categorical variable decision trees and continuous variable decision trees.
1. Categorical variable decision tree
A categorical variable decision tree has a categorical target variable, i.e., one that is divided
into categories such as yes or no. Every stage of the decision process falls into exactly one of
these categories; there are no in-betweens.
2. Continuous variable decision tree
A continuous variable decision tree is a decision tree with a continuous target variable. For
example, an individual's unknown income can be predicted from available information such
as their occupation, age, and other continuous variables.
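As a brief illustration (this sketch is not part of the original text), the scikit-learn library
provides both kinds of tree; the tiny data sets and feature encodings below are invented
purely for demonstration.

from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

# Categorical variable decision tree: predict a yes/no target.
X_cat = [[25, 1], [40, 0], [35, 1], [50, 0]]   # e.g. [age, owns_home]
y_cat = ["no", "yes", "yes", "no"]
clf = DecisionTreeClassifier(max_depth=2).fit(X_cat, y_cat)
print(clf.predict([[30, 1]]))                  # predicts "yes" or "no"

# Continuous variable decision tree: predict income from age and an occupation code.
X_num = [[25, 1], [40, 2], [35, 1], [50, 3]]
y_num = [30000, 55000, 42000, 70000]
reg = DecisionTreeRegressor(max_depth=2).fit(X_num, y_num)
print(reg.predict([[45, 2]]))                  # predicted income for a new individual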

How to Draw a Decision Tree:


• You start a decision tree with the decision that you need to make.
• Draw a small square to represent this decision towards the left of a large piece of paper.
• From this square, draw a line out towards the right for each possible solution, and write
that solution along the line.
• At the end of each line, consider the result. If the result of taking that decision is
uncertain, draw a small circle. If the result is another decision that you need to make, draw
another square. Write the decision or factor above the square or circle. If the line leads to a
final outcome, leave its end blank.
• Keep doing this until you have drawn out as many of the possible outcomes and
decisions as you can see leading on from the original decision (a small sketch of such a
diagram follows below).
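The same kind of diagram can also be produced programmatically. The sketch below is an
assumption, not part of the original text; it uses the Python graphviz package, and the node
names, labels, and figures are illustrative only.

from graphviz import Digraph

dot = Digraph(comment="Decision tree sketch")
dot.node("D1", "Expand or not?", shape="box")           # decision node (square)
dot.node("E1", "Market response", shape="circle")       # event node (circle)
dot.node("T1", "Profit +2500000", shape="plaintext")    # terminal outcomes
dot.node("T2", "Loss -800000", shape="plaintext")
dot.node("T3", "No change", shape="plaintext")

dot.edge("D1", "E1", label="expand")                    # decision branches
dot.edge("D1", "T3", label="do nothing")
dot.edge("E1", "T1", label="market share rises (0.4)")  # event branches
dot.edge("E1", "T2", label="competitors take over (0.6)")

dot.render("decision_tree", format="png", cleanup=True)  # writes decision_tree.png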
Applications of Decision Trees:
1. Assessing prospective growth opportunities
One application of decision trees is evaluating prospective growth opportunities for a
business based on historical data. For example, historical sales data can be used in decision
trees that may lead to radical changes in the strategy of the business to aid its expansion
and growth.
2. Using demographic data to find prospective clients
Another application of decision trees is the use of demographic data to find prospective
clients. They can help streamline a marketing budget and support informed decisions about
the target market on which the business is focused. Without decision trees, the business
may spend its marketing budget without a specific demographic in mind, which will affect its
overall revenues.
3. Serving as a support tool in several fields
Lenders also use decision trees to predict the probability of a customer defaulting on a loan
by building predictive models from the client’s past data. The use of a decision tree as a
support tool can help lenders evaluate a customer’s creditworthiness and prevent losses.

Decision trees can also be used in operations research in planning logistics and strategic
management. They can help in determining appropriate strategies that will help a company
achieve its intended goals. Other fields where decision trees can be applied include
engineering, education, law, business, healthcare, and finance.

Advantages of Decision Trees:


1. Easy to read and interpret
One of the advantages of decision trees is that their outputs are easy to read and interpret
without requiring statistical knowledge. For example, when decision trees are used to present
demographic information on customers, the marketing department staff can read and
interpret the graphical representation of the data directly.

The data can also generate important insights into the probabilities, costs, and alternatives
associated with the various strategies formulated by the marketing department.
2. Easy to prepare
Compared to other decision techniques, decision trees require less effort for data preparation.
However, users need to have ready information in order to create new variables with the
power to predict the target variable. Decision trees can also classify data without requiring
complex calculations. For complex situations, users can combine decision trees with other
methods.
3. Less data cleaning required
Another advantage of decision trees is that less data cleaning is required once the variables
have been created. Missing values and outliers have less influence on the decision tree’s
results.

Disadvantages of Decision Trees:


1. Unstable nature
One of the limitations of decision trees is that they are largely unstable compared to other
decision predictors. A small change in the data can result in a major change in the structure
of the decision tree, which can convey a different result from the one users would normally
obtain. The resulting instability can be managed by machine learning algorithms such as
boosting and bagging (a brief sketch follows this list).
2. Less effective in predicting the outcome of a continuous variable
In addition, decision trees are less effective when the main goal is to predict the outcome of
a continuous variable. This is because decision trees tend to lose information when they
categorize variables into multiple categories.
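To illustrate the boosting and bagging methods mentioned under point 1 above, here is a
short scikit-learn sketch (assumed, not taken from the original text); the synthetic data and
parameter values are purely illustrative.

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic data purely for demonstration.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Bagging: many trees trained on bootstrap samples, with predictions averaged.
bagged = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)
# Boosting: trees built sequentially, each one correcting its predecessors' errors.
boosted = GradientBoostingClassifier(n_estimators=50, random_state=0)

print(bagged.fit(X, y).score(X, y))   # training accuracy of the bagged ensemble
print(boosted.fit(X, y).score(X, y))  # training accuracy of the boosted ensemble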

Example - Decision Tree Analysis


ABC Ltd. is a company manufacturing skincare products. The business was found to be at
the maturity stage, demanding some change. After rigorous research, management came up
with a decision tree covering three alternatives: the company can expand its existing unit,
innovate a new product (a shower gel), or make no changes.

Given below is the evaluation of each of these alternatives:

Expansion of Business Unit:

If the company invests in the development of its business unit, there are two possibilities:

• a 40% chance that its market share will rise, increasing the overall profitability of
the company by ₹2500000;
• a 60% chance that competitors will take over the market share and the company will
incur a loss of ₹800000.

To find out the viability of this option, let us compute its EMV (Expected Monetary Value):
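EMV (expansion) = 0.40 × ₹2500000 + 0.60 × (−₹800000) = ₹1000000 − ₹480000 = ₹520000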

New Product Line of Shower Gel:

If the organization goes for new product development, there are the following two
possibilities:

• a 50% chance that the project will be successful and yield a profit of ₹1800000;
• a 50% chance of failure, leading to a loss of ₹800000.

To determine the profitability of this idea, let us evaluate its EMV:
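EMV (new product line) = 0.50 × ₹1800000 + 0.50 × (−₹800000) = ₹900000 − ₹400000 = ₹500000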

Do Nothing:

If the company takes no action, there can still be two outcomes, discussed below:
• a 40% chance that the organization can still attract new customers, generating a
profit of ₹1000000;
• a 60% chance of failure due to the new competitors, incurring a loss of ₹400000.

Given below is the EMV in such circumstances:
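EMV (do nothing) = 0.40 × ₹1000000 + 0.60 × (−₹400000) = ₹400000 − ₹240000 = ₹160000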

Interpretation:

From the above evaluation, we can see that the option of expanding the existing business
unit has the highest EMV (₹520000, against ₹500000 for the new product line and ₹160000
for doing nothing). Therefore, on these figures, the company should pursue the expansion to
make the best possible use of its resources.
