What is a Decision Tree and why is it important?

Should I buy a Netflix subscription or an Amazon Prime Video subscription? Should I eat chicken or veggies? Should I watch football on ESPN or WWE on Ten Sports? We make most of these choices quickly, and sometimes we take a little longer and go with whichever option our gut feeling favours. That is fine for everyday life, but you cannot settle business decisions on gut feeling alone, because one wrong choice can end a business in an instant. This is why managers need decision-making tools that help them make the right decision at the right time. One of the most prominent of these tools is the decision tree.

What is a Decision Tree?

According to medium.com, a decision tree is a tool that uses a tree-like diagram or model of decisions to map out possible outcomes, including chance-event outcomes, resource costs, and utility. It is one way to display an algorithm that consists only of conditional control statements.

A decision tree is a flowchart-like structure in which each internal node represents a “test” on an attribute (for example, whether a coin flip comes up heads or tails), each branch represents the outcome of the test, and each leaf node represents a class label, i.e., the decision taken after evaluating all attributes. The paths from the root to the leaves represent classification rules.

Terms used in the study of Decision Trees

Root Node

The root node represents the entire population or sample, and it gets divided further into two or more homogeneous sets.

Splitting

Splitting is the process of dividing a node into two or more sub-nodes.

Decision Node

When a sub-node splits into further sub-nodes, it is called a decision node.

Leaf Node

A node that does not split any further is called a leaf node (or terminal node).

Pruning

When we remove the sub-nodes of a decision node, the process is called pruning. You can think of it as the opposite of splitting.

Branch

A branch is a subsection of the whole tree.

Parent and Child Node

A node that is divided into sub-nodes is called a parent node, and its sub-nodes are called the child nodes of that parent.
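
To make these terms concrete, here is a minimal sketch, assuming the scikit-learn library and its built-in iris dataset (both are illustrative choices, not part of the original explanation). It fits a shallow tree and prints its structure, so the root node, the splits at decision nodes, and the leaf nodes can be seen directly.

```python
# Minimal sketch: fit a shallow decision tree and print its structure.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# The first test printed is the root node, indented tests are decision nodes,
# and lines ending in "class: ..." are leaf nodes.
print(export_text(tree, feature_names=list(data.feature_names)))
```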

Types of Decision Trees

Decision trees fall into two types depending on the target variable: categorical variable decision trees and continuous variable decision trees.

Categorical Variable Decision Trees

A categorical variable decision tree has a categorical target variable that is divided into categories, for instance, yes or no. The categories mean that every stage of the decision-making process falls into exactly one category, with no in-betweens.

Continuous Variable Decision Trees

A continuous variable decision tree has a continuous target variable. For instance, the salary of a person whose income is unknown can be predicted from available data, such as their occupation, age, and other continuous variables.
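
As a rough illustration of the two types, here is a sketch assuming scikit-learn and small made-up datasets: a categorical (yes/no) target calls for a classifier, while a continuous target such as salary calls for a regressor.

```python
# Sketch with made-up data: classifier for a categorical target, regressor for a continuous one.
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

# Categorical target: will a customer subscribe? (1 = yes, 0 = no)
X_cat = [[25, 30000], [40, 80000], [35, 50000], [50, 120000]]   # [age, income]
y_cat = [0, 1, 0, 1]
clf = DecisionTreeClassifier(max_depth=2).fit(X_cat, y_cat)
print(clf.predict([[45, 90000]]))   # predicted category (yes/no)

# Continuous target: predict salary from age and years of experience
X_num = [[25, 2], [40, 15], [35, 10], [50, 25]]
y_num = [35000, 80000, 60000, 110000]
reg = DecisionTreeRegressor(max_depth=2).fit(X_num, y_num)
print(reg.predict([[45, 20]]))      # predicted salary (a number)
```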

Explanation of the Decision Tree

Let’s take an example. You have two business ideas: one is a candy shop and the other is a lemonade stand. You can earn up to $100 with the candy shop and up to $90 with the lemonade stand. Which option would you choose? The answer is easy: the candy shop.

Let’s make it a little more complex. What if the candy shop has a 50% chance of success and a 50% chance of failure? If you succeed, you will earn $100, and if you fail, you will lose $30.

On the other hand, with the lemonade stand you also have a 50% chance of success and a 50% chance of failure. Here, if you succeed you will earn $90, but if you fail you will only lose $10.

The answer seems more difficult now, but actually it’s not. You just have to use a simple formula. In the case of the candy shop, you have a 50% chance of earning $100 and a 50% chance of losing $30. Add 50% of $100 to 50% of -$30 and you get $35. We call this the “expected value.”

Let’s apply the same formula to the lemonade stand. Add 50% of $90 to 50% of -$10. The expected value in this case is $40.

Clearly, the expected value of the lemonade stand is higher than that of the candy shop, and you should go with the higher expected value. But what does “expected value” actually mean? If you ran the lemonade stand many times under exactly the same conditions, your average earnings would tend toward $40 per run.
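
The same arithmetic can be written in a few lines of Python. This is only a sketch of the expected-value formula described above, using the probabilities and payoffs from the example.

```python
# Expected value = sum over all outcomes of (probability * payoff).
def expected_value(outcomes):
    """outcomes is a list of (probability, payoff) pairs."""
    return sum(probability * payoff for probability, payoff in outcomes)

candy_shop = [(0.5, 100), (0.5, -30)]
lemonade_stand = [(0.5, 90), (0.5, -10)]

print(expected_value(candy_shop))       # 35.0
print(expected_value(lemonade_stand))   # 40.0
```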

Applications of the Decision Tree

• Variable selection

The number of variables routinely recorded in clinical settings has grown dramatically with the rise of electronic data storage. Many of these variables are of marginal relevance and are therefore not included in data-mining exercises.

Much like stepwise variable selection in regression analysis, decision tree methods can be used to select the most relevant input variables for building decision tree models, which can in turn be used to formulate clinical hypotheses and inform subsequent research.

• Assessing the relative importance of variables

Once a set of relevant variables has been identified, analysts may want to know which of them play the biggest role. Variable importance is usually measured by the drop in model accuracy when the variable is left out. In general, the more records a variable affects, the greater its importance.
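
As a sketch of this idea (scikit-learn and its iris dataset are assumed purely for illustration), permutation importance measures how much accuracy drops when a variable’s values are shuffled, and the resulting scores can be used to rank the variables.

```python
# Sketch: measuring how much model accuracy drops when each variable is scrambled.
from sklearn.datasets import load_iris
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
tree = DecisionTreeClassifier(random_state=0).fit(data.data, data.target)

# Shuffling an important variable hurts accuracy more than shuffling an irrelevant one.
result = permutation_importance(tree, data.data, data.target, n_repeats=10, random_state=0)
for name, score in sorted(zip(data.feature_names, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.3f}")
```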

• Handling of missing values

A common, but incorrect, way of handling missing data is to exclude cases with missing values; this is both wasteful and risks introducing bias into the analysis.

Decision tree analysis can deal with missing data in two ways: it can either treat missing values as a separate category that is analysed alongside the other categories, or it can use a decision tree model that takes the variable with many missing values as the target variable, predicts those values, and replaces the missing entries with the predictions.
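
A rough sketch of the second approach, with a made-up salary column and scikit-learn assumed: train a tree on the rows where the value is known, then use its predictions to fill in the rows where it is missing.

```python
# Sketch: filling in a column with missing values by predicting it from the other columns.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Made-up data: columns are [age, years_experience, salary]; some salaries are missing (NaN).
data = np.array([
    [25.0,  2.0,  35000.0],
    [40.0, 15.0,   np.nan],
    [35.0, 10.0,  60000.0],
    [50.0, 25.0, 110000.0],
    [30.0,  5.0,   np.nan],
])

known = ~np.isnan(data[:, 2])                          # rows where salary is observed
model = DecisionTreeRegressor().fit(data[known, :2], data[known, 2])

data[~known, 2] = model.predict(data[~known, :2])      # replace NaNs with predictions
print(data)
```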

• Prediction

This is one of the most significant uses of decision tree models. Using a tree model built from historical data, it is straightforward to predict the outcome for future records.
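
As a final sketch (the dataset and the train/test split are illustrative assumptions), fit a tree on “historical” records and predict the outcome for held-out “future” records, then check how accurate those predictions are.

```python
# Sketch: train on historical records, predict outcomes for unseen "future" records.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_hist, X_future, y_hist, y_future = train_test_split(X, y, test_size=0.25, random_state=0)

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_hist, y_hist)
predictions = model.predict(X_future)
print(f"Accuracy on unseen records: {accuracy_score(y_future, predictions):.2f}")
```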

Conclusion

The decision tree has revolutionized the study of decision making since the 1960s. Although there are many decision-making tools, such as conjoint analysis, the decision matrix, and Pareto analysis, very few are as accurate as the decision tree model. Today, this tool plays a major role in modern machine learning and algorithms.

You can also read our blog on Paired Comparison Analysis – A Tool for Decision Making.
