Decision and Classification Trees, Clearly Explained!!!

Video Statistics and Information

Captions
(intro song) I like decision trees, how about you? StatQuest!

Hello, I'm Josh Starmer, and welcome to StatQuest. Today we're going to talk about decision and classification trees, and they're going to be clearly explained.

Here is a simple decision tree: if a person wants to learn about decision trees, then they should watch this StatQuest. In contrast, if a person does not want to learn about decision trees, then they should check out the latest Justin Bieber video instead. In general, a decision tree makes a statement and then makes a decision based on whether or not that statement is true or false. It's no big deal.

When a decision tree classifies things into categories, it's called a classification tree, and when a decision tree predicts numeric values, it's called a regression tree. In this case, we're using diet to predict a numeric value for mouse size. Note: for the remainder of this video we are going to focus on classification trees. However, if you want to learn more about regression trees, fear not, there's a whole StatQuest dedicated to them; the link is in the description below.

Now here's a more complicated classification tree. It combines numeric data with yes/no data, so it's okay to mix data types in the same tree. Also notice that the tree asks about exercising multiple times, and that the amount of time exercising isn't always the same, so numeric thresholds can be different for the same data. Lastly, the final classifications can be repeated. For the most part, classification trees are pretty easy to work with: you start at the top and work your way down until you get to a point where you can't go any further, and that's how you classify something.

Note: so far I've been labeling the arrows with "true" or "false," but usually it is just assumed that if a statement is true, you go to the left, and if a statement is false, you go to the right. So sometimes you see true and false labels, and sometimes you don't. It's no big deal.

Oh no, it's the dreaded Terminology Alert! The very top of the tree is called the root node, or just the root. These are called internal nodes, or branches; branches have arrows pointing to them and arrows pointing away from them. Lastly, these are called leaf nodes, or just leaves; leaves have arrows pointing to them, but no arrows pointing away from them. BAM!

Now that we know how to use and interpret classification trees, let's learn how to build one from raw data. This data tells us whether or not someone loves popcorn, whether or not they love soda, their age, and whether or not they love the 1991 blockbuster Cool as Ice, starring Vanilla Ice. We will use this data to build a classification tree that predicts whether or not someone loves Cool as Ice.

Now pretend you've never seen this tree before, and let's see how to build one starting with just data. The first thing we do is decide whether Loves Popcorn, Loves Soda, or Age should be the question we ask at the very top of the tree. To make that decision, we'll start by looking at how well Loves Popcorn predicts whether or not someone loves Cool as Ice. To do this, we'll make a super simple tree that only asks if someone loves popcorn, and then we'll run the data down the tree. For example, the first person in the dataset loves popcorn, so they go to the leaf on the left, and because they do not love Cool as Ice, we'll keep track of that by putting a 1 under the word "No." The second person in the dataset also loves popcorn, so they also go to the leaf on the left, and because they also do not love Cool as Ice, we increment "No" to 2. The third person does not love popcorn, so they go to the leaf on the right, and because they love Cool as Ice, we put a 1 under the word "Yes." Likewise, we run the remaining rows down the tree, keeping track of whether or not each one loves Cool as Ice. BAM!
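For concreteness, here is a minimal Python sketch of the counting step just described. The exact rows of the seven-person dataset are an assumption, reconstructed so that the leaf counts match the numbers quoted later in the transcript.

```python
from collections import Counter

# Each row: (loves_popcorn, loves_soda, age, loves_cool_as_ice).
# Rows are reconstructed to match the leaf counts quoted in the video.
data = [
    (True,  True,   7, False),
    (True,  False, 12, False),
    (False, True,  18, True),
    (False, True,  35, True),
    (True,  True,  38, True),
    (True,  False, 50, False),
    (False, False, 83, False),
]

def split_counts(rows, feature_index):
    """Send each row left (feature is True) or right (False), and tally
    how many people in each leaf do or do not love Cool as Ice."""
    left = Counter(row[3] for row in rows if row[feature_index])
    right = Counter(row[3] for row in rows if not row[feature_index])
    return left, right

print(split_counts(data, 0))  # Loves Popcorn: left 1 yes / 3 no, right 2 yes / 1 no
print(split_counts(data, 1))  # Loves Soda:   left 3 yes / 1 no, right 0 yes / 3 no
```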
Now let's do the exact same thing for Loves Soda. Looking at the two little trees, we see that neither one does a perfect job predicting who will and who will not love Cool as Ice. Specifically, these three leaves contain mixtures of people that do and do not love Cool as Ice.

Dread it! It's another Terminology Alert! Because these three leaves all contain a mixture of people who do and do not love Cool as Ice, they are called impure. In contrast, this leaf only contains people who do not love Cool as Ice. Because both leaves in the Loves Popcorn tree are impure, and only one leaf in the Loves Soda tree is impure, it seems like Loves Soda does a better job predicting who will and who will not love Cool as Ice. But it would be nice if we could quantify the difference between Loves Popcorn and Loves Soda. The good news is that there are several ways to quantify the impurity of the leaves. One of the most popular methods is called Gini impurity, but there are also fancy-sounding methods like entropy and information gain. Numerically, however, the methods are all quite similar, so we will focus on Gini impurity, since not only is it very popular, I think it is the most straightforward.

So let's start by calculating the Gini impurity for Loves Popcorn. To do that, we start by calculating the Gini impurity for the individual leaves. The Gini impurity for the leaf on the left is 1 minus the probability of "yes" squared, minus the probability of "no" squared. So we start out with 1. Then we subtract the squared probability of someone in this leaf loving Cool as Ice, which is 1, the number of people in the leaf who loved Cool as Ice, divided by the total number of people in the leaf, 4, with the whole term squared. Lastly, we subtract the squared probability of someone in this leaf not loving Cool as Ice, which is 3, the number of people in the leaf who did not love Cool as Ice, divided by the total number of people in the leaf, squared. When we do the math, we get 1 - (1/4)^2 - (3/4)^2 = 0.375. So let's put 0.375 under the leaf on the left so we don't forget it.

Now let's calculate the Gini impurity for the leaf on the right. Just like before, we start out with 1, then we subtract the squared probability of someone in this leaf loving Cool as Ice and the squared probability of someone in this leaf not loving Cool as Ice, and when we do the math, we get 0.444.

Now, because the leaf on the left has 4 people in it, and the leaf on the right only has 3, the leaves do not represent the same number of people. Thus, the total Gini impurity is the weighted average of the leaf impurities. We start by calculating the weight for the leaf on the left: the total number of people in that leaf, 4, divided by the total number of people in both leaves, 7. Then we multiply that weight by its associated Gini impurity, 0.375. Now we add the weighted impurity for the leaf on the right, which is the number of people in that leaf, 3, divided by the total number of people in both leaves, 7, times its associated Gini impurity, 0.444. When we do the math, we get (4/7) x 0.375 + (3/7) x 0.444 = 0.405. So the Gini impurity for Loves Popcorn is 0.405. Likewise, the Gini impurity for Loves Soda is 0.214. Now we need to calculate the Gini impurity for Age.
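Since the video walks through this arithmetic by hand, here is a small sketch of the same Gini computation in code. The function names are my own, but the formula and the numbers follow the transcript.

```python
def gini(yes, no):
    """Gini impurity of one leaf: 1 - P(yes)^2 - P(no)^2."""
    total = yes + no
    if total == 0:
        return 0.0  # an empty leaf has no impurity
    return 1 - (yes / total) ** 2 - (no / total) ** 2

def weighted_gini(left, right):
    """Total impurity of a split: each leaf's Gini, weighted by its share
    of the people. left and right are (yes, no) count pairs."""
    n_left, n_right = sum(left), sum(right)
    n = n_left + n_right
    return (n_left / n) * gini(*left) + (n_right / n) * gini(*right)

# Loves Popcorn: left leaf holds 1 yes / 3 no, right leaf 2 yes / 1 no
print(round(gini(1, 3), 3))                     # 0.375
print(round(gini(2, 1), 3))                     # 0.444
print(round(weighted_gini((1, 3), (2, 1)), 3))  # 0.405
# Loves Soda: left leaf 3 yes / 1 no, right leaf 0 yes / 3 no
print(round(weighted_gini((3, 1), (0, 3)), 3))  # 0.214
```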
Calculating the Gini impurity for Age is a little more involved, because Age contains numeric data and not just yes/no values. The first thing we do is sort the rows by Age, from lowest value to highest. Then we calculate the average age for all adjacent people. Lastly, we calculate the Gini impurity for each average age.

For example, to calculate the Gini impurity for the first value, we put "Age < 9.5" in the root, and because the only person with age less than 9.5 does not love Cool as Ice, we put a 0 under "Yes" and a 1 under "No." Then everyone with age greater than or equal to 9.5 goes to the leaf on the right. Now we calculate the Gini impurity for the leaf on the left and get 0, and this makes sense, because every single person in this leaf does not love Cool as Ice, so there is no impurity. Then we calculate the Gini impurity for the leaf on the right and get 0.5. Now we calculate the weighted average of the two impurities to get the total Gini impurity, and we get 0.429. Likewise, we calculate the Gini impurities for all of the other candidate values. Two candidate thresholds, 15 and 44, are tied for the lowest impurity, 0.343, so we can pick either one. In this case, we'll pick 15.

However, remember that we are comparing Gini impurity values for Age, Loves Popcorn, and Loves Soda to decide which feature should be at the very top of the tree. Earlier we calculated the Gini impurity values for Loves Popcorn and Loves Soda, and now we have the Gini impurity for Age. Because Loves Soda has the lowest Gini impurity overall, we know that its leaves had the lowest impurity, so we put Loves Soda at the top of the tree. BAM!

Now the four people that love soda go to a node on the left, and the people that do not love soda go to a node on the right. Let's focus on the node on the left. All four people that love soda are in this node; three of them love Cool as Ice and one does not, so this node is impure. So let's see if we can reduce the impurity by splitting the people that love soda based on Loves Popcorn or Age. We'll start by asking the four people that love soda if they also love popcorn. Because two of the four people that love soda also love popcorn, they end up in the leaf on the left. The remaining two people, who love soda but do not love popcorn, end up on the right, and the total Gini impurity for this split is 0.25. So let's put 0.25 here so we don't forget.

Now we test different age thresholds, just like before, only this time we only consider the ages of people who love soda, and "Age < 12.5" gives us the lowest impurity, 0, because both leaves have no impurity at all. So let's put 0 here. Because 0 is less than 0.25, we will use "Age < 12.5" to split this node into leaves. Note: these are leaves because there is no reason to continue splitting these people into smaller groups. Likewise, the node consisting of the three people who do not love soda is also a leaf, because there is no reason to continue splitting those people into smaller groups either.

Now there is just one last thing we need to do before we are done building this tree: we need to assign output values to each leaf. Generally speaking, the output of a leaf is whatever category has the most values in it. In other words, because the majority of the people in these leaves do not love Cool as Ice, their output value is "Does not love Cool as Ice," and because the majority of the people in this leaf love Cool as Ice, its output value is "Loves Cool as Ice." Hooray, we finished building a tree from this data! DOUBLE BAM!!!
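Here is a sketch of the numeric-threshold search described above. It reuses the `data` list and the `gini`/`weighted_gini` helpers defined in the two earlier snippets, so run those first; the helper names below are again my own.

```python
def candidate_thresholds(ages):
    """Average of each pair of adjacent ages, after sorting."""
    s = sorted(ages)
    return [(a + b) / 2 for a, b in zip(s, s[1:])]

def gini_for_threshold(rows, threshold):
    """Split rows on Age < threshold and return the weighted Gini impurity."""
    left = [r for r in rows if r[2] < threshold]
    right = [r for r in rows if r[2] >= threshold]

    def counts(group):
        yes = sum(1 for r in group if r[3])
        return (yes, len(group) - yes)

    return weighted_gini(counts(left), counts(right))

for t in candidate_thresholds([row[2] for row in data]):
    print(f"Age < {t}: {gini_for_threshold(data, t):.3f}")
# Age < 9.5 gives 0.429; Age < 15 and Age < 44 tie for the lowest, 0.343
```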
Now, if someone new comes along and we want to predict whether they will love Cool as Ice, we run their data down our tree. Because they love soda, they go to the left, and because they are 15, so "Age < 12.5" is false, they end up in this leaf, and we predict that they will love Cool as Ice. TRIPLE BAM!!!

Okay, now that we understand the main ideas of how to build and use classification trees, let's discuss one technical detail. Remember, when we built this tree, only one person in the original dataset made it to this leaf. Because so few people made it here, it's hard to have confidence that the tree will do a great job making predictions with future data, and it is possible that we have overfit the data. Note: if the term "overfit" is new to you, don't worry; instead, check out the StatQuest on bias and variance in machine learning.

Regardless, in practice there are two main ways to deal with this problem. One method is called pruning, and there's a whole StatQuest dedicated to it, so check it out. Alternatively, we can put limits on how trees grow, for example by requiring three or more people per leaf. Now we end up with an impure leaf, but also a better sense of the accuracy of our prediction, because we know that only 75 percent of the people in the leaf love Cool as Ice. Note: even when a leaf is impure, we still need an output value to make a classification, and since most of the people in this leaf love Cool as Ice, that will be the output value. Also note: when we build a tree, we don't know in advance whether it is better to require three people per leaf or some other number, so we test different values with something called cross validation and pick the one that works best. And if you don't know what cross validation is, check out the Quest. BAM!

Now it's time for some Shameless Self-Promotion! If you want to review statistics and machine learning offline, check out the StatQuest study guides at statquest.org. There's something for everyone!

Hooray, we've made it to the end of another exciting StatQuest! If you liked this StatQuest and want to see more, please subscribe. And if you want to support StatQuest, consider contributing to my Patreon campaign, becoming a channel member, buying one or two of my original songs, or a t-shirt or a hoodie, or just donate. The links are in the description below. Alright, until next time, Quest on!
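The video itself shows no code, but for readers who want to try the closing ideas, here is a purely illustrative sketch of how Gini splitting, a minimum leaf size, and cross validation map onto scikit-learn's DecisionTreeClassifier, using the same reconstructed seven-person dataset as the earlier snippets.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Columns: loves_popcorn, loves_soda, age (reconstructed dataset).
X = np.array([[1, 1, 7], [1, 0, 12], [0, 1, 18], [0, 1, 35],
              [1, 1, 38], [1, 0, 50], [0, 0, 83]])
y = np.array([0, 0, 1, 1, 1, 0, 0])  # 1 = loves Cool as Ice

# Try a few minimum-leaf sizes and compare cross-validated accuracy,
# keeping the value that works best.
for min_leaf in (1, 2, 3):
    tree = DecisionTreeClassifier(criterion="gini", min_samples_leaf=min_leaf)
    scores = cross_val_score(tree, X, y, cv=3)  # tiny data, so only 3 folds
    print(min_leaf, scores.mean())
```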
Info
Channel: StatQuest with Josh Starmer
Views: 52,087
Rating: 4.969697 out of 5
Keywords: Josh Starmer, StatQuest, Machine Learning, Statistics, Data Science
Id: _L39rN6gz7Y
Length: 18min 8sec (1088 seconds)
Published: Sun Apr 25 2021