Regression Trees, Clearly Explained!!!

Video Statistics and Information

Captions
Regression tree is for you and for me... StatQuest! Hello, I'm Josh Starmer, and welcome to StatQuest. Today we're going to talk about regression trees, and they're gonna be clearly explained.

This StatQuest assumes you are already familiar with the trade-off that plagues all of machine learning, the bias-variance tradeoff, and the basic ideas behind decision trees and regression. If not, check out the Quests; the links are in the description below.

Now imagine we developed a new drug to cure the common cold. However, we don't know the optimal dosage to give patients, so we do a clinical trial with different dosages and measure how effective each dosage is. The data looked like this, and in general, the higher the dose, the more effective the drug. Then we could easily fit a line to the data, and if someone told us they were taking a 27 milligram dose, we could use the line to predict that a 27 milligram dose should be 62% effective.

However, what if the data looked like this: low dosages are not effective, moderate dosages work really well, somewhat higher dosages work at about 50% effectiveness, and high dosages are not effective at all. In this case, fitting a straight line to the data will not be very useful. For example, if someone told us they were taking a 20 milligram dose, then we would predict that a 20 milligram dose should be 45% effective, even though the observed data say it should be 100% effective. So we need to use something other than a straight line to make predictions. One option is to use a regression tree.

Regression trees are a type of decision tree. In a regression tree, each leaf represents a numeric value. In contrast, classification trees have True or False in their leaves, or some other discrete category.

With this regression tree, we start by asking if the dosage is less than 14.5. If so, then we are talking about these six observations in the training data, and the average drug effectiveness for these six observations is 4.2%, so the tree uses the average value, 4.2%, as its prediction for people with dosages less than 14.5. On the other hand, if the dosage is greater than or equal to 14.5 and greater than or equal to 29, then we are talking about these four observations in the training data set, and the average drug effectiveness for these four observations is 2.5%, so the tree uses the average value, 2.5%, as its prediction for people with dosages greater than or equal to 29. Now, if the dosage is greater than or equal to 14.5, less than 29, and greater than or equal to 23.5, then we are talking about these five observations in the training data set, and the average drug effectiveness for these five observations is 52.8%, so the tree uses the average value, 52.8%, as its prediction for people with dosages between 23.5 and 29. Lastly, if the dosage is greater than or equal to 14.5, less than 29, and less than 23.5, then we are talking about these four observations in the training data set, and the average drug effectiveness for these four observations is 100%, so the tree uses the average value, 100%, as its prediction for people with dosages between 14.5 and 23.5.

Since each leaf corresponds to the average drug effectiveness in a different cluster of observations, the tree does a better job reflecting the data than the straight line.
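As a minimal illustration, the tree described above can be sketched as a plain Python function; the thresholds (14.5, 29, 23.5) and leaf averages (4.2%, 2.5%, 52.8%, 100%) come straight from the example, and the function name is just for illustration.

    def predict_effectiveness(dosage):
        """Predict drug effectiveness (%) for a dosage (mg), following the tree above."""
        if dosage < 14.5:
            return 4.2    # average of the 6 observations with dosage < 14.5
        if dosage >= 29:
            return 2.5    # average of the 4 observations with dosage >= 29
        if dosage >= 23.5:
            return 52.8   # average of the 5 observations between 23.5 and 29
        return 100.0      # average of the 4 observations between 14.5 and 23.5

    print(predict_effectiveness(27))  # 52.8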
At this point you might be thinking, the regression tree is cool, but I can also predict drug effectiveness just by looking at the graph. For example, if someone said they were taking a 27 milligram dose, then just by looking at the graph I can tell that the drug will be about 50% effective. So why make a big deal about the regression tree?

When the data are super simple and we are only using one predictor, dosage, to predict drug effectiveness, making predictions by eye isn't terrible. But when we have three or more predictors, like dosage, age, and sex, to predict drug effectiveness, drawing a graph is very difficult, if not impossible. In contrast, a regression tree easily accommodates the additional predictors. For example, if we wanted to predict the drug effectiveness for this patient, we would start by asking if they are older than 50. Since they are not over 50, we follow the branch on the right and ask if their dosage is greater than or equal to 29. Since their dosage is not greater than or equal to 29, we follow the branch on the right and ask if they are female. Since they are female, we follow the branch on the left and predict that the drug will be 100% effective, and that's not too far off from the truth, 98%.

Okay, now that we know that regression trees can easily handle complicated data, let's go back to the original data with just one predictor, dosage, and talk about how to build this regression tree from scratch. Since regression trees are built from the top down, the first thing we do is figure out why we start by asking if dosage is less than 14.5.

Going back to the graph of the data, let's focus on the two observations with the smallest dosages. Their average dosage is 3, and that corresponds to this dotted red line. Now we can build a very simple tree that splits the observations into two groups based on whether or not dosage is less than 3. The point on the far left is the only one with dosage less than 3, and the average drug effectiveness for that one point is 0, so we put 0 in the leaf on the left side, for when dosage is less than 3. All of the other points have dosages greater than or equal to 3, and the average drug effectiveness for all of the points with dosages greater than or equal to 3 is 38.8, so we put 38.8 in the leaf on the right side, for when dosage is greater than or equal to 3.

The values in each leaf are the predictions that this simple tree will make for drug effectiveness. For example, this point on the far left has dosage less than 3, and the tree predicts that the drug effectiveness will be 0. The prediction for this point, drug effectiveness = 0, is pretty good, since it is the same as the observed value. In contrast, for this point, which has dosage greater than 3, the tree predicts that the drug effectiveness will be 38.8, and that prediction is not very good, since the observed drug effectiveness is 100%. Note: we can visualize how bad the prediction is by drawing a dotted line between the observed and predicted values. In other words, the dotted line is a residual. For each point in the data we can draw its residual, the difference between the observed and predicted values, and we can use the residuals to quantify the quality of these predictions.
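Here is a minimal sketch of how one candidate split could be scored with the sum of squared residuals; the dosage and effectiveness values below are made up for illustration, not the actual training data from the example.

    # Hypothetical training data: dosage (mg) and observed effectiveness (%)
    dosages       = [1, 5, 7, 10, 13, 16, 18, 20, 22, 25, 27, 30, 33]
    effectiveness = [0, 0, 0,  5, 20, 100, 100, 100, 100, 55, 50, 5, 0]

    def sum_of_squared_residuals(threshold):
        """Score the split 'dosage < threshold': each side predicts its own mean,
        and we add up the squared differences between observed and predicted."""
        left  = [e for d, e in zip(dosages, effectiveness) if d < threshold]
        right = [e for d, e in zip(dosages, effectiveness) if d >= threshold]
        ssr = 0.0
        for group in (left, right):
            if group:
                mean = sum(group) / len(group)
                ssr += sum((e - mean) ** 2 for e in group)
        return ssr

    print(sum_of_squared_residuals(3))  # score for the first candidate threshold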
Starting with the only point with dosage less than 3, we calculate the difference between its observed drug effectiveness, 0, and the predicted drug effectiveness, 0, and then square the difference. In other words, this is the squared residual for the first point. Now we add the squared residuals for the remaining points with dosages greater than or equal to 3. In other words, for this point we calculate the difference between the observed and predicted values, square it, and then add it to the first term. Then we do the same thing for the next point, and the next point, and the rest of the points, until we have added squared residuals for every point. Thus, to evaluate the predictions made when the threshold is dosage less than 3, we add up the squared residuals for every point and get 27,468.5.

Note: we can plot the sum of squared residuals on this graph. The y-axis corresponds to the sum of squared residuals, and the x-axis corresponds to dosage thresholds. In this case the dosage threshold was 3. But if we focus on the next two points in the graph and calculate their average dosage, which is 5, then we can use dosage less than 5 as a new threshold. Using dosage less than 5 gives us new predictions and new residuals, and that means we can add a new sum of squared residuals to our graph. In this case, the new threshold, dosage less than 5, results in a smaller sum of squared residuals, and that means using dosage less than 5 as the threshold resulted in better predictions overall. BAM!

Now let's focus on the next two points, calculate their average, which is 7, and use dosage less than 7 as a new threshold. Again, the new threshold gives us new predictions, new residuals, and a new sum of squared residuals. Now shift the threshold over to the average dosage for the next two points and add a new sum of squared residuals to the graph, and we repeat until we have calculated the sum of squared residuals for all of the remaining thresholds. BAM!

Now we can see the sum of squared residuals for all of the thresholds, and dosage less than 14.5 has the smallest sum of squared residuals, so dosage less than 14.5 will be the root of the tree. In summary, we split the data into two groups by finding the threshold that gave us the smallest sum of squared residuals. BAM!
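Continuing that sketch (and reusing the hypothetical dosages, effectiveness, and sum_of_squared_residuals defined above), the threshold search might look like this: try the average of each pair of adjacent dosages and keep the threshold with the smallest sum of squared residuals.

    # Candidate thresholds: the average of each pair of adjacent dosages.
    sorted_dosages = sorted(dosages)
    candidates = [(a + b) / 2 for a, b in zip(sorted_dosages, sorted_dosages[1:])]

    # Score every candidate and keep the one with the smallest sum of squared residuals.
    scores = {t: sum_of_squared_residuals(t) for t in candidates}
    best_threshold = min(scores, key=scores.get)

    print(best_threshold, scores[best_threshold])  # this split becomes the root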
Now let's focus on the six observations with dosage less than 14.5 that ended up in the node to the left of the root. In theory, we could split these six observations into two smaller groups, just like we did before, by calculating the sum of squared residuals for different thresholds and choosing the threshold with the lowest sum of squared residuals. Note: this observation has dosage less than 14.5 but does not have dosage less than 11.5, so it is the only observation to end up in this node, and since we can't split a single observation into two groups, we will call this node a leaf. However, since the remaining five observations go to the other node, we can split them once more. Now we have divided the observations with dosage less than 14.5 into three separate groups. These two leaves only contain one observation each and cannot be split into smaller groups. In contrast, this leaf contains four observations. That said, those four observations all have the same drug effectiveness, so we don't need to split them into smaller groups. So we are done splitting the observations with dosage less than 14.5 into smaller groups.

Note: the predictions that this tree makes for all observations with dosage less than 14.5 are perfect. In other words, this observation has 20% drug effectiveness, and the tree predicts 20% drug effectiveness, so the observed and predicted values are the same. This observation has 5% drug effectiveness, and that's exactly what the tree predicts. These four observations all have 0% drug effectiveness, and that's exactly what the tree predicts. Is that awesome? No. When a model fits the training data perfectly, it probably means it is overfit and will not perform well with new data. In machine learning lingo, the model has no bias, but potentially large variance. Bummer.

Is there a way to prevent our tree from overfitting the training data? Yes, there are a bunch of techniques. The simplest is to only split observations when there are more than some minimum number. Typically, the minimum number of observations to allow for a split is 20. However, since this example doesn't have many observations, I set the minimum to 7. In other words, since there are only six observations with dosage less than 14.5, we will not split the observations in this node. Instead, this node will become a leaf, and the output will be the average drug effectiveness for the six observations with dosage less than 14.5, 4.2%. BAM!

Now we need to figure out what to do with the remaining 13 observations, with dosages greater than or equal to 14.5. Since we have more than 7 observations on the right side, we can split them into two groups, and we do that by finding the threshold that gives us the smallest sum of squared residuals. Note: there are only four observations with dosage greater than or equal to 29, thus there are only four observations in this node, so we will make this a leaf, because it contains fewer than seven observations, and the output will be the average drug effectiveness for these four observations, 2.5%. Now we need to figure out what to do with the nine observations with dosages between 14.5 and 29. Since we have more than seven observations, we can split them into two groups by finding the threshold that gives us the minimum sum of squared residuals. Note: since there are fewer than seven observations in each of these two groups, this is the last split, because none of the leaves have more than seven observations in them. So we use the average drug effectiveness for the observations with dosages between 14.5 and 23.5, 100%, as the output for the leaf on the right, and we use the average drug effectiveness for observations with dosages between 23.5 and 29, 52.8%, as the output for the leaf on the left. Since no leaf has more than seven observations in it, we're done building the tree, and each leaf corresponds to the average drug effectiveness from a different cluster of observations. Double BAM!
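Putting the pieces together, a minimal sketch of the whole recursive splitting procedure, with the "don't split fewer than 7 observations" rule, could look like the following; build_tree and its node format are illustrative names, not code from the video.

    def build_tree(points, min_to_split=7):
        """Recursively split a list of (dosage, effectiveness) pairs.
        A node with fewer than min_to_split observations becomes a leaf whose
        prediction is the average effectiveness of its observations."""
        values = [e for _, e in points]
        mean = sum(values) / len(values)
        if len(points) < min_to_split:
            return {"leaf": mean}

        # Candidate thresholds: the average of each pair of adjacent distinct dosages.
        ds = sorted(set(d for d, _ in points))
        best = None  # (sum of squared residuals, threshold)
        for t in ((a + b) / 2 for a, b in zip(ds, ds[1:])):
            ssr = 0.0
            for side in ([e for d, e in points if d < t], [e for d, e in points if d >= t]):
                m = sum(side) / len(side)
                ssr += sum((e - m) ** 2 for e in side)
            if best is None or ssr < best[0]:
                best = (ssr, t)

        if best is None:  # no usable threshold (all dosages identical)
            return {"leaf": mean}

        threshold = best[1]
        return {
            "threshold": threshold,
            "left": build_tree([p for p in points if p[0] < threshold], min_to_split),
            "right": build_tree([p for p in points if p[0] >= threshold], min_to_split),
        }

    # e.g., tree = build_tree(list(zip(dosages, effectiveness)))  # hypothetical data from above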
So far we have built a tree using a single predictor, dosage, to predict drug effectiveness. Now let's talk about how to build a tree to predict drug effectiveness using a bunch of predictors. Just like before, we will start by using dosage to predict drug effectiveness. Thus, just like before, we try different thresholds for dosage, calculate the sum of squared residuals at each step, and pick the threshold that gives us the minimum sum of squared residuals. The best threshold becomes a candidate for the root. Now we focus on using age to predict drug effectiveness. Just like with dosage, we try different thresholds for age, calculate the sum of squared residuals at each step, and pick the one that gives us the minimum sum of squared residuals. The best threshold becomes another candidate for the root. Now we focus on using sex to predict drug effectiveness. With sex, there is only one threshold to try, so we use that threshold to calculate the sum of squared residuals, and that becomes another candidate for the root. Now we compare the sums of squared residuals (SSRs) for each candidate and pick the candidate with the lowest value. Since Age > 50 had the lowest sum of squared residuals, it becomes the root of the tree. Then we grow the tree just like before, except now we compare the lowest sum of squared residuals from each predictor, and just like before, when a leaf has fewer than a minimum number of observations, which is usually 20, but we are using 7, we stop trying to divide it. Triple BAM!

In summary, regression trees are a type of decision tree. In a regression tree, each leaf represents a numeric value. We determine how to divide the observations by trying different thresholds and calculating the sum of squared residuals at each step. The threshold with the smallest sum of squared residuals becomes a candidate for the root of the tree. If we have more than one predictor, we find the optimal threshold for each one and pick the candidate with the smallest sum of squared residuals to be the root. When we have fewer than some minimum number of observations in a node, 7 in this example, but more commonly 20, then that node becomes a leaf. Otherwise, we repeat the process to split the remaining observations, until we can no longer split the observations into smaller groups, and then we are done.

Hooray! We've made it to the end of another exciting StatQuest. If you liked this StatQuest and want to see more, please subscribe. And if you want to support StatQuest, consider contributing to my Patreon campaign, buying one or two of my original songs, or a t-shirt or a hoodie, or just donate. The links are in the description below. Alright, until next time, Quest on!
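For reference, this is essentially the procedure implemented by off-the-shelf tools; a minimal sketch using scikit-learn's DecisionTreeRegressor on hypothetical data, with min_samples_split=7 to mirror the example's minimum-observations rule, might look like this.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    # Hypothetical single-predictor data: dosage (mg) vs. effectiveness (%).
    X = np.array([[1], [5], [7], [10], [13], [16], [18], [20], [22], [25], [27], [30], [33]])
    y = np.array([0, 0, 0, 5, 20, 100, 100, 100, 100, 55, 50, 5, 0])

    # min_samples_split=7 mirrors the "don't split nodes with fewer than 7
    # observations" rule from the example (the library default is 2).
    tree = DecisionTreeRegressor(min_samples_split=7).fit(X, y)
    print(tree.predict([[27]]))  # predicted effectiveness for a 27 mg dose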
Info
Channel: StatQuest with Josh Starmer
Views: 222,909
Rating: 4.9745421 out of 5
Keywords: StatQuest, Josh Starmer, Regression Trees, Machine Learning, Decision Trees, Data Science, Gradient Boost, CART
Id: g9c66TUylZ4
Length: 22min 33sec (1353 seconds)
Published: Mon Aug 19 2019