JASP - Multiple Linear Regression

Captions
Hey guys, and welcome back to our series on using JASP. This is still part 11, but it's part 2 of part 11: in the last video we did simple linear regression, and now we're moving on to multiple linear regression. If you need more background on how multiple regression compares to simple regression, the beginning of that last video explains it.

We're still trying to predict VO2 max, which is an indicator of aerobic fitness and health. Normally that's a hard thing to measure directly, but instead we can estimate it from things a smartwatch might record, rather than sitting on a treadmill until you fall over. We're going to base the prediction on age, weight, and heart rate.

So let's pull up the data in JASP. Click the hamburger icon, hit Open, then Computer, and find where you have the files saved. This is the regression data set for this course. It has a bunch of things we can predict from: a case number (really just a participant number), age, weight, heart rate, and gender, predicting VO2 max.

Now let's go through all the pieces we need. The first thing, as usual, is to check assumptions. The assumptions here are that the independent variables are continuous or nominal — we can actually do this with variables that have multiple categories, but in our example we're mostly working with continuous data — and that the dependent variable is scale. So the IVs can be either continuous or nominal (categories, or a ratio/interval scale), but the DV has to be continuous in some form, at least interval or ratio. The only section in our series where that's not true is chi-square, and that's coming up next. Pretty much everything you normally do in an undergraduate statistics course is parametric statistics, which requires the DV to be continuous.

There's kind of a missing header here, because you don't check "continuous, interval, or ratio" by running a test — you just know it from your variables. To check the rest of the assumptions, what we want to do is run the multiple regression procedure itself, because it gives us the residuals as output, and the residuals let us look at normality, homoscedasticity, and so on.

So click Regression, then Linear Regression. The dependent variable here is VO2 max; move it over. Next, put the independent variables — age, weight, and heart rate — in the Covariates box. I wish it said "independent variables," but covariates really just means all the things that might vary along with the dependent variable when predicting it.

There are unfortunately a bunch of assumptions for regression, so we'll break this down into three parts. Part one deals with independence of cases, linearity, homoscedasticity, and multicollinearity — with the Oxford comma, because it's my favorite, and note those last two are different things. Part two deals with outliers, and part three with normality. You can kind of get all of these at once, but we'll walk through them one at a time. So first: do I have independence of observations?
We'll ask for the outlier information at the same time. Click on Statistics, then check Durbin-Watson and Casewise Diagnostics. I'll warn you that this has changed a little bit in newer versions of JASP: these options now sit under Residuals and look slightly different. The Casewise Diagnostics option is now "standardized residual," which gives approximately the same answer as in these guides before; there are just no extra options.

Asking for Durbin-Watson adds it to the Model Summary box, which has also been updated — the number is the same, the box just looks different. The Durbin-Watson statistic here is 2.29. If you remember from the last video, it ranges from 0 to 4, and we want values close to 2, which means there's no correlation in the residuals, sometimes called autocorrelation. The idea is that my data should be mine and should not be related to your data — and remember, we measure relatedness with correlation. Our value is very close to 2, so we probably have independence of residuals. There's also a newly added p-value on this statistic, and it's greater than our alpha of 0.05, so it's probably okay. Remember, we don't want our assumption checks to be significant, because that would be significantly bad.

Generally with independence, though, even though there's a test, a lot of people don't run it, because you just have to know from your research design that you did this correctly: when people took the study they weren't helping each other, or when participants came into the lab they each filled out their own survey — that sort of thing.
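If you're curious where that Durbin-Watson number comes from, here's a minimal sketch of the formula in Python. JASP computes this for you; the residuals below are synthetic, not the course data — white noise to show a "good" value near 2, and a random walk to show what autocorrelated residuals do.

```python
import numpy as np

def durbin_watson(residuals):
    # DW = sum of squared successive differences / sum of squared residuals.
    # Values near 2 mean no autocorrelation; near 0 or 4 means trouble.
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(0)
white_noise = rng.normal(size=1000)    # independent residuals -> DW near 2
random_walk = np.cumsum(white_noise)   # strongly autocorrelated -> DW near 0

print(round(durbin_watson(white_noise), 2))
print(round(durbin_watson(random_walk), 2))
```

The statistic is roughly 2(1 − r), where r is the correlation between each residual and the previous one, which is why 2 corresponds to "no correlation."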
A lot of the time we just know, through our research design, that we've made participants independent. If you do have correlated errors, you have to use a different type of analysis.

Next: is there a linear relationship between the variables? For this, come down to Plots. (I think in our old guide this was under Assumption Checks, but now it's under Plots — I'll update that, but it's the same pictures we're interested in.) We want residuals versus predicted, the residuals histogram, and the Q-Q plot, so check all of those.

If our residuals versus predicted plot makes a sort of square cloud, then we probably have linearity. We don't want a rainbow shape or any kind of curve — if it looks like a chair or a rainbow, that's bad. Last time, with correlation and simple regression, we just looked at X versus Y; now we have multiple Xs versus Y, so this plot has to cover all of them at once. We'll use this plot a couple of times, because it also checks homoscedasticity. Here, linearity looks okay.

So we have independence and linearity — what about homoscedasticity? It's the same picture we were just looking at, just with a slightly different axis in my output. What we want to see is a sort of rectangular relationship: no funnels, megaphones, or bumps in the data. I always joke: no rainbows for linearity,
no cheerleader megaphones for homoscedasticity — we don't want a weird spread in the data. That's what I do to help myself remember. What we want is for most of the dots to fall in a roughly equal spread, so that if we draw a line around the data we get something like a rectangle (not a square — I'll get that right one day). Drawing that boundary by hand in Word is a struggle — I'm the math person, not an artiste — but the idea is that the cloud of points should stay inside a rectangular band. Ours is pretty close: not perfect, we're missing some dots on one side, but close. My joke here is always "be nice to these plots."

So this is mostly homoscedastic. Homoscedasticity, as a reminder, means the spread of the variance is equal all the way along: these are residuals against predicted values, and the spread should be the same all the way across the graph. You don't want triangle shapes — that would be bad. There are ways to fix it, though not a whole lot of good ones; mostly it requires switching to a different type of regression. I used to have students draw these rectangles around the plot: if you get a rectangular shape you're probably okay, and if you get a Dorito-chip shape, that's not good.
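To connect the picture to the numbers: the residuals-versus-predicted values that JASP plots can be computed by hand, and a rough numeric version of the "rectangle check" is to compare the residual spread in the lower versus upper half of the predicted values. This is a sketch on synthetic stand-in data, not the course file:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
X = rng.normal(size=(n, 3))   # stand-ins for age, weight, heart rate
y = 40 - 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=3.0, size=n)

# Fit ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
predicted = A @ coef
residuals = y - predicted     # these two arrays are the plot's axes

# "Rectangle check": residual spread should be similar across predicted values.
split = np.median(predicted)
spread_low = residuals[predicted < split].std()
spread_high = residuals[predicted >= split].std()
print(round(spread_low, 2), round(spread_high, 2))
```

If the two spreads were very different, you'd be looking at a megaphone, not a rectangle.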
So what's left? We've checked independence and linearity, and that last one was homoscedasticity; we haven't done normality yet. But first let's think about multicollinearity. Multicollinearity occurs when you have two or more independent variables that are highly correlated. The assumption is actually called additivity: we want each variable to add something to the equation, and if it doesn't, the variables are multicollinear — essentially, two variables are measuring basically the same thing, so why use both of them? Just use one.

We can look at the correlation coefficients between the IVs only — we don't care about the correlations between the IVs and the DV here; that's part of the actual regression, that's the question the regression answers. We can also look at what are called tolerance and VIF values.

For the correlations, go to Regression, then Correlation, and move over only your IVs: age, weight, and heart rate. Don't confuse yourself by including the DV. Check "display pairwise table" — we talked about this in the correlation section; it's an easier way to read the output than the triangle-shaped matrix. Significance is maybe not so important here; we just want to look at the actual numbers, and what we don't want to see is really high values.
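The pairwise-correlation screen among the IVs is easy to reproduce outside JASP if you want to sanity-check it. A sketch with made-up stand-ins for age, weight, and heart rate (numbers chosen so one pair is mildly related, nothing above the 0.7 cutoff):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
age = rng.normal(45, 10, size=n)
weight = 60 + 0.3 * age + rng.normal(0, 8, size=n)   # mildly related to age
heart_rate = rng.normal(70, 9, size=n)               # unrelated to the others

ivs = np.column_stack([age, weight, heart_rate])
corr = np.corrcoef(ivs, rowvar=False)

# The pairwise table is just the upper triangle of this matrix.
pairs = corr[np.triu_indices(3, k=1)]
print(np.round(pairs, 2))
print("any |r| > 0.7:", bool(np.any(np.abs(pairs) > 0.7)))
```

Note the DV is deliberately left out, matching the advice in the video.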
We can check that none of the independent variables have correlations greater than 0.7, and we can see from the table that none of them do. I'd actually suggest not running the DV in this table at all — take it out, because I think it confuses students: you look at all the numbers and forget not to look at the DV. So: move over only the IVs, and click display pairwise table.

Now, to get tolerance and VIF values, we'll still use the regression we already set up: click back to it, and under Collinearity Diagnostics these get added to the Coefficients table. It also adds some stuff further down that we're not really interested in; we'll focus on this piece. So what are these scores? Tolerance and VIF are kind of a seesaw of each other — they're reciprocals (VIF = 1 / tolerance), so you only need to look at one of them; pick whichever your instructor likes. We want tolerance values greater than 0.1, which corresponds to a VIF less than 10. If tolerance is less than 0.1, or equivalently VIF is greater than 10, that's bad.

Here our tolerance values are all really high, and if you check the correlation table — age to weight, age to heart rate — the IV correlations are all low, so tolerance is high. They're not an exact arithmetic seesaw; I can't do simple subtraction between them, but the idea is that if your predictors are highly correlated with each other, tolerance will be low. The way I remember tolerance is "how much do you like your siblings?" You're related to each other, after all.
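The tolerance/VIF definition is simple enough to sketch directly: regress each predictor on the other predictors, and the leftover unexplained variance is its tolerance. JASP does this internally; the data here are synthetic, with one deliberately near-duplicate pair to show what a violation looks like.

```python
import numpy as np

def tolerance_and_vif(X):
    # For each column j: regress it on the others,
    # tolerance_j = 1 - R_j^2 and VIF_j = 1 / tolerance_j.
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    results = []
    for j in range(k):
        target = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(others, target, rcond=None)
        resid = target - others @ coef
        r_sq = 1.0 - resid.var() / target.var()
        tol = 1.0 - r_sq
        results.append((tol, 1.0 / tol))
    return results

rng = np.random.default_rng(3)
good = rng.normal(size=(100, 3))   # nearly uncorrelated predictors
x = rng.normal(size=100)
bad = np.column_stack([x, x + rng.normal(scale=0.05, size=100)])  # near-duplicates

print([round(v, 2) for _, v in tolerance_and_vif(good)])  # VIFs near 1
print([round(v, 1) for _, v in tolerance_and_vif(bad)])   # very large VIFs
```

With the near-duplicate pair, tolerance collapses toward zero and VIF explodes past 10 — exactly the "take one of them out" situation.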
And so tolerance is how much the variables can put up with each other in the equation. If that value is really low, they can't put up with each other — there's low tolerance for having both of them in the room, or both of them in the equation, at the same time. VIF stands for variance inflation factor, which is how much the variance in the equation is inflated by including both of them. So they're opposites of each other: tolerance is how much the equation can tolerate having both, VIF is how wildly it varies by including both.

All of our tolerance values are greater than 0.1 — the lowest one is 0.979 — so we can be confident we don't have collinearity. If you do have multicollinearity, the best fix is just to take one variable out: look at the two that are very highly correlated and drop one. That's why having both of these tables is useful. Say one of these tolerance numbers were really low — usually two will go low together, because they're the pair causing the problem — then I could come over to the correlation table and see that it's, say, age and weight. (It's not, at the moment, but pretend for me: say we'd included two measures of IQ or something. Take one of them out.)

Alright, do we have any outliers? Probably not, but let's look at our Casewise Diagnostics table. See how it's empty? That's because we don't have any outliers. If we did, it would show a number here indicating the row number of the problem case. The standardized-residual cutoff is still 3 standard deviations; just the look of this output has changed.
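The casewise-diagnostics idea — flag any case whose standardized residual is beyond 3 standard deviations — can be sketched like this. The data are synthetic, with one outlier planted on purpose so the table isn't empty; the standardization here is a simplified version of what JASP reports.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
x = rng.normal(size=n)
y = 40 + 0.5 * x + rng.normal(scale=1.0, size=n)
y[10] += 8.0                           # plant one outlier in row 10

# Fit the regression, then standardize the residuals.
A = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
std_resid = resid / resid.std(ddof=2)  # simple standardization

flagged = np.flatnonzero(np.abs(std_resid) > 3)
print("flagged rows:", flagged.tolist())
```

An empty `flagged` list corresponds to the empty Casewise Diagnostics table in the video.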
Are the residuals normally distributed? This is the last assumption. You can look at the histogram of the standardized residuals and/or the normal Q-Q plot to determine this; I like the histogram the most (it's toward the bottom of the output — we just hadn't scrolled that far). You want the distribution centered over zero and mostly between −2 and +2, and since we've also added a density curve, it should look mostly normal. Ours looks pretty good — it's a pretty picture. The other thing you can do is look at the Q-Q plot; I often use these for linearity too, because if you have curvilinearity you'll see problems here. You want the dots to line up on the line, and be kind to the plot out past ±2, because — these are Z scores, remember — it's very hard to predict scores that far from the mean. It looks pretty good. That's how you can tell it's fake data: everything looks nice.

So normality is good and linearity is good. We did these in a slightly different order than they're listed at the top, so let's go back up and recap — here are the assumptions, and the answers. Are two or more independent variables continuous or nominal? Yes. Is the dependent variable at least interval scale? Yes. Is there a linear relationship? Yes — we looked at the residual scatterplot to make sure there were no rainbows. Were there outliers? No — great, casewise diagnostics was empty. Independence of errors? Yes, using the Durbin-Watson statistic. Homoscedasticity? We looked at that residuals plot and it was fairly rectangular. Are the residuals normally distributed? Yes — that's the last graph we looked at. And we didn't have multicollinearity. I regrouped these at the bottom based on which options we picked, because some go together — checking linearity and checking homoscedasticity use basically the same plot.

That's the part that always takes the longest. Now let's do the fun part, which is actually running the regression.
So, first question: what can I answer with a multiple linear regression? I can think about how well I can predict — I can determine the proportion of variance in the DV that can be explained by my IVs. How good are we at predicting VO2 max with age, weight, and heart rate? I can then use that equation to predict new values, and that's really handy as you get into analytics, because businesses — and scientists — are very interested in that question: how do I predict new values? And I can determine which variable is the most interesting predictor.

I generally lump this into two pieces: the overall equation — how good is my prediction overall? — and the individual coefficients — which one is the best predictor, or do I have several very good ones? It could be that the overall equation is predictive but only one of the variables is carrying the weight. Think of it like a group project: if only one of you is doing the work, you're the coefficient that's working.

First we want to think about whether the regression model is a good fit for the data. We could use the multiple correlation coefficient, big R, but I'm going to suggest R², because it's more common: the percent or proportion of variance in the DV explained by the IVs. For that, we can test the statistical significance of whether that value is more than 0 — because if you predicted none of the variance it would be zero — or look at the precision of the predictions from the model. There are a lot of ways to do this; most people use the R and R² values and then the F statistic with its p-value. Then, to understand whether the individual coefficients are any good, we'll generally look at their t statistics and p-values, and if we think they're important, we can interpret them using their coefficients.

So, how well does our model fit? Can we predict VO2 max?
That first box is called Model Summary for a reason: it's the overall model, and we're generally interested in two numbers here. R is the multiple correlation coefficient — the relationship between the predicted scores and my actual scores — and R² is just that transformed into a proportion of variance accounted for; that's what this whole box explains. Because R is a correlation between predicted Y and actual Y, it runs from 0 to 1: 0 means I cannot predict Y at all, and 1 means I predict Y perfectly (and have probably done something wrong, because that would be very hard). Ours indicates a moderate level of relationship, if we use the usual correlation rules of thumb.

The more popular number, as it says here, is R². I see most people use regular R² and not adjusted R². It's a measure of the proportion of variance explained by the independent variables, over and above the "mean model." The mean model is your best guess if you know nothing: if you know nothing about age, weight, or heart rate, your best guess at predicting people's scores is the mean of Y. You might be pretty good with just that guess, especially if people are all fairly close to the mean — so there's not a lot of variance in the data — but you could probably do better than the mean. And if we can do better, how much better? That's the purpose of R².

(And of course I forgot to turn on Do Not Disturb — I swear it's every third video.)

Our R² here is 0.15: 15% of the variability in VO2 max can be predicted over and above just guessing the mean. That's not bad — that's pretty good. People vary a lot, and there are probably many other reasons people differ in their VO2 max; people are sometimes very hard to predict.
That's generally considered somewhere in the medium range — not mathematically, but in the interpretation of the coefficient, and it depends a bit on which version of R² your instructor uses. Sometimes people prefer adjusted R², because regular R² has been shown to be a little optimistic: it's considered a positively biased estimate, meaning it tends to overestimate slightly, and the adjusted version tries to control for that known overestimation. Either way, R² tells me about what's sometimes called practical significance — how much better than the mean are we doing?

If you want to attach a p-value to that, use the ANOVA section. Yes, an ANOVA — it's the second box down. Remember that ANOVA is a special form of regression, and we're still using the F statistic, just like the tests we did a couple of videos ago: how much are we predicting versus how much error is there? It's still that idea from our stats videos of difference on top over error on the bottom, except here it's "how much better than the mean are we doing?" Our p-value is less than 0.05, which we picked as our Type I error rate, so we'd say this is significant: we're better at predicting than the mean. The null hypothesis for this test is that R — or R², really, since one is just the square of the other — is zero, meaning we can't predict at all; the alternative is that it's greater than zero. And since everything is squared, it can only come out greater than zero, so this is effectively a one-tailed test.
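The three Model Summary / ANOVA numbers — R², adjusted R², and the overall F — all come from the same sums of squares, which is easy to sketch. Synthetic data again (three predictors standing in for age, weight, and heart rate), not the video's output:

```python
import numpy as np

def model_summary(X, y):
    # R^2, adjusted R^2, and the overall F statistic for an OLS fit
    # with an intercept -- the quantities the Model Summary/ANOVA boxes report.
    X, y = np.asarray(X, float), np.asarray(y, float)
    n, k = X.shape
    A = np.column_stack([np.ones(n), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)       # the "mean model" baseline
    r2 = 1 - ss_res / ss_tot
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
    f = (r2 / k) / ((1 - r2) / (n - k - 1))    # F(k, n - k - 1)
    return r2, adj_r2, f

rng = np.random.default_rng(5)
n = 100
X = rng.normal(size=(n, 3))
y = 40 - 0.8 * X[:, 0] - 0.6 * X[:, 1] + rng.normal(scale=2.0, size=n)
r2, adj_r2, f = model_summary(X, y)
print(round(r2, 3), round(adj_r2, 3), round(f, 2))
```

Note that adjusted R² is always a bit below R², which is exactly the "optimism correction" described above.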
We can't really do a two-tailed test here — since it's squared, we're just asking whether it's bigger than zero. We can write this up just like we wrote our ANOVAs before, and we want to plug in that R² value at the end. A quick reminder of what everything means. F says we're using an F test, or an F statistic. 3 is the regression degrees of freedom. You'll see four lines in the coefficients table — the intercept plus age, weight, and heart rate — and the regression degrees of freedom works out to 3 because it counts the actual predictors: four coefficient lines minus one for the intercept. People sometimes get confused here, but it comes out the same because we tend to ignore the intercept — it's not really a predictor, though it is an important part of the unstandardized equation. 96 is our residual (error) degrees of freedom, based on the sample size minus the parameters we estimated. The F score here, about 5.5, is the ratio of mean squares — our predictiveness from the regression versus the error — and the p-value is based on that F statistic. Overall, our model appears to be statistically significant, and it may be practically useful.

So here's the equation: VO2 max = b₀ (the intercept) + b₁·age + b₂·weight + b₃·heart rate. That's really awesome, because we could plug in the numbers and use it as a predictive equation — I could guess someone's VO2 max given all this information. A lot of apps and watches will do this; mine provides this estimate because I've entered most of this information and it measures my heart rate.
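Here's what the plug-and-chug looks like as code. The coefficient values below are hypothetical placeholders, not the ones from the video's output (the video only quotes the age coefficient's rough size); swap in the B column from your own Coefficients table.

```python
# Hypothetical unstandardized coefficients -- replace with your own output.
b0 = 90.0        # intercept (made up for illustration)
b_age = -0.19    # per year of age (roughly the size quoted in the video)
b_weight = -0.19 # per unit of weight (made up)
b_hr = -0.05     # per bpm (made up; heart rate wasn't significant anyway)

def predict_vo2max(age, weight, heart_rate):
    # VO2 max = b0 + b1*age + b2*weight + b3*heart_rate
    return b0 + b_age * age + b_weight * weight + b_hr * heart_rate

print(round(predict_vo2max(18, 135, 70), 2))
```

This is exactly the kind of arithmetic a watch does when it estimates your VO2 max from stored profile data plus measured heart rate.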
You might use some other statistics as well, but from here, how do we plug and chug? These are the B values we're interested in. The table says "Unstandardized," meaning the unstandardized coefficient B; the standardized coefficient is beta. (There's a blank in the standardized column for the intercept, because when you standardize, you take out the mean.) Then we have the statistic on whether each of these numbers is different from zero.

Now we want to know which coefficient is carrying the weight: are they all predictive, or only one or two of them? There's a lot of descriptive text here you can hang out and read, but mostly you'll come over to the p-values and see which ones are significant. It does not appear that heart rate is doing us any good. A lot of people will focus on the B value itself, but remember that B is on the scale of the raw data: heart rate is a lot more variable than age, so it may have a smaller coefficient just because it can increase in more steps. The unstandardized coefficients are really good for interpretation, but don't compare them directly to each other — I can't say this one is three times stronger than that one, because they're on different scales. The standardized coefficients let me think about them in standard-deviation form — like z-score versions — and those I can compare directly.

So: that first predictor, heart rate, is not important — it didn't help us in predicting. The second one, age, did: for every one-unit increase in age — every year of life — you get a 0.19 decrease in VO2 max. As we get older, our VO2 max capability decreases. Weight is also significant.
Age and weight have about the same unstandardized coefficient here: for every extra pound, we get a decrease in VO2 max of approximately the same amount. But it's really interesting to look at the standardized coefficients, because to me that says weight is doing more for the equation: it has the larger standardized coefficient — call it the z-scored B, the beta — which implies it's a stronger predictor. Remember, we've talked about how p-values don't say one thing is "more significant" than another; instead we can use this column to help us judge which is the stronger predictor. Age is still important according to our p-value rules, but weight is doing a little more of the work in predicting. Again, this is why you can't compare the unstandardized Bs directly: weight and age are on very different scales.

All of that is what's summarized down here. So say you have an assignment question like "what would the VO2 max be for someone who's 18 and 135 pounds?" — that's how you do it: fill in those B values and just calculate the math. Sweet plug and chug. (I always wonder whether there's a rhyming version of that phrase in other languages.) The idea is that we fill in a participant's age, weight, and heart rate to estimate their VO2 max.

One other thing you can add, which isn't in this guide but which I think is important and your instructor might ask for, are the part and partial correlations. I'll add those real quick — there's an option for them in the regression's coefficients settings — since I hinted earlier at how we compare which coefficient is more useful. Where did the part and partials go? Oh — JASP puts them in their own new box.
In SPSS, by contrast, they sometimes get stuck onto the end of the coefficients table, which is where I expected them; here in JASP they're in their own fancy box. So: part and partial correlations are measures of effect size for each coefficient — they're tied to that coefficients table. They're sometimes covered in courses and sometimes not, so if your instructor hasn't talked about them at all, you can ignore this bit. The idea is: how much variance does each predictor account for, controlling for all the other variables? There's a mathematical distinction between them: the thing labeled "part" should really be called a semipartial — that's how it's generally referred to in books — and partials are usually larger than parts (unless they're all zero). I personally like to look at partials, because they're easier to interpret: take out all the other variance — due to weight, due to heart rate, due to the overlap — and ask how much age accounts for on its own.

Remember to square them if you want to think in terms of variance. These are correlations, so for age, 0.2 squared is 0.04 — about 4% of the variance in VO2 max is accounted for by age, controlling for everything else. These numbers will also roughly mirror the standardized coefficients — they're not the same thing at all, but they give you the same feeling: heart rate was lowest, then age, then weight (I don't know why they're in a different order here). It's not perfect and not always true, but notice how they mirror each other. Weight here is 0.33, which squared is about 11% of the variance.
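The part/partial distinction is easier to see in code than in words: both start by "residualizing" the predictor on the other predictors; the partial additionally residualizes y. A sketch on synthetic data (JASP computes these for you):

```python
import numpy as np

def part_and_partial(X, y, j):
    # Semipartial ("part") and partial correlation of predictor j with y,
    # controlling for the remaining predictors.
    X, y = np.asarray(X, float), np.asarray(y, float)
    n = len(y)

    def residualize(target, controls):
        A = np.column_stack([np.ones(n), controls])
        coef, *_ = np.linalg.lstsq(A, target, rcond=None)
        return target - A @ coef

    others = np.delete(X, j, axis=1)
    xj_res = residualize(X[:, j], others)    # predictor j, cleaned of the others
    part = np.corrcoef(y, xj_res)[0, 1]      # semipartial: raw y vs cleaned x_j
    y_res = residualize(y, others)           # y also cleaned of the others
    partial = np.corrcoef(y_res, xj_res)[0, 1]
    return part, partial

rng = np.random.default_rng(6)
n = 200
X = rng.normal(size=(n, 3))
y = -0.5 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=1.0, size=n)
for j in range(3):
    part, partial = part_and_partial(X, y, j)
    print(j, round(part, 3), round(partial, 3))
```

As the video says, the partial is always at least as large in magnitude as the part, and squaring either gives a "variance accounted for, controlling for the rest" number.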
variance. Now, those do not add up to R squared. It's very tempting to think, oh, I'll just add all these up and it will equal R squared, but that's not how this works: it controls for the overlap. Sometimes age and weight overlap, right? As we get older our metabolism slows down and maybe we gain weight. Any of that overlap is not included. Big R squared thinks about the total amount of variance explained; these squared partial correlations, pr squared, think about: take out everything due to every other variable, including the overlap, and ask how much is this one accounting for on its own, controlling for everything else. So these don't add up to the same thing. If they happen to, fine, but they aren't supposed to in general, so it's not a way to check your work. And I'll add a little bit more here on pr squared for people who are just reading along.

All right, so how would I report this all together? Remember the rules: first, tell us a little bit about the study. "A multiple regression was analyzed." Oh, I hate when people say that; you can say "was run." That's just an error in this particular write-up. A regression was run to predict VO2 max from age, weight, and heart rate. The data were screened for assumptions, so talk about your data and your assumption checks: no outliers, and we met all of our assumptions. So, "all assumptions were found to be met, and no multicollinearity was present." Let me tweak that here, because multicollinearity is the bad thing, right? So: no multicollinearity was present, or you could say you met the assumption of additivity. I'm not sure your instructor is going to be that particular, but this reads a bit better statistically.

So we've done: tell me about the study, tell me about the assumptions. Next: tell me about the model. The model was statistically
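The claim above, that squared part correlations do not sum to R squared when predictors overlap, can be checked numerically. This sketch uses simulated, deliberately overlapping predictors (all values hypothetical); the squared semipartial for each predictor is computed as the drop in R squared when that predictor is removed from the model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Hypothetical overlapping predictors: weight is partly driven by age.
age = rng.normal(40, 10, n)
weight = 0.5 * age + rng.normal(60, 10, n)
hr = rng.normal(70, 8, n)
vo2 = 60 - 0.2 * age - 0.3 * weight - 0.1 * hr + rng.normal(0, 4, n)

def r2(y, X):
    """R-squared of an OLS fit of y on X (with an intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

X_full = np.column_stack([age, weight, hr])
full = r2(vo2, X_full)

# Squared semipartial of predictor j = R-squared drop when j is removed.
sr2 = [full - r2(vo2, np.delete(X_full, j, axis=1)) for j in range(3)]

# The shared age/weight variance is in `full` but in none of the sr2 terms,
# so the sum falls short of R-squared.
print(full, sum(sr2))
```

With uncorrelated predictors the two numbers would coincide; the gap here is exactly the overlap the video describes.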
significant: "the multiple regression model statistically significantly predicted..." Okay, I don't love the word "statistically" there, but it tells us we're not claiming practical significance, right? Then our p-value, and note this is the regular R squared and not the adjusted one, so about 15%. "All three variables added to the prediction" would imply that they're all significant, but they weren't, were they? So what we should say is: "Regression coefficients and standard errors can be found in Table 1. Heart rate was not related to VO2 max, while age and weight both negatively predicted it, such that increasing age and weight lowered VO2 max." I really like that sentence, because it tells you the practical usefulness: as age and weight go up, VO2 max goes down. And you can put the values in a table, because this output is already nicely APA formatted.

Now, if your instructor wants those values in the write-up and doesn't let you use the table, let's add them. So, "heart rate was not related to VO2 max"; here's how I might report that. Remember, APA style is t, and the degrees of freedom for t are the second degrees of freedom for F. That's not labeled anywhere in the box, but it's the residual degrees of freedom. We report the t value, negative 1.33, and our p-value was .187, and if you want to, from the box above, I could report that pr squared value. Sometimes people put B in here as well (oh come on now, italics, don't be rude): B equals, what was it, negative 0.06. We would do that for all three predictors. I've got square brackets here because sometimes people use square brackets for what would normally be parentheses; since these statistics already have parentheses, square brackets can be less confusing. We do the same thing for age, so negative 0.1, oh, I realize that's just my dyslexia and I switched the digits there. The degrees of freedom for t are still 96, that hasn't changed, so two
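For readers who want to automate this kind of APA-style write-up rather than typing each coefficient by hand, here is a small, hypothetical helper. The heart-rate numbers (t = -1.33, df = 96, p = .187, B = -0.06) come from the transcript; the pr squared value is a placeholder, and the exact bracket/italics conventions may differ from what your instructor wants:

```python
def apa_coefficient(name, b, t, df, p, pr2=None):
    """Format one regression coefficient roughly in APA style.

    Hypothetical helper: drops the leading zero on p and pr-squared
    (they cannot exceed 1) and uses square brackets for the extras,
    as suggested in the video.
    """
    p_text = "p < .001" if p < 0.001 else "p = " + f"{p:.3f}".lstrip("0")
    out = f"{name}: t({df}) = {t:.2f}, {p_text} [B = {b:.2f}"
    if pr2 is not None:
        out += ", pr² = " + f"{pr2:.2f}".lstrip("0")
    return out + "]"

print(apa_coefficient("Heart rate", -0.06, -1.33, 96, 0.187, pr2=0.02))
```

Calling it once per predictor reproduces the repetitive part of the write-up described above.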
point one two. Make sure you look at the right one; yeah, t equals 2.12, p equals .037. And then weight, one more time: let's see, it's 0.18. You'll want to pay attention to what your instructor wants, because sometimes they want you to report beta instead of B; in that case you'd put a beta there instead of B, and that's really the only thing you'd change. And then our p-value is less than .01. Now, the table is much easier (it took me forever to type all that), but in case your instructor wants the coefficients written out in APA style, that's how you do it.

All right, so this video covered all of multiple regression: talking about assumptions, working through the analysis, and then adding a couple of other things that you might see in a report, such as the pr squared values. That covers part eleven, regression, and the next and final section of our JASP videos will be over chi-square, which moves us from parametric to nonparametric statistics.
Info
Channel: Statistics of DOOM
Views: 6,065
Rating: 4.9012346 out of 5
Keywords: statistics, regression, multiple linear regression, multiple variables, data, data science, stats tools, statistics of doom, jasp, spss
Id: -wYdiwwn4jY
Length: 46min 10sec (2770 seconds)
Published: Tue Mar 24 2020