5 Course Meta-Analyses VU: Examining heterogeneity

Captions
The next step in doing a meta-analysis is to examine heterogeneity. I will first explain what heterogeneity is, then say something about how you can examine possible causes of heterogeneity, and then say something about publication bias. Publication bias is not directly related to heterogeneity, but I did not know where else to put it in this course, so I will talk about it here.

So what is heterogeneity? Formally, statistical heterogeneity is the variability in the intervention effects being evaluated in the different studies. What it means is this: suppose you have a random collection of effect sizes that have nothing to do with each other; if you plotted them, you would see no connection between them. In that case you have 100% heterogeneity. If, on the other hand, the effect sizes are related to each other and all point in the same direction, with the large studies estimating the mean effect size more precisely and the small studies deviating somewhat more, then that sample of studies is homogeneous and there is no heterogeneity. That is, I think, the best way to understand statistical heterogeneity: it is about the relationship between the effect sizes, and it is really a statistical issue.

It is something different from clinical heterogeneity. You can have a collection of effect sizes that are statistically not heterogeneous, but when you look at the studies there may be all kinds of clinical differences: different patients, different therapists, different contents of the therapy, different recruitment methods. That is clinical heterogeneity: how, from a clinical content point of view, the studies differ from each other. Statistical heterogeneity is easier, because we can actually examine it.

Heterogeneity is important because, if there is no heterogeneity, it makes sense to look at the studies as one group and calculate one pooled effect size. If you do have heterogeneity, then something makes the studies different from each other, and they do not all point in the same direction. In that case you can use the random effects model, because it allows individual studies to differ somewhat from the other studies. But if the studies differ from each other to a certain extent, you want to know why. That is very important. For example, a couple of years ago we did a meta-analysis on problem-solving therapy for depression, and we found that it was effective, with a good effect size not very different from other psychological treatments for depression. But we also found that heterogeneity was very high, and although we did all kinds of analyses (subgroup analyses, outlier analyses, moderator analyses, meta-regression analyses), we could not find a reason for that heterogeneity. The only conclusion you can then draw is that problem solving may be effective, but it is also possible that when you apply it the effects are much smaller or absent, and we cannot predict when the effect sizes will be larger or smaller. Basically, then, you cannot say much about the effects of problem solving. That is the problem of heterogeneity. So in any meta-analysis, heterogeneity is a key concept: you have to examine it, and if you find it, you have to examine its possible sources. How do you do that? You can look at outliers, do subgroup analyses, moderator analyses, or meta-regression analyses. I will explain the basic principles of each of these methods.
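Since the random effects model comes up here, a small illustration may help. The sketch below implements the DerSimonian-Laird estimator, one commonly used way of computing a random-effects pooled effect size; the function name and all the numbers are invented for illustration, not taken from the studies discussed in this course.

```python
import math

def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling.

    effects   -- per-study effect sizes (e.g. Hedges' g)
    variances -- their within-study sampling variances
    Returns (pooled effect, standard error, tau^2).
    """
    w = [1.0 / v for v in variances]                   # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sw
    # Q: weighted squared deviations from the fixed-effect mean
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                      # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]     # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2

# Two hypothetical studies with very different effects: tau^2 becomes large,
# which widens the pooled confidence interval compared to a fixed-effect model
print(pool_random_effects([0.1, 0.9], [0.02, 0.02]))
```

When the studies disagree, tau² grows and the random-effects weights become more similar across studies, which is exactly the "allow individual studies to differ" behaviour described above.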
But first: how can you examine whether there is heterogeneity in your data? First, you can look at the forest plot. Second, you can do a test for heterogeneity, which tests whether the heterogeneity is significant. Third, you can quantify heterogeneity with I², which indicates heterogeneity as a percentage.

First, visual inspection of the forest plot: you simply look at the forest plot and check whether the effect sizes point in the same direction. I will give a few examples. This is a meta-analysis we published in Psychological Medicine, in which we compared psychotherapy with pill placebo. What you see here is that all the effect sizes, and their 95% confidence intervals, overlap; the effect sizes do not differ much from each other, and they all point in roughly the same direction. You also see, at the lower left of the screen, that I² was 0%, so there is no indication of significant heterogeneity. But remember that this depends on the confidence interval: the 95% confidence interval around I² is quite large, which has to do with the fact that the number of studies is quite small. So we assume there is no heterogeneity, but we are not absolutely certain.

Here you see another study, which we published in Maturitas, on psychotherapy for depression in older adults. If you look at these effect sizes, they differ much more from each other than in the previous plot. For example, in the Frey 1983 study the effect size is not even on the screen; it is somewhere higher than 3, which is not a credible effect size, but that is what the paper reported. If you look at the pooled effect size (the red lines indicate the 95% confidence interval around the pooled effect size) and then look for outliers, you see that there are quite a few, in both the positive and the negative direction. So there are all kinds of differences between those studies, and if you again look at I² at the lower left of the screen, you see that it was 80% and highly significant.

The next thing you can do is a test of homogeneity. This is the Q test, and it indicates whether there is significant heterogeneity. Most software for meta-analysis does this by default: you get a Q value and a p-value indicating whether the heterogeneity is significant. The Q test is used less and less nowadays, because its significance depends heavily on the number of trials. If you only have 10 or 15 trials, you may find that the Q test is not significant, suggesting no heterogeneity, but in fact you do not know, because the number of studies is so small, and there is no way around that.

What is used more and more is the I² statistic. I² is simply the proportion of the total variance that can be explained by heterogeneity. A rough indication: 25% is low heterogeneity, 50% is moderate, and 75% and higher is high heterogeneity. An I² of, say, 75% indicates that the studies are not strongly related to each other; they are almost random effect sizes. For a few years now, we have also recommended calculating the 95% confidence interval around I², because again, if the number of studies is small and you get an I² of 0, the 95% confidence interval can be so broad that it ranges from 0 to 80%. You then find an I² of 0, but because the confidence interval goes up to 80%, it is still very well possible that you have high levels of heterogeneity; you just do not know, because the uncertainty is too high. Here, again, is the first example I showed: heterogeneity is 0%, but the confidence interval goes from 0 to 58%, so it is still very well possible that there is moderate heterogeneity.

So if you have heterogeneity, how can you examine its causes? One thing you can do is moderator analysis. In a moderator analysis you examine the association between a characteristic of the studies and the effect size, and that may explain heterogeneity; but remember that this is always indirect evidence, as I will show later. There are different types of moderator analysis. You can do subgroup analyses: you take two subgroups of your whole sample of studies and examine whether they differ significantly from each other. You can do bivariate meta-regression analyses, in which you examine, for example, the association between a continuous study characteristic and the effect size; I will show an example later. And you can do multivariate meta-regression analyses, which are basically the same as a normal regression analysis: you can enter continuous variables and dummy variables and adjust for all the variables in one large model.

So, subgroup analysis: you make subgroups within your whole sample of studies and examine whether these two, three, or four subgroups differ significantly from each other. If you have high heterogeneity for the full sample of studies, what you hope is that you get different effect sizes for the subgroups and that, within each subgroup, heterogeneity is low. That is the ideal picture. In my experience it almost never happens, but that is what you would like to see when you do these analyses.
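The Q statistic and I², including an approximate confidence interval, can be computed directly from the effect sizes and their variances. The sketch below is a hedged illustration: the confidence interval uses the Higgins and Thompson ln(H) approximation, the function name is mine, and the numbers are invented.

```python
import math

def heterogeneity(effects, variances):
    """Q statistic, I^2, and an approximate 95% CI for I^2.

    The CI uses the Higgins & Thompson (2002) ln(H) method; the
    small-Q variance formula below needs at least 3 studies.
    """
    k = len(effects)
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sw
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))
    df = k - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    # H = sqrt(Q/df); build a CI on ln(H), then map back to the I^2 scale
    h = math.sqrt(max(1.0, q / df))
    if q > k:
        se_ln_h = 0.5 * (math.log(q) - math.log(df)) / (
            math.sqrt(2 * q) - math.sqrt(2 * k - 3))
    else:
        se_ln_h = math.sqrt((1.0 / (2 * (k - 2))) * (1 - 1.0 / (3 * (k - 2) ** 2)))

    def to_i2(hh):
        return (hh * hh - 1) / (hh * hh) * 100.0

    lo_h = max(1.0, math.exp(math.log(h) - 1.96 * se_ln_h))
    hi_h = max(1.0, math.exp(math.log(h) + 1.96 * se_ln_h))
    return q, i2, to_i2(lo_h), to_i2(hi_h)
```

With only a handful of studies the interval comes out very wide, which is exactly the point made above: a point estimate of I² on its own can be misleading when the number of studies is small.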
You can do these subgroup analyses using two methods. You can use the mixed effects model, in which you calculate the effect sizes within each subgroup according to the random effects model, and then test whether the effect sizes differ significantly between the subgroups with a fixed effects model. You can also use a fixed effects model for both the within-group effect sizes and the difference between subgroups. Personally, I think that for psychological treatments you should use the mixed effects model.

There is no good rule for which variables you should include in subgroup analyses; it depends very much on the number of studies you have and the number of characteristics you would like to examine. If you have only ten studies, it makes no sense to do five subgroup analyses, because at some point you have more analyses than studies. With ten studies you should not do more than about three moderator analyses, and it is very difficult to make a good judgment about which moderators to examine. As the number of studies increases, you can do more and more analyses. One thing you should always examine is the effect of the validity, or quality, of the studies in your sample.

This is a meta-analysis we did a couple of years ago on internet-based CBT, with or without support, compared to control groups. We collected all the studies in which internet therapy, whether it had personal support or not, was compared with a control group, and we did a meta-analysis. We found that the overall effect size was 0.41, but we also found a quite high level of heterogeneity. So we did all kinds of subgroup analyses, and the most interesting one is shown here: we found a difference between studies in which the internet-based CBT was delivered with professional support and those without professional support. Without professional support, you go to the website and walk through the intervention completely on your own, doing all the modules yourself. We found a strong, significant difference between the group of studies with professional support and the group without it, and we also found that heterogeneity within these subgroups was very small. As I said, this does not happen very often; usually subgroup analyses are not as clean as you would like, but this was a good example where it worked out well.

The other thing you can do is bivariate meta-regression analysis. Then you do not work with subgroups but with continuous variables: you examine the association between the effect size and a continuous characteristic of each study, for example the quality score, the number of treatment sessions, or the year in which the study was conducted. In this example we calculated pre-post effect sizes for studies on psychotherapy for chronic depression. The circles are the individual studies, with the number of sessions on the horizontal axis and the effect size on the vertical axis, and a regression line through them. You see that the effect size goes up with each additional session. The slope indicates the association between the effect size and the number of treatment sessions; in this example the slope was 0.04, indicating that with each additional session the effect size increases by 0.04.

You can also do multivariate meta-regression analyses, in which you enter all the study characteristics into one large multivariate model. You can use continuous variables and dummy variables, just as in a normal, non-meta-analytic regression analysis. What you see here is a meta-analysis we did some time ago, in which we looked at the association between the effect sizes of studies of psychotherapy for older adults and a series of characteristics of those trials; this is just the normal output of such a multivariate analysis.

OK, publication bias. This is not really related to examining heterogeneity, but it is an important issue when you do a meta-analysis. Publication bias refers to the phenomenon that some studies are simply not published. If you do a meta-analysis of published studies and the studies with negative or zero findings are not included, you are inclined to overestimate the pooled effect size, which is in reality smaller than your estimate. We know that this affects the outcomes of meta-analyses, although we do not know exactly what causes it. For example, Eric Turner published an important paper in the New England Journal of Medicine in 2008 in which he compared trials of antidepressant medication published in peer-reviewed journals with the database of the Food and Drug Administration in the United States, where pharmaceutical companies have to submit data on trials to get medications approved for the American market. Some of those studies were never published, so he could compare the published studies with the studies in the FDA database, and he found clear and significant differences. What the exact causes are is not clear. The companies have a clear commercial interest in getting positive findings on their medication published and may want to ignore negative findings; journals are also often more interested in positive outcomes than in negative ones; and authors often like to find positive, significant outcomes.
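At its core, a bivariate meta-regression is a weighted regression of the effect sizes on a study characteristic, weighting each study by its precision. The sketch below is a fixed-effect version (real meta-regression software would also add the between-study variance tau² to the weights); the session counts and effect sizes are invented so that each extra session adds 0.04, mirroring the slope discussed above.

```python
def meta_regression(x, effects, variances):
    """Weighted least-squares regression of effect size on a study
    characteristic x, weighting each study by inverse variance.

    A fixed-effect sketch; random-effects meta-regression would add
    tau^2 to each variance before weighting. Returns (slope, intercept).
    """
    w = [1.0 / v for v in variances]
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    sxy = sum(wi * (xi - xbar) * (yi - ybar)
              for wi, xi, yi in zip(w, x, effects))
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    slope = sxy / sxx
    return slope, ybar - slope * xbar

# Hypothetical studies where each extra session adds 0.04 to the effect size
sessions = [4, 8, 12, 16]
effects = [0.26, 0.42, 0.58, 0.74]      # constructed as 0.10 + 0.04 * sessions
slope, intercept = meta_regression(sessions, effects, [0.05, 0.04, 0.03, 0.05])
```

The slope plays the same role as the 0.04 in the chronic-depression example: the estimated change in effect size per one-unit change in the study characteristic.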
Authors are also often inclined simply to forget about, or not publish, the results of trials with less positive results. All of this may cause publication bias, and it is a major problem in meta-analysis. In the field of psychology we do not have an FDA where unpublished trials are submitted, so we have to look at publication bias indirectly. There are some ways to examine it directly, but I will not go into those now; we can examine publication bias indirectly just by looking at the data, and I want to show you how. I will use the example of a paper we published a couple of years ago on 117 studies of psychotherapy for adult depression.

First, how it works. What you see on the horizontal axis is the effect size, and on the vertical axis the standard error, which is an indication of the size of the study: the more patients a study includes, the smaller its standard error. So at the top of the plot you see the studies with many participants, and further down the studies with smaller numbers of participants. Looked at from this perspective: if studies have more participants, you assume their effect sizes lie nearer to the pooled effect size, because with more participants they give a more precise estimate. With smaller numbers of participants, you can assume the effect size deviates more from the pooled effect size, because those estimates are not as precise as the ones from large studies. But that deviation should go in both directions: for every small study deviating in the positive direction from the mean effect size, there should be about one deviating in the negative direction. As you can see from this plot, the small studies with large effect sizes sit in the lower right quadrant of the graph, but in the lower left, where studies should also be if the funnel plot were symmetrical, they are missing. So just by looking at this plot you can see that there is publication bias.

Software such as Comprehensive Meta-Analysis (CMA), for example, allows you to quantify this publication bias. There is a method to impute the studies that should have been there but are not, and then you get a graph like this: the black dots indicate the imputed studies, of which there were 51 or so in this example. If you adjust for these missing studies, the effect size is reduced quite considerably. So by looking at the symmetry of the funnel plot you can get an indication of the impact of publication bias on the outcomes of your meta-analysis.

There are also various tests of whether the funnel plot is symmetrical. Duval and Tweedie's trim and fill procedure imputes the missing studies and calculates an effect size adjusted for them. Other tests, like Begg and Mazumdar's test or Egger's test, simply test whether the funnel plot is symmetrical or not. Here we found that the unadjusted effect size was 0.67, but adjusted for publication bias it was reduced to 0.41, with 51 studies missing; we also found that Egger's test and Begg and Mazumdar's test were highly significant.

So, the key points. Heterogeneity is a key concept in any meta-analysis: it is the variability in the intervention effects in the different studies, and the level of heterogeneity is key to interpreting the pooled effect size. You can examine it with all kinds of methods: moderator analyses, subgroup analyses, meta-regression analyses, and looking at outliers. Another key issue for any meta-analysis is publication bias: if you do a meta-analysis, you should look at the risk of publication bias as well.
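The funnel-plot asymmetry tests mentioned above can be made concrete with Egger's regression: regress each study's standardized effect (effect divided by its standard error) on its precision (one divided by the standard error); an intercept far from zero signals asymmetry. The sketch below uses invented numbers and returns only the intercept, whereas real software also reports a t test on it.

```python
def egger_intercept(effects, std_errors):
    """Egger's regression test for funnel plot asymmetry.

    Ordinary least squares of y_i / se_i on 1 / se_i; the intercept
    estimates small-study bias (about 0 under a symmetric funnel plot).
    """
    x = [1.0 / se for se in std_errors]                  # precision
    z = [y / se for y, se in zip(effects, std_errors)]   # standardized effect
    n = len(x)
    xbar = sum(x) / n
    zbar = sum(z) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxz = sum((xi - xbar) * (zi - zbar) for xi, zi in zip(x, z))
    slope = sxz / sxx
    return zbar - slope * xbar

# No bias: every study estimates the same effect, so the intercept is ~0
unbiased = egger_intercept([0.5, 0.5, 0.5, 0.5], [0.1, 0.2, 0.3, 0.4])

# Small-study bias: smaller studies (larger SE) report larger effects,
# which tilts the funnel plot and pushes the intercept away from zero
ses = [0.1, 0.2, 0.3, 0.4]
biased = egger_intercept([0.3 + se for se in ses], ses)
```

When the small studies systematically report larger effects, exactly the pattern in the asymmetric funnel plot discussed above, the intercept moves away from zero.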
Info
Channel: Pim Cuijpers
Views: 3,341
Keywords: Meta-analyses, systematic reviews, mental health research, Pim Cuijpers, Vrije Universiteit Amsterdam, Psychological interventions
Id: yJ9Y3NRxD0Y
Length: 27min 42sec (1662 seconds)
Published: Tue Nov 15 2016