Dr. Simon Thornley - 'Can we resolve the saturated fat question...?'

Video Statistics and Information

Reddit Comments

TL;DR > A slightly mind-numbing field trip into meta-analysis epidemiology maths that shows how there is more than one kind of bias in reporting outcomes. And specifically how there is far more support against the diet-heart hypothesis than for it.

๐Ÿ‘๏ธŽ︎ 3 ๐Ÿ‘ค๏ธŽ︎ u/tsarman ๐Ÿ“…๏ธŽ︎ Sep 05 2019 ๐Ÿ—ซ︎ replies
Captions
Hi, I'm Simon Thornley. I'm an epidemiologist and public health physician, and I'm looking at this question: can we resolve the saturated fat debate? This is something I grew up with at med school; it was part of the woodwork in medicine at the time that saturated fat was bad, that it was clogging our arteries and we had to get rid of it. Since then I've had a look at the evidence, and I think the evidence that saturated fat is bad is extremely shaky, and I've had some debates in the medical literature about this. So we'll go into a little more detail. I apologise in advance for the maths that appears at some point; we'll try to make it as accessible and interesting as possible.

Let's have a look at a brief history of the diet-heart hypothesis. This slide is satirical, but it encapsulates the idea of the time when saturated fat was thought to be good. It was encouraged; it was something we put on just about everything. Lard was spread on bread and used like butter or margarine; saturated fat was simply part of the fabric of society. Then came Ancel Keys, who developed the diet-heart hypothesis. He looked at the correlation between saturated fat intake and the incidence of cardiovascular disease at a country level and published the Seven Countries Study. On the right we see an apparently strong relationship; as we now know, several countries were left out, and on the left we see no relationship when the other countries are included. Nevertheless, the message to avoid animal fat, to avoid saturated fat, took hold. We were all told to eat more margarine, and this Time article encapsulates the message of that era: bacon and eggs were out.

Since that time we have developed an obesity epidemic. This chart is from OECD and other developed countries, and you can see a relentless upward trajectory in obesity from the 1980s to relatively recent times. It was strongest in the US and the UK, but it affected a whole lot of other countries too, though not so much the Asian countries, as you'll see.
A lot has been written about the obesity epidemic, and since that time there has been an exoneration of saturated fat: we've been told to eat butter again. So let's dive deeper and see what this is all about.

First, why do we use meta-analyses at all? They sit at the top level of medical evidence. What are they designed to do? We'll talk about some of the technical machinery of meta-analysis, fixed effect and random effects models and why they matter, and then we'll take a fresh, critical look at an influential study in the Cochrane Database, the Hooper study.

So why do we do meta-analysis? It's a systematic integration of all the trials that keep coming out; it's very difficult for clinicians to keep on top of the accumulating evidence these days. It's distinct from an expert review and is seen as more objective, because we include quantitative evidence. It's a synthesis of the published information, usually considered appropriate only for trials (we'll come back to that), and it allows researchers to keep abreast of the evidence. It may also offer a resolution when researchers disagree: one group with a strong belief in one direction can pull out a trial that supports their viewpoint, while another group relies on a different trial; sometimes a meta-analysis can resolve those differing viewpoints. Finally, it increases statistical power, enhancing the precision of the effect estimates we'll look at shortly.

There's also the growing influence of meta-analyses. Meta-analyses of trials are usually considered the pinnacle of evidence, so when people put together treatment guidelines they often refer to them. They are also often used to resurrect disappointing trial results: if trials didn't show the effect the investigators wanted to see, a whole lot of disappointing trials can be pooled to see whether the summary, or pooled effect, reaches statistical significance.
How did I get interested in meta-analysis? My PhD was on the primary prevention of cardiovascular disease with drugs, and I came across a meta-analysis of the primary prevention of coronary heart disease which showed that giving aspirin resulted in a 30% decline in CVD events. That's great. However, on the flip side, there was a 60% increase in severe bleeding, that is, bleeding into the head or the gut, and there was no difference in overall mortality. So the statistical evidence here is not particularly supportive of using aspirin; the conclusion, however, was that aspirin is safe and effective for the primary prevention of cardiovascular disease. You see a narrative that is out of step with the statistical evidence, and you find that in many meta-analyses; the saturated fat literature is another example. I wrote a letter to the journal that had published the article, because it had some 300 citations and was often cited in guidelines, and I thought this really needed to be addressed. Initially the journal accepted the letter, but then rejected it without explanation. So I submitted it to a higher journal, the BMJ, and they decided to accept it, because it wasn't criticising one of their own papers, I think. That disconnect between the narrative and the evidence made me interested in meta-analysis and its overwhelming influence in modern medicine.

Let's step back and think about what meta-analysis is all about. We have a belief, or hypothesis; we have trials that inform that hypothesis; we summarise those trials in a meta-analysis; and then we update the belief. So how does the maths help us here? This is a little crash course in epidemiology; unfortunately we can't avoid it when we're looking at meta-analyses. Basically, a randomized trial involves at least two groups of people.
We randomize people into one group or another, low saturated fat versus usual care, and then we follow them up and see what their speed of disease is. If we thought about it like a car race comparing Fords and Holdens, we'd measure the average speed in the two groups. In epidemiology we measure the average speed of disease, which we call incidence: simply the number of events that occur divided by the number of people at risk. We then take the risk of disease, the speed of disease, in both groups and take the ratio, and that ratio tells us how strongly the evidence supports or negates our hypothesis. A relative risk of 1 means no difference in incidence between the two groups. In the case of the saturated fat meta-analyses, a relative risk less than 1 means that limiting saturated fat is protective, while a relative risk greater than 1 means that doing so is harmful. That's a little background for those who haven't done much epidemiology.

What about statistical significance? Most people's eyes glaze over when we talk about it. It's based on the idea of how likely our results would be under the play of chance. Take the simplest example: tossing a coin. We all know there's a 50/50 chance of heads or tails, and if we tossed it 10 times it's possible, given that 50/50 chance, to get 10 tails in a row. It's unusual and unlikely, but possible. If we tossed it 10,000 times and got tails every time, that is so unlikely that we would reject our idea that the coin is actually fair and consider an alternative hypothesis: perhaps somebody has developed a very sophisticated coin, involving some magnetic manipulation, that makes it no longer fair. That is the idea behind statistical significance.
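The incidence and relative-risk arithmetic just described, and the coin-toss intuition, can be sketched in a few lines. The trial counts here are made up purely for illustration:

```python
# Incidence ("speed of disease") is events divided by people at risk.
# Counts below are hypothetical, purely for illustration.
events_low_fat, n_low_fat = 30, 1000   # low saturated fat arm
events_usual, n_usual = 40, 1000       # usual care arm

incidence_low = events_low_fat / n_low_fat
incidence_usual = events_usual / n_usual
relative_risk = incidence_low / incidence_usual
print(round(relative_risk, 2))  # below 1: apparently protective

# The coin-toss intuition: ten tails in ten fair tosses has
# probability (1/2)^10, unusual but entirely possible.
p_ten_tails = 0.5 ** 10
print(round(p_ten_tails, 5))
```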
We have a null hypothesis that there is no influence of saturated fat on heart disease, and we ask how likely our results are given that no-influence idea. If they are very unlikely, like 10,000 tosses of a coin that all land the same way, we reject the no-influence hypothesis and say that yes, there is an influence. It's just like coin flipping. The dreaded p-value that most people know about answers the question: how frequently would we get our relative risk, or a more extreme result, if eating saturated fat truly did not cause heart disease? If the p-value is less than 0.05, that is, one in twenty, the no-effect idea that saturated fat doesn't influence heart disease is rejected. You can also assess this with the confidence interval: if the confidence interval for our relative risk excludes the null value of 1, that is equivalent to a p-value of less than 0.05.

What's the idea behind using p-values? It's about making decisions about our hypothesis, given the evidence we've collected from our trials, and so it helps us at decision time to make the right decision most of the time. We have a table here where our trial results are either significant or not significant, based on the p-value, and we have the idea of a universal truth, that eating saturated fat does not cause heart attacks: the null, or no-effect, hypothesis, which can be either true or false. I've encapsulated that with the Time magazine covers. Mostly we want to land in the green corners: a non-significant result when the null hypothesis is true and there is no effect, and a significant result when the null hypothesis is false and there really is an effect. Occasionally, though, we land in the false positive or false negative domains. We fix those probabilities: the false positive rate is, by definition from the p-value, one in twenty, and the false negative rate depends on the statistical power, which is related to the number of people in the study and the number of events.
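As a sketch of the p-value and confidence-interval equivalence described above, here is the usual calculation for a log relative risk from cumulative-incidence data; the two-arm counts are hypothetical:

```python
import math

# Hypothetical two-arm counts: a events of n1 vs b events of n2.
a, n1 = 30, 1000
b, n2 = 40, 1000

log_rr = math.log((a / n1) / (b / n2))
# Usual standard error of a log relative risk from cumulative incidence
se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)

z = log_rr / se
# Two-sided p-value from the normal distribution, via math.erf
p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

lo = math.exp(log_rr - 1.96 * se)
hi = math.exp(log_rr + 1.96 * se)
# The 95% CI straddles 1 exactly when p >= 0.05 (to the 1.96
# approximation), so the two criteria give the same verdict.
print(f"RR 95% CI ({lo:.2f}, {hi:.2f}), two-sided p = {p:.2f}")
```

With these counts the interval includes 1 and the p-value is above 0.05, so a single trial of this size cannot distinguish a relative risk of 0.75 from no effect, which is exactly the situation that motivates pooling trials.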
Meta-analysis is often pulled out in that bottom right-hand corner, where we've got disappointing trial results that the investigators think are false negatives, what we call a type 2 error. We can deal with low numbers of people and low numbers of events by pooling all the events together and taking the average; that gives us more power, and it can resurrect these disappointing trials. But you can see that when we turn to meta-analysis we're going outside the conventional interpretation of the trial; we're really stretching things, trying to get the most out of these disappointing results. The rules we've made can't always deliver the right decision about the truth in the universe given our trial results, but mostly they do.

Single trials are really easy to interpret when the results are large. If you look at historical trials of tuberculosis treatment, for example, there were huge relative risks of five, six, seven, and it was really easy to conclude there was an effect of the treatment. In modern medicine we're often dealing with very small effects, relative risks around 0.9 or 0.8, and it's very difficult to distinguish these small effects from no effect. This is where a lot of the controversy is, and this is where meta-analyses often come into play.

So what is the magic sauce of meta-analysis? What do we do with these numbers? It's pretty basic, really. We assume that the relative risk is common across the studies; that's what we call our measure of effect, or measure of association, and we take the average. If you want to sum up a lot of statistics, it's really about taking the average of a range of things to get a more precise estimate of the true relative risk, and that's all meta-analysis is. Very simple. And we all know how to do an arithmetic average: we sum the values and then divide by the number of individual trials.
That would be great, but unfortunately epidemiologists make things slightly more complicated: we take a weighted average. The trouble with trials is that some have a lot of events and a lot of people in them, and some have fewer events and fewer people. So we assign each trial a weight based on the precision of its relative risk. The weight is a technical term, derived as one over the variance, the inverse of the variance; the less certain we are about the true relative risk, the less weight that study gets in the meta-analysis. That works for some meta-analyses, but not all; in fact only a small proportion can use that technique. We get a fly in the ointment called heterogeneity, which basically means the pooled relative risk we're trying to calculate is too optimistic. There is systematic error in there, and the usual fixed effect method accounts for random error alone; there are systematic differences beyond what would be predicted from random error alone. Getting a little more technical, we can measure this heterogeneity with a term called I-squared, and anything over about 30% is generally considered too much for the fixed effect, or standard, analysis. So there are two types of analysis: the fixed effect method, which is fine when there is little or no heterogeneity, and the random effects method, which we pull out otherwise and which makes the confidence interval of the pooled, or average, effect a little wider to account for the heterogeneity.

Here's a pictorial summary of the assumptions we're making in these two methods of analysis. In fixed effects we assume there is one true effect, with a distribution of estimates around it from each of the individual trials.
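A minimal sketch of the fixed effect (inverse-variance) weighted average and the I-squared heterogeneity statistic just described, using made-up trial results:

```python
import math

# Hypothetical trial results: log relative risks and their variances.
y = [math.log(0.70), math.log(0.95), math.log(1.10)]
v = [0.04, 0.01, 0.02]

w = [1.0 / vi for vi in v]  # inverse-variance weights
pooled = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
se = math.sqrt(1.0 / sum(w))  # standard error of the pooled log RR

# Cochran's Q, and I-squared as the share of variation beyond chance
q = sum(wi * (yi - pooled) ** 2 for wi, yi in zip(w, y))
i2 = max(0.0, (q - (len(y) - 1)) / q) * 100.0

print(f"pooled RR = {math.exp(pooled):.2f}, I^2 = {i2:.0f}%")
```

With these numbers the I-squared lands above the roughly 30% comfort zone mentioned in the talk, which is the signal to reach for something other than the plain fixed effect analysis.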
In random effects, by contrast, we assume that each trial has come from its own little distribution, its own family of trials, and we take the mean across those families. It all makes good statistical sense, and I thought it was a great way of looking at meta-analysis until I bumped into Professor Suhail Doi, based in Canberra, Australia, at an epidemiology conference. He raised the issue that random effects methods exaggerate the effect of small studies, which are more likely to be subject to what we call the file drawer problem: people not publishing results that contradict their hypothesis. He has developed a method that is a subtle refinement of the fixed effect method: it uses the standard inverse-variance weights and simply inflates the confidence interval at the end of estimation. It's a very subtle difference from what we usually do.

That's the methods side of things; now let's look at the evidence for the two conflicting hypotheses. I said meta-analysis can sometimes resolve differences in the scientific community. Unfortunately, in the saturated fat area, enthusiasts for the diet-heart hypothesis can choose the Hooper study, which supports the idea; well, actually only one of its three pooled effects, CVD events, supports it, not CVD mortality and not overall mortality. On the no-influence side there is a whole bunch of meta-analyses which show no effect. So again there is conflict; let's look a little deeper at the Hooper study.

Here we have overall mortality. If you look at the forest plot, all the trials are marked in the central column, and the pooled relative risk at the bottom sits right on the no-effect estimate, a relative risk of 1, where the incidence, or speed, of disease is the same in both groups.
It's plumb in the middle: absolutely no benefit from reducing saturated fat. For cardiovascular disease mortality, the forest plot shows exactly the same thing. For CVD events, you can see that the individual trials are all disappointing, sitting right on the no-effect line, but when you pool them into this weighted average you see a subtle benefit: about a 20% reduction, a pooled relative risk of roughly 0.8. Hooper's conclusion, then, is that the findings are 'suggestive of a small but potentially important reduction in cardiovascular risk on reduction of saturated fat intake'. This is the headline that gets put into dietary guidelines and the traditional nutritional mantra.

Let's think about it from an epidemiological perspective. We have three outcomes, and we can choose to give more weight to any one of them. Which outcome is likely to be least prone to measurement error: overall mortality, cardiovascular mortality, or CVD events? Most doctors, I think, have the least uncertainty certifying death itself; there is a little more uncertainty when it comes to certifying a CVD death, and even more when deciding whether something was a CVD event, and there are different cultures of diagnosing CVD in different countries. So on a conventional interpretation of this meta-analysis, the most convincing evidence comes from overall mortality and CVD mortality, which are likely to be less biased than CVD events. Yet the CVD event outcome is the one that is used to support the hypothesis.

Here are some gory details for the mathematicians in the audience. The new method that Doi came up with, the IVhet method, weights every trial by the inverse of the variance of its estimate, the same as the fixed effect method, whereas the random effects method introduces a between-study term into the variance, which has the effect of equalizing all the weights.
It also has the effect of changing a trial's weight depending on which other trials are included in the meta-analysis, which, if you think about it, is not a logical thing to do. Moving on from the gory details, let's look at what happens when we analyse Hooper's results with this new inverse variance heterogeneity method. At the top you have the standard results: a benefit from the fixed effect method, a benefit from the random effects method, but no benefit when we use inverse variance heterogeneity, and this is consistent for all three outcomes. So it resolves the issue that has been inflaming the debate about saturated fat.

What about publication bias? Quickly: one other little quirk of meta-analysis is that we can look for bias in the scientific community. There shouldn't be a relationship between the precision of a relative risk and its value, and if there is none we should see a nice funnel plot when we plot each individual trial's value against its variance, with the larger studies at the top, closer to the true value. With publication bias we see a bite taken out of the smaller studies, a hole, as you can see in the middle slide, and this is what we see in Hooper's saturated fat meta-analysis: there are far fewer small trials on the right than on the left. So we have, as we know, a scientific community heavily engaged in trying to support this hypothesis.

Where is all this leading? Basically, the positive finding in the Hooper meta-analysis that is so often trotted out in support of the diet-heart hypothesis is, I believe, an artifact of re-weighting smaller, more biased trials in the random effects model. When we resolve that by using another method, the inverse variance heterogeneity method, which has theoretical reasons for being the better method, this aberrant result disappears.
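To make the contrast concrete, here is a sketch of the three pooling approaches discussed: fixed effect, DerSimonian-Laird random effects, and Doi's IVhet. The trial data are made up; the key behaviours to observe are that IVhet keeps the fixed effect point estimate while widening its variance, and that random effects shifts weight toward the small studies.

```python
import math

# Made-up log relative risks and variances for five trials: two small,
# imprecise "protective" trials and three large trials near no effect.
y = [-0.60, -0.50, 0.15, -0.10, 0.05]
v = [0.25, 0.20, 0.02, 0.01, 0.01]

def fixed_effect(y, v):
    w = [1.0 / vi for vi in v]  # inverse-variance weights
    est = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    return est, 1.0 / sum(w)

def dl_tau2(y, v):
    # DerSimonian-Laird estimate of the between-study variance tau^2
    w = [1.0 / vi for vi in v]
    est, _ = fixed_effect(y, v)
    q = sum(wi * (yi - est) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    denom = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    return max(0.0, (q - (len(y) - 1)) / denom)

def random_effects(y, v):
    tau2 = dl_tau2(y, v)
    w = [1.0 / (vi + tau2) for vi in v]  # tau^2 flattens the weights
    est = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    return est, 1.0 / sum(w)

def ivhet(y, v):
    # IVhet: fixed effect point estimate, but a variance inflated by tau^2
    tau2 = dl_tau2(y, v)
    w = [1.0 / vi for vi in v]
    sw = sum(w)
    est = sum(wi * yi for wi, yi in zip(w, y)) / sw
    var = sum((wi / sw) ** 2 * (vi + tau2) for wi, vi in zip(w, v))
    return est, var

fe_est, fe_var = fixed_effect(y, v)
re_est, re_var = random_effects(y, v)
iv_est, iv_var = ivhet(y, v)
print(f"FE    RR = {math.exp(fe_est):.3f}")
print(f"RE    RR = {math.exp(re_est):.3f}")  # pulled toward small trials
print(f"IVhet RR = {math.exp(iv_est):.3f}")  # same point, wider variance
```

This is only a schematic of the published method, but it shows the mechanism the talk describes: the between-study term in random effects hands extra weight to the small, bias-prone trials, while IVhet leaves the weighting alone and pays for heterogeneity with a wider confidence interval.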
So will we get some final resolution? For me, the question is closed. There will still be debate in academia for years, but I think the weight of the evidence strongly supports the idea that saturated fat is not harmful. What about the big picture? Rob Kean and I published a paper some years ago that drew attention to the fact that there is a lot of evidence that eating more sugar and starch contributes to heart disease, gout, rotten teeth, cancer, weight gain, diabetes, high blood pressure, high triglycerides and low HDL, all of which are bedfellows of cardiovascular disease. So there is some pretty strong evidence that sugar and starch are causing mischief. On the other side we have saturated fat: I've just shown that there is a very questionable relationship with heart disease, and while there is some evidence that it increases LDL, I could find absolutely no link with any of these other risk factors. So if we zoom out and look at the big picture, it now heavily supports sugar and starch as a cause of cardiovascular disease.

Finishing up: progress with meta-analysis methods, I think, resolves these apparently contradictory results when it comes to the saturated fat meta-analyses. You need to be skeptical when you look at meta-analyses: they are often very influential, but they are often used to resurrect what are fundamentally disappointing trial results. I know it's boring, but considering the method of analysis is often important, because subtle differences can make quite dramatic differences to the overall results, and thus to the interpretation. For those wanting to dig into the gory details, this has been published in the internal medicine literature. Thanks very much for listening. [Applause]
Info
Channel: Low Carb Down Under
Views: 33,008
Rating: 4.7463675 out of 5
Keywords: Low Carb Down Under, LCDU, www.lowcarbdownunder.com.au, Low Carb Medicine for Doctors, #LCMFD19, saturated fat, Hooper study, meta analysis, epidemiology, relative risk, p-value, heterogeneity, cardiovascular disease, publication bias
Id: 9FROFwHjEms
Length: 30min 56sec (1856 seconds)
Published: Wed Sep 04 2019