Synthetic control methods: Introduction & overview of recent developments - Dr Carl Bonander

Let's get started. Thank you again, everybody, for switching over to Microsoft Teams. My name is Mark Fransham; I'm a departmental lecturer here in the Department of Social Policy and Intervention, and this is the first in our series Modern Methods in Social Policy and Intervention Research. I'm very pleased today to be welcoming as our first speaker Dr Carl Bonander, who is going to give us an introduction to synthetic control methods and an overview of recent developments. Thank you ever so much, Carl, for being here, and also for switching platforms with a few minutes' notice and being so calm about it. Carl is a university lecturer in the School of Public Health and Community Medicine at the University of Gothenburg. His research focuses on injury epidemiology and injury prevention from a societal perspective, and he is particularly interested in the development of methods for identifying the causal effects of societal safety interventions, as well as in socio-demographic inequalities in health and safety. Carl is no stranger to the department, because he has co-authored with several of its members, and later on I'll be calling on one of his co-authors, Michelle Degli Esposti, a postdoctoral researcher in the department, to offer her reflections on synthetic control methods as well. Carl, I think you're going to speak for about 45 minutes, give or take. I'll turn my video off while you're talking, but about five minutes before the end I'll pop up to remind you that you have about five minutes to go, and then we'll have 40 minutes or so for questions and discussion. Carl, can I leave the virtual floor to you? Thank you very much.
Sure, thank you. I'll just share my
screen here again. Are we good — can you see the screen and hear me? — Yes, we can. — One thing before I start: can you see my mouse? — Yes. — Good, because I'm going to point at some things. Thanks, all, for joining us. In this talk I will provide a non-technical introduction to the synthetic control method. The target audience for that part of the talk is basically those of you who are new to the method, or who have heard of it but don't really know exactly what it does. I realize this is a broad audience, so I'm going to try to make this interesting for as many of you as possible. Towards the end of the talk I will also provide a brief overview of recent advancements in the synthetic control literature, aimed a little more at those of you who are familiar with the method but want to learn more. Whenever you hear people talk or write about the synthetic control method these days, you'll probably see them cite a paragraph from a review paper by Athey and Imbens, published back in 2017, where they write that the synthetic control approach developed by Abadie, Diamond, and Hainmueller, and by Abadie and Gardeazabal, is "arguably the most important innovation in the policy evaluation literature in the last fifteen years". That's a strong statement; we'll see towards the end of this talk, when we get to the discussion, whether you agree. To give a quick overview of the history of the method: it was introduced back in 2003 by Abadie and Gardeazabal. We'll look at that paper later and see a little bit about their case study, but in short they study the economic costs of the conflict in the Basque region of Spain, and they felt they needed a new method to do that credibly, which is why they developed the synthetic control method. It was actually in
2010, seven years later, that it was formalized by Abadie and two co-authors, Diamond and Hainmueller, and at that point they also released software so that you could actually apply the method. Before this talk I did a quick literature search to see how many people publish papers whose title, abstract, or keywords mention the synthetic control method, and you can see that from that point onwards the number has increased roughly exponentially over time. So I guess they've done something right to make people this interested in using the method. Since then there have been some interesting developments in the literature that I will go through briefly at the end of this talk. To set the stage, and to circle back to what the method is actually trying to do: suppose that we want to estimate the effect of some policy change on an outcome. That could, for instance, be the effect of Florida's "stand your ground" law, which gives people the right to use deadly force in self-defense (I'm simplifying a bit, but that's the general idea). The main purpose of that type of law is presumably to deter crime, but there is a debate in the literature, because it seems it may actually increase homicides in some states, one of them possibly being Florida. I picked this as a case because some of you may be familiar with this intervention, since it has been studied by some of the researchers at your department, so I figured, why not run with something familiar. To estimate the effect, we need to estimate the counterfactual: what would have happened to homicide rates in Florida without the stand-your-ground law? Here is some raw data showing homicide rates per population in Florida over time; the stand-your-ground law was implemented in October
2005. You can see that, in conjunction with that, homicide rates actually appear to have increased, which I don't think was the intention. But is this a causal effect of the stand-your-ground law, or would homicide rates have increased anyway? That is the main question we are trying to answer when we estimate the effects of these types of policies. A typical empirical strategy would be to compare Florida to other, similar states, but the credibility of that kind of study, and of the effect estimates it produces, depends entirely on how comparable those states are to Florida. So how do we identify the most appropriate comparison states? In these settings, I would say that validity is typically judged on two criteria. (By validity I mean the risk of bias, which we want to be as low as possible.) Both are related to similarity between Florida, in this case, and whatever comparison state or states we choose, on two key things. One is the predictors of the outcome: socio-demographics, gun availability, crime rates, or anything else we think is correlated with homicide rates. But perhaps even more importantly, we want the comparison states to be similar to Florida in the temporal trends in the outcome before the policy change. We want to see the intervention state and whatever comparison we use following the same trends before the intervention takes place; otherwise the comparison will be pretty unconvincing. If they have completely different trends in the outcome, those trends are driven by changes in a lot of other things, and the states will probably also differ a lot on things we cannot observe. We don't want that. Say now that we have the option to choose between 15 other U.S. states that do not have similar self-defense
laws. We could compare Florida to the average of these states, which is what we have in this plot. Maybe that could work, but one thing you might notice is that the levels are completely different to begin with, so maybe we want a comparison that is a bit closer to Florida in terms of homicide rates before the intervention. We could pick a single similar state — here we have New York, which is not super convincing either, I would say — or we could pick an average of a subset of well-selected states. There is nothing obviously biased about any of those choices, but manually searching for the best one is unappealing. For one thing, it can be pretty time-consuming, and at least for me, I'm a big fan of letting the computer do things for me when it can, to save time. More seriously, though, a manual search can lack transparency and formality if we don't set up objective criteria for picking the best control; in the worst case, you might end up picking the control that gives you the results you want, and we want to avoid that as much as possible. The final choice may also be sub-optimal, because we didn't consider all possible combinations of states. This is basically what the synthetic control method tries to address. It uses a data-driven algorithm to search for the best available comparison in the data, with the goal that the optimal control should at least approximately match the intervention state — Florida, in our example — on the two things we mentioned earlier: important pre-intervention predictors of the outcome, and pre-intervention trends in the outcome. It does so by conducting a data-driven search among all possible weighted combinations of the comparison states in the data, which expands the search beyond what we
considered earlier, when we talked about raw averages and comparisons with single states. It would be pretty difficult to search among weighted combinations manually, and this is really where you see why a data-driven method can be a good choice. The idea is to find the optimal set of weights for a weighted average that constructs a new comparison state from the data, which is where the name "synthetic control method" comes from: we are trying to construct a synthetic control state — in this case a synthetic Florida without a stand-your-ground law. It is a weighted average of the other states in the data, referred to as a synthetic control, and if everything works as intended, this weighted average should be as similar as possible to Florida (or whatever intervention state you are looking at) before the policy change. To see a little more closely what it is trying to do, without going into too much technical detail: the objective is to find the optimal weights that minimize the difference on the two things we mentioned. First, the pre-intervention outcome trends — the data before the policy change — which we want to match in terms of both trends and levels. Second — and this part is technically optional; you don't have to include it to run the synthetic control method, but in most applications you will see people do it — we also want to match on other predictors that we think are important for the outcome we are trying to predict. We want the synthetic Florida to be as similar to Florida as possible on some covariates as well; in this case that could be unemployment rates, the share of Republican voters in presidential elections, firearm ownership, violent crime rates, and anything else we think could be associated with homicide rates. The algorithm is actually pretty advanced:
it runs a bilevel optimization to also determine the importance of each of these predictors, trying to find and prioritize a good match on the ones that are most strongly correlated with the outcome. It does that because we often have pretty limited data — in this case we have 15 states — and we are not going to be able to match perfectly on every predictor, so it prioritizes the important ones. If you are wondering about the details of exactly how it does that, I recommend the appendix of the first paper that introduced the method, which explains this. If there is one take-home message from this slide, it is that what we end up with is a set of unit weights that have been optimized to create a weighted average from the data that resembles Florida in the pre-intervention period as closely as possible. Here is what that can look like. I have run the algorithm, and it spits out some weights, which can be interpreted in a relative manner: the 0.35 assigned to New York, for instance, means that New York has a relative contribution of 35 percent to this weighted average, or synthetic control state; Maryland has a relative contribution of 26 percent. The synthetic control is a weighted combination of these states, and if a state gets a weight of zero, it is basically excluded: the algorithm has determined that it is too different to be included. What this can look like in the end is something like this: we now have a synthetic control state — a synthetic Florida — that resembles the actual Florida on the pre-intervention outcome levels and trends rather better than the alternatives we looked at earlier. In terms of results, homicide rates appear to have increased more in Florida than in this weighted average of similar states, so we would interpret that as evidence that this may actually be an effect of the policy. To see a little more closely what the method does, I have simulated some data — this is completely made up — on some outcome variable y, which we can pretend is a homicide rate, measured over time in different "states". We have an intervention state, the black line here, that experiences some intervention or policy change at around the halfway mark. Because this is made-up data, we know what would have happened in the absence of the policy change: that is the dashed line, and it is exactly what we are trying to estimate. The effect is then the vertical distance between what actually happened and what would have happened. The synthetic control method runs the optimization on what I have called the training period — the pre-intervention period — to find the optimal set of unit weights: each of these control series gets one weight, and taking the weighted average of the series per time point gives us the estimate of the counterfactual, what would have happened in the absence of the intervention. Since this is simulated data, we know what would have happened, and the estimate matches it pretty closely. Of course, with real data we never know for sure, so we always have to consider other sources of bias, such as other things changing at the same time. But this is the general idea: the weighted average of the control series, trained on the pre-intervention period, should match the treated state as closely as possible before the intervention, and if they diverge afterwards, we might interpret that as an effect.
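As a rough illustration of the optimization just described — a minimal sketch, not the authors' actual software, ignoring covariate matching and the bilevel predictor weighting, and with all data made up — the core step of finding non-negative weights that sum to one and minimize the pre-intervention mismatch could look like this in Python:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy balanced panel: 20 pre-intervention periods, 10 control units.
T0, J = 20, 10
Y0 = rng.normal(size=(T0, J)).cumsum(axis=0)        # control outcomes (T0 x J)
true_w = np.array([0.6, 0.4] + [0.0] * (J - 2))     # "truth" used to simulate
y1 = Y0 @ true_w + rng.normal(scale=0.05, size=T0)  # treated unit's pre-period outcomes

def pre_period_mse(w):
    # Mean squared gap between the treated unit and the weighted
    # average of controls over the pre-intervention period.
    return np.mean((y1 - Y0 @ w) ** 2)

res = minimize(
    pre_period_mse,
    x0=np.full(J, 1.0 / J),                          # start from equal weights
    bounds=[(0.0, 1.0)] * J,                         # weights must be non-negative
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],  # ...and sum to one
    method="SLSQP",
)
w_hat = res.x
synthetic = Y0 @ w_hat   # weighted average per time point = the synthetic control
```

In a real application the counterfactual would then be extended into the post-intervention period by applying the same weights to the controls' post-intervention outcomes.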
There are some contextual requirements for this whole thing to work. You need access to balanced panel data: time series from both before and after the intervention, for the state or region that changes its policy and for a set of control states that do not. You need at least two potential controls for the approach to make sense — taking a weighted average of one control isn't very meaningful — but you will typically use more; there are no hard and fast rules, but anywhere between 10 and 50 is pretty typical in applications of the method. Preferably, you also want a fairly long pre-intervention period, to give the algorithm a chance to actually match on trends: we want it to be able to pick up the different dynamics in the data, and if the time period is too short it will not be able to distinguish between good and bad controls in terms of trends. Again, there are no fixed rules — it depends too much on the context and the dynamics of the outcome you are looking at — but preferably long. What this means in practice is that the method will typically be applied to study regional interventions using within-country register data; mortality, for example, is usually measured over time by region, so if you have a homogeneous register like that, it may be possible to apply the method. You might also see studies of national interventions, but those require good cross-country data that is comparable and measured in the same way over time.
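As a small practical aside — with made-up numbers and column names — checking that long-format data really is a balanced panel, and reshaping it into the one-column-per-unit layout the method needs, might look like this:

```python
import pandas as pd

# Toy long-format panel: one row per (state, year) combination.
panel = pd.DataFrame({
    "state": ["FL", "FL", "NY", "NY", "TX", "TX"],
    "year":  [2004, 2005, 2004, 2005, 2004, 2005],
    "homicide_rate": [5.4, 5.8, 4.5, 4.4, 6.1, 6.0],
})

# Balanced means every unit is observed in every period.
counts = panel.groupby("state")["year"].nunique()
is_balanced = counts.nunique() == 1

# Wide layout: rows are time points, columns are units.
wide = panel.pivot(index="year", columns="state", values="homicide_rate")
```

If `is_balanced` is false — say, some states are missing years — you would need to restrict the sample or the time window before applying the method.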
So let's look at some empirical examples to get a sense of what the method has been used for. I figured we could start with the first study, where the approach was originally suggested: the paper by Alberto Abadie and Javier Gardeazabal on the economic costs of conflict. They use the terrorist conflict in the Basque Country as a case study. The Basque Country is a region of Spain that, between 1968 and 1997, experienced many more terrorist attacks and much more conflict than the rest of Spain; a quick comparison of deaths per million inhabitants per year due to terrorist activity shows a rate about 40 times as high in the Basque Country. They wanted to know whether this outbreak of terrorism affected regional per-capita GDP. The empirical problem they ran into was that there were no great comparison regions they could use: the Basque Country was quite different from the rest of Spain, at least in the 1960s before the terrorist attacks. In terms of the outcome there was a pretty large difference — the Basque Country had a much higher GDP per capita than the rest of Spain — and it also had a higher population density, and other economic predictors differed quite a lot as well. They wanted a more similar comparison, so their idea was to create this method and construct a synthetic Basque Country as a weighted average of other Spanish regions. What they ended up with — using their suggested optimization algorithm — was a weighted average of Catalonia and Madrid. You can see in this plot that it follows the actual Basque Country (the solid line) closely in the pre-terrorism period, but as the
terrorist attacks started to spike, somewhere in the 1970s and 1980s, the two lines start to diverge in terms of GDP per capita, which we would interpret as an effect of the terrorist conflict. Another way to present this, which is quite common in synthetic control applications, is to take the difference between the two lines — the outcomes in the synthetic control versus the actual outcomes — to estimate the effect per time point. In the pre-intervention period we would expect these differences to be centered somewhere around zero, but once the intervention is implemented, or in this case once the terrorist attacks really start to spike, we would expect to see something happen if there is an effect. What they find is that GDP per capita was reduced by about 10 percentage points at that time, and their synthetic Basque Country is also a bit more similar to the actual Basque Country on the other predictors. A more recent example, which I think is really interesting, is a study from Germany that looked at the effects of regional face mask mandates, where some regions of Germany made it mandatory to wear face masks in public transport and shops earlier than others. This is an example from one of those regions — the city of Jena — in terms of the cumulative number of COVID-19 cases. Jena mandated the use of face masks on April 6, pretty early in the pandemic, and what the authors find is that, compared to a synthetic control region that is very similar to Jena before the mandate, Jena's cumulative case count started to level off, while in the weighted average of the other regions it continued to rise. They interpret this as a pretty large effect of the mandate. What I haven't mentioned so far is something you would typically see
in many of these quantitative impact evaluation studies: p-values and confidence intervals. Are these effects due to chance? We might want to know that. It turns out that it can be pretty tricky to get valid p-values and confidence intervals for data-driven methods in general, especially if you only have a single intervention state. What Abadie and colleagues suggested instead is to assess uncertainty using what they call placebo studies, which is one of the things they did in their 2010 paper. In that paper they look at the effect of California's tobacco control program, Proposition 99, on per-capita cigarette sales in California. They use the synthetic control method to create a synthetic California and find evidence of an effect in the expected direction: there seems to have been a reduction in cigarette sales in California, relative to the synthetic control, after the program was introduced. Another way to express that — a variant of what we saw earlier — is the difference between the actual California and the synthetic California in terms of cigarette sales over time; we would expect this to be around zero in the pre-intervention period, after which the two start to differ, which we could interpret as a potential effect. But what is the significance of this — could it have arisen by chance? If we just randomly picked a state in the data and applied the synthetic control method there, maybe we would find something similar. That is the general idea behind a placebo study: pretend that the intervention took place in another region at the same time, and then run the same analysis there.
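The placebo logic can be sketched in a few lines of Python — a plain, covariate-free variant of the method on toy matrices, not the authors' actual implementation: refit a synthetic control for each unit in turn, as if that unit had been the treated one, and record its post-period gap.

```python
import numpy as np
from scipy.optimize import minimize

def fit_scm_weights(y_target, Y_donors):
    """Convex weights minimizing pre-period MSE (plain SCM, no covariates)."""
    J = Y_donors.shape[1]
    res = minimize(
        lambda w: np.mean((y_target - Y_donors @ w) ** 2),
        x0=np.full(J, 1.0 / J),
        bounds=[(0.0, 1.0)] * J,                                       # w >= 0
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],  # sum(w) = 1
        method="SLSQP",
    )
    return res.x

def placebo_gaps(Y_pre, Y_post):
    """Treat each unit in turn as if it were the intervention unit and
    record its post-period gap from its own synthetic control.
    Y_pre and Y_post are (time x units) arrays; unit 0 is the real treated unit."""
    n = Y_pre.shape[1]
    gaps = {}
    for j in range(n):
        donors = [k for k in range(n) if k != j]
        w = fit_scm_weights(Y_pre[:, j], Y_pre[:, donors])
        gaps[j] = Y_post[:, j] - Y_post[:, donors] @ w
    return gaps
```

Comparing the treated unit's gap series against the distribution of placebo gap series is then exactly the spaghetti plot described below: if the treated unit's divergence is unremarkable among the placebos, the "effect" could plausibly be noise.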
In the 2003 paper on the costs of conflict, they did exactly that: they ran a placebo synthetic control analysis on Catalonia, to see whether they could find similar effects on GDP per capita there. The synthetic Catalonia and the actual Catalonia follow each other very closely throughout the entire period, so there is no similar evidence of an effect, which lends some additional credibility to the main result for the Basque Country. What they suggested in the 2010 paper was basically a generalization of this: loop over all the states in the data, run a synthetic control analysis on each, and then compare the size of the effect to the effect in the actual treated unit. When you do that, you usually get something that looks like this. It is a bit messy, but the black line is the actual estimated effect in California, and the gray lines are the estimated effects obtained with the same approach in all of the other comparison states in the data. You might notice some really wild lines at the beginning: sometimes the synthetic control method fails to find a good match, so for some of these states you are not going to get a great fit, and you end up with lines like these. To use this as an inference method you probably want to handle that somehow, but the general idea is to compare the actual estimated effect to the distribution of "fake" effects in the other states. As a way to handle the poorly fitting synthetic controls, what they suggested — and I realize this is getting a little technical, but I'll try to explain it — is to compare the ratio of the average absolute difference between the actual outcomes and the synthetic control in the post
period versus the same metric in the pre-period. If we look at this plot again, we would take the average of the absolute values of these differences — more precisely, it is actually the root mean squared prediction error, but that is almost the same thing; taking absolute values simply makes the measure sign-less, so it does not matter whether a difference is positive or negative. So you take the average absolute difference in the post-period and divide it by the average absolute difference in the pre-period, and that gives you a measure of the size of the post-intervention effect relative to how far the synthetic control deviates from the actual values before the intervention. What that means is that the really poorly fitting units are down-weighted, and California is compared to units with an equally good fit: a big effect in a state with a well-fitting control is prioritized over a big effect in a state with a poorly fitting control. With these ratios you can then draw a histogram like this one and check where California sits in the distribution. California is the actual treated unit, and we can see that, relative to pre-intervention fit, the estimate for California is exceptionally large compared to the other states. Can you use this to calculate something like a p-value? That is what they do, and I tend to interpret it as the probability of finding an effect of equal or greater size than in the actual intervention unit in this data. In this data there are 39 states, and California ranks at the top of the list, so you take one over 39 and get a probability like this. If California had been in second place — if there had been another, fake placebo state with a larger effect — the value would be higher: you would put two over 39 and get a higher probability of finding an effect of equal or greater size.
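The ratio statistic and the rank-based p-value just described can be written down compactly — again a sketch of the procedure as described, with illustrative function names, not the published code:

```python
import numpy as np

def rmspe(actual, synthetic):
    # Root mean squared prediction error between two series.
    return np.sqrt(np.mean((np.asarray(actual) - np.asarray(synthetic)) ** 2))

def rmspe_ratio(actual, synthetic, t0):
    # Post-period fit divided by pre-period fit: large values mean a big
    # post-intervention divergence relative to how well the control fit before.
    return rmspe(actual[t0:], synthetic[t0:]) / rmspe(actual[:t0], synthetic[:t0])

def placebo_p_value(treated_ratio, placebo_ratios):
    # Share of all units (treated + placebos) whose ratio is at least as
    # extreme as the treated unit's: rank / number of units.
    ratios = np.append(placebo_ratios, treated_ratio)
    return np.mean(ratios >= treated_ratio)
```

With 38 placebo states and the treated unit ranked first, this gives 1/39, matching the Proposition 99 example; a second-ranked treated unit would give 2/39.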
So it is a bit tricky to assess uncertainty in these studies, but another way of trying to do so, which they also suggested, is to shift the intervention date to a placebo date before the actual intervention. Here we have a case where they look at the effect of the reunification of Germany on GDP per capita in West Germany. The idea is that when you have a really long pre-intervention period, you can shift the date of the reunification back — to 1975 in this case — train a synthetic control on the pre-period using this fake placebo date, and see whether you get any evidence of an effect in a period where you should find none. Here they do not, which again lends some additional credibility to the main estimate. All right, I'm now going to get into the second part of the talk, focusing a bit more on generalizations and other developments. I'm just going to go through a few of them — I'm sure this won't be comprehensive, so if you know of others, I'll be happy to hear about them later. I'll go through these fairly quickly and won't give all that much detail, but I will provide references, so if you want to learn more you can have a look at those. One important generalization concerns the fact that the original synthetic control algorithm could only be used for data with a single intervention unit, while many interesting policies are implemented in multiple regions, possibly at different time points — for instance, the staggered implementation of marijuana laws in the U.S. So instead of the single-intervention-unit data structure we talked about before, you might more generally have multiple intervention units with either simultaneous adoption, where
every state or region implements in the same year, or staggered adoption — probably the more common case — where different states implement in different years. A way to handle that was provided by Xu in 2017, in a really nice paper on the generalized synthetic control method, which provides methods and algorithms to generalize the synthetic control method to cases with simultaneous or staggered adoption across multiple units. The idea is conceptually similar, but what we get in the end are average effects expressed relative to the time of the policy change. In the example here, Xu looks at the effect of election day registration on voter turnout in the United States, with estimated effects plotted relative to the reform, and he estimates that election day registration appears to have increased voter turnout by about four percentage points. You may also notice that we actually have confidence intervals here, which, as I mentioned before, can be pretty tricky for these methods; it turns out to be less complicated when you have multiple treated units, because then you can rely on the bootstrap, which is what he does. Another nice generalization concerns multiple outcome measures: it is typically better to train a single synthetic control to match on all of the outcomes simultaneously, since otherwise we might end up with a different synthetic control for each outcome variable, which gets messy. It turns out to be pretty straightforward to extend the synthetic control algorithm to match on multiple outcomes; software and theory for that were provided by Becker and Klößner in the paper cited here. As I also mentioned, I will give a list of R packages that you can use to apply these different methods towards the last slide of the talk, so I'll show you there.
Another interesting development is bias correction methods. As we saw a little earlier when we looked at those placebo studies, the synthetic control method can sometimes fail to provide a fully convincing control; sometimes there will be a lot of imbalance left on important predictors or on the outcome trends. A method suggested by Ben-Michael et al., called the augmented synthetic control method, offers a way to correct for that. It corrects for bias due to residual imbalance, and it does so by combining the synthetic control method with outcome regression models. That may not make a lot of sense if you're not familiar with these models, but the key take-home message is that this can be a potentially useful complement in cases where the original method fails. For those of you who are familiar with doubly robust estimation or augmented inverse probability weighting, it's basically the same idea. Here we have an example; I would say the imbalance is not extreme, but this is what the original synthetic control method gives in this case. You have some differences in the pre-intervention period that you might want to handle, and the augmented version corrects for some of that.

Another, somewhat similar idea, though it's a bit of a different class of methods, addresses the fact that the original synthetic control method cannot be used if the intervention unit is at the extreme end of the distribution of states. If we think of this as the data we're going to use, we need a case where the intervention unit lies within the distribution of the control data, because the original algorithm only allows for interpolation within these lines; it does not allow for extrapolation. So if this is the treated unit, the original synthetic control method will not be able to find a good match, and that is because the unit weights it optimizes are constrained to be non-negative and to sum to one. This is actually by design: the authors wanted the method to be used only when you can construct a synthetic control based on interpolation, to avoid extreme counterfactuals, because if you start extrapolating too far from the control data you might get some really strange results. So their argument was, don't use the method if you have this case. Recently, though, alternative approaches have been suggested that allow for some extrapolation but penalize it in the optimization, so that the method constructs a control based on interpolation if it can, and relies on some extrapolation to correct for bias only if it can't. There are basically two ways of doing that: one is to allow for negative weights, and the other is to allow the weights to sum to something other than one. The latter turns out to be potentially better if you have non-negative outcomes like count data, and that is what I suggest in a paper of mine that is going to be published in Epidemiology in a couple of months. It's really similar to the ideas in the augmented synthetic control method, which basically allows for negative weights to handle this bias. This is just an example from the paper I'm working on, where the original synthetic control method fails to find a good control for Sweden; I'm looking here at the effect of a road safety policy on fatality rates in road traffic accidents. Sweden is at the extreme end, and has been for quite a long time, with one of the lowest road traffic fatality rates in the world, so you're not going to be able to find a synthetic control based on interpolation. Allowing for extrapolation in that case will give rise to
a better control, basically.

There has also been some progress in developing statistical inference methods, that is, actually getting confidence intervals for synthetic control estimates. I'm not going to go into the details here, but for those of you who are interested, these three papers appear to be the most useful to me; there may be more. I would especially point to the last one, which proposes a practical and robust t-test that is pretty easy to implement, so if you're going to look at any of these, I would suggest looking at that one.

Just to sum up, this is a list of software implementations, so if you're interested in using these methods there are a couple of packages. The original synthetic control method can be applied using the Synth package for Stata, R, or MATLAB, but for the other generalizations I think you have to turn to R; if you know of packages for other programs, maybe you'll tell me later. The generalized synthetic control method is in the gsynth package, matching on multiple outcome measures is in a package called mscmt, and the augmented synthetic control method is implemented in augsynth.

All right, some final remarks before we get to the discussion. The goal of SCM is to provide a data-driven framework for selecting appropriate comparisons in policy evaluation studies. The data requirements are a bit steep, which will potentially limit the number of applications you can use it for, but some recent developments in the synthetic control literature have greatly extended the range of situations where it can be applied. I think that's it for me. Thanks for listening.

Thank you ever so much, Carl, that was absolutely fantastic: such a clear and concise run-through of what's become a very influential method. Absolutely fantastic, thank you ever so much.
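The placebo-based inference idea discussed in the talk, refitting the synthetic control while pretending each control unit was the treated one and then ranking the real result among the placebos, can be sketched in a few lines. This is a hedged toy illustration in Python with invented data, an invented effect size, and invented helper names; it follows the spirit of the post/pre prediction-error ratio test rather than reproducing any package's code.

```python
import numpy as np
from scipy.optimize import minimize

def scm_weights(Y_pre, y_pre):
    """Convex donor weights (non-negative, summing to one)."""
    J = Y_pre.shape[1]
    cons = {"type": "eq", "fun": lambda w: w.sum() - 1.0}
    res = minimize(lambda w: np.sum((y_pre - Y_pre @ w) ** 2),
                   np.full(J, 1.0 / J), bounds=[(0.0, 1.0)] * J,
                   constraints=cons)
    return res.x

def rmspe_ratio(Y, idx, T0, exclude=()):
    """Post/pre RMSPE ratio treating unit `idx` as the intervention unit."""
    drop = sorted({idx, *exclude})
    donors = np.delete(Y, drop, axis=1)
    w = scm_weights(donors[:T0], Y[:T0, idx])
    gap = Y[:, idx] - donors @ w
    pre = np.sqrt(np.mean(gap[:T0] ** 2)) + 1e-12
    return np.sqrt(np.mean(gap[T0:] ** 2)) / pre

rng = np.random.default_rng(2)
T, N, T0 = 24, 8, 16                  # periods, units, intervention time
Y = rng.normal(size=(T, N))           # toy outcome panel; unit 0 is "treated"
Y[T0:, 0] += 10.0                     # inject a large post-intervention effect

# Placebo runs drop the truly treated unit from the donor pool so that
# its (real) effect cannot leak into the placebo fits.
ratios = [rmspe_ratio(Y, 0, T0)] + \
         [rmspe_ratio(Y, i, T0, exclude=(0,)) for i in range(1, N)]
p_value = np.mean([r >= ratios[0] for r in ratios])
print(p_value)   # 1/N when the treated unit's ratio is the most extreme
```

With only eight units the smallest attainable p-value is 1/8, which illustrates the point made in the talk that this kind of uncertainty assessment gets coarse when the donor pool is small.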
Info
Channel: Department of Social Policy and Intervention Oxford
Views: 1,434
Id: e5hmK5GzCHc
Length: 47min 21sec (2841 seconds)
Published: Wed May 05 2021