[MUSIC] I want to divide the talk into three parts. Originally, I was going to do five parts, but then I read earlier this week that there was a professor at New York University who got fired for trying to do too much in his classes. So I will try to keep to three. In the first part, I want to talk a little bit about the last year, which has been a very eventful year for me. Second, I want to talk a little bit about what kind of research I do, and in general about causality. Third, I want to make some comments about some of the work I've been doing with some of the tech companies in the last ten years or so, since I moved here to the GSB. In that part of the talk, I'm going to reflect a little bit on what I've seen there, what I've learned there, and what I think we need to do at the GSB to ensure that the students here are well prepared to be effective participants in data-driven decision processes at such companies. At a high level, that doesn't necessarily mean being able to do all the analyses yourself, but people need to understand these analyses, and they need to be able to ask the right questions, and I'll try to illustrate that. So let me first start by sharing with you some of my experiences from the last year, and this is a particularly nice time to do this, because this coming Monday, at about 2:15 AM Pacific Time, the President of the Royal Swedish Academy of Sciences will again be making a couple of calls to some lucky economists, telling them they've won the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel. I'm giving the full name of the prize here because, although it's often referred to as a Nobel Prize, it was first awarded in 1969. It wasn't part of the original five Nobel Prizes, but it's part of the same organization now, and they use the same criteria. So one funny story here is that last November, Swedish TV sent a crew over here to make a program about the laureates in economics.
And so they spent a day and a half following me around, then a day and a half following David Card, and then a day and a half following Josh Angrist. And out of that day and a half, they condensed it to ten minutes. But a key part did include this exchange at the breakfast table, where my ten-year-old daughter asked me if this wasn't really just the fake Nobel Prize. >> [LAUGH] >> So, back to a year ago, the week had actually been a pretty good week in our household. My wife had just been elected President of the American Economic Association. That wasn't quite as much of a surprise as it could have been, because it's kind of a Soviet-style election, where there's only one candidate. >> [LAUGH] >> So the honor was more in getting nominated for that in the first place, but it was a very good thing nevertheless. In addition, earlier that week, I'm from the Netherlands originally and I like to go biking, and I broke my personal record going up, many of you probably know where that is. My previous personal record was from four years earlier, so I was actually very pleased breaking it; I think it was 24 minutes 31. The world record's around 14 minutes, so I'm way behind there, but I was pleased nevertheless. So it was a good week. Then Monday morning, at 2:13, my phone rang. Then I looked, and it actually turned out it had rung a little earlier, at six minutes past two. And I saw the call had come from Sweden, so that was a very, >> Indescribable. >> There you go, it was a very indescribable moment. So I picked up the phone, and the interesting thing is that the call comes from someone named Adam Smith. >> [LAUGH] >> For the chemists and the physicists, that may not really be that unusual. But for an economist, that's a really unusual call to get, because we associate that name more with someone who was a very good economist, but who died a long time ago, in 1790.
So pretty much the first thing they ask is if you're actually willing to accept the prize. And actually, I said, well, before saying that, who am I sharing it with? It turns out I was sharing it with David Card, who was a former colleague of mine at Berkeley, and Josh Angrist, who was a colleague of mine at Harvard a long time ago. He's actually a very close personal friend; he was the best man at our wedding. So I was extremely pleased to be sharing it with them, and I said, yes, I was happy to accept. Then he said, well, you have about 20 minutes before we announce this at the press conference. I would like you to call in, so go get a cup of coffee, because it's going to be a big day. Later, I talked to one of the people from the Nobel Foundation, and you get the distinct impression that they really like giving these prizes to people in California and waking them up in the middle of the night. >> [LAUGH] >> And I've seen some more of these videos now, where they talk to these people five minutes after they've woken them up, and it's sort of clear they're not very coherent. I wasn't very coherent in the press conference, but that's sort of part of the mystique, and it makes for very interesting social media. The year before, there were two other people here from Stanford who won the prize in economics: Bob Wilson here from the GSB, and Paul Milgrom from the economics department. They actually live on the same street, and Paul didn't answer his phone, so there's this video where Bob Wilson walks over to Paul's house, which is just half a block away. And actually, we used to live on the same street in the early 2000s; it's a very good street, apparently. So Bob Wilson walks over to Paul's and rings the doorbell, and there's a doorbell camera automatically recording this. And Bob goes, Paul, Paul, wake up, wake up, you've won the Nobel Prize. So that video made a big impression on the Internet.
After that, very quickly, the Stanford media team showed up; they were apparently very well prepared in general for these things. They showed up with a whole team helping to handle some of the media inquiries, taking pictures, taking videos, and sending out a series of clever tweets. So here are some of the pictures from very early that morning. The three kids got very excited about the whole thing and started making breakfast for the Stanford crew. And here, in one of the other pictures, we're looking at some of the emails coming in, which my son was actually managing. It just made for a very exciting morning. And clearly, Twitter is actually a perfect medium for this type of event; there were just a huge number of tweets coming in. My favorite one that Stanford sent out is this one here, which I am happy to use whenever my kids are complaining these days. Stanford just did an incredible job. One of my former students, Paul Goldsmith-Pinkham, commented at some point that, man, the Stanford media team is really killing it. And this is one place where they clearly beat MIT and Berkeley, where Angrist and Card are. What are some of the other funny things that happened that day? One was that National Public Radio, following in a long tradition, confused causal and casual. And so they told their listeners that the part of the prize that Josh Angrist and I shared was for our study of casual relationships. Susan wasn't particularly impressed with that. >> Another thing is that at Berkeley, where I used to be, there's actually an additional benefit that comes with winning the Nobel Prize: you get your own parking spot. Now, I don't actually need that here, because we live on campus, so I actually ride my bike to campus. Then later I found out that at some places, including in the Netherlands, they do actually give reserved, dedicated parking spots to Nobel laureates.
So I was tempted to go ask the Dean if I could get my own bike parking spot. But I found out that the most recent Nobel laureate in the Netherlands had one of those spots, and then had his bike stolen two weeks later. Presumably the thieves figured out that there was going to be a valuable bike there, or at least a memorable one. So I decided to just stick with parking it anonymously here in front of the GSB. Now, a second thing is that since getting the prize, I have to give a lot more general talks; before, I typically gave academic talks. Some of these audiences are much more general, and that obviously creates challenges. So what I've done here is what I do with most of the challenges coming my way as an academic: I tried to find people who are experts on this, and to do some research on it. And some parts of it have at least improved. But one of the most challenging ones was that I had to give the commencement speech at my son's high school graduation. He was graduating last year. That was a particularly tricky audience, in that at some level there was just an audience of one that mattered here; obviously, the worst outcome would be if my son was unhappy with it. But we have two more kids at the school, so I also had to make the other constituencies happy, including the parents, with whom we interact as a family, as well as the teachers. So I was very worried about that for a while, and I spent some time thinking about what to do there. What I actually did at the high school commencement was talk for a little bit about how I got to the topic. I started by describing how I went to Brown University for their commencement a couple of weeks earlier. I'd gone to graduate school there, and I went back for commencement because they gave me an honorary degree. And so at Brown, I had a chance to listen to some of the commencement speeches there.
There were some very interesting speakers at that commencement, including Nancy Pelosi. But also, actually, sorry, I forgot: even though I did not get a parking spot, I did get a street named after me in the Netherlands, in front of my old high school. So there was that. [APPLAUSE] I'm not quite sure what to do with that. It was a very interesting experience, but back to the visit to Brown for commencement. One of the commencement speakers there was Shaggy. Now, I had not heard of Shaggy at that time, and my kids had not heard of Shaggy then. But there's sort of a group in between, including most of the teachers at my son's school, who had heard of Shaggy. And so when I brought up his name, they all perked up, because they were kind of curious where that was going to go. For those of you who don't know who Shaggy is, he is a reggae artist. He had a big hit in 2000 called It Wasn't Me. So I started describing this, and now the teachers got very interested to see how I would pull it off, because the song is about this guy who cheated on his girlfriend, and his girlfriend caught him. And he goes to Shaggy and says, well, what am I going to do? And Shaggy says, well, just deny it. The rest of the song sort of goes into graphic detail about all the evidence that the girlfriend has, the texts, the phone calls, the videos, all the incontrovertible proof, and Shaggy every time says, just say it wasn't me. So then I told that story, and I said, well, I was just confused why they thought it was actually a good idea to have Shaggy do that song. Is that a good idea in this day and age, just not taking any responsibility for what you've done, a complete denial of all truth? So afterwards, I talked to Shaggy and said, well, who told you it was a good idea to send the Brown students into the world with this message of just denying all truth?
And then Shaggy said, well, it wasn't me. So that actually worked incredibly well for the commencement, because the teachers really liked it. And the students were very confused, but the teachers were so happy that the students thought it must somehow be interesting. >> [LAUGH] >> And so that went very well. So let me now switch to the second part, where I want to talk a little bit about my research, and as Steve said, this will be on the exam at the end. But rather than do my own version, I'm going to show you a video that Stanford recorded, again, in the early hours of October 11th last year. A colleague of mine used it in one of his classes last year, and the students said, wow, that was actually much clearer than what Imbens said himself. So I'm not going to tempt fate here; I'm just going to play the video. >> My name is Carleton Imbens. >> Pop, can you just, >> Susan, can you take my phone, actually? >> From what I understand, >> There it is, you are famous. >> Let's say I want to understand. [MUSIC] >> From what I understand, you take data and you kind of run your own experiments without actually running an experiment. You can take data and use it as if it was your experiment. >> Yeah, that's right. In a lot of cases where we try to get the answer to important questions, we do experiments. When they were trying to figure out if the vaccines for COVID were working, they did all sorts of experiments. But in a lot of cases in economics, you can't do experiments, it just doesn't work. We can't say, well, you go to school, and we're going to have this whole other set of kids, and they're not allowed to go to school. >> Yeah, that's not. >> So that wouldn't work. So we need to try to tease out those things from data where people just make their choices and do what they want to do. And so we try to come up with clever ways of still teasing out these effects.
>> And so you might, like, use behavioral things, or? >> Let's see, what I've done is a lot of the methodology, trying to help people understand exactly how these things work and giving them better methods for doing these things. So in another study, one of my friends who won the prize with me, Josh Angrist, was interested in the effect of getting more schooling, getting more education, on earnings. And he used the fact that compulsory schooling laws change things a little bit depending on when you were born. If you're born on September 30th, you need to go to school earlier than if you're born on October 1st. It doesn't really make you a different person, but it makes you, on average, get a little bit more schooling. And so he used that as an instrument, as a way of teasing apart the correlation and the causality. And he could see from there that the people who stayed in school just a little bit longer actually had higher incomes later. >> It's very interesting- >> Thank you. >> How you can take data from things that were completely not intended for you. >> Yeah. >> Or anything, and then use it just to draw these astounding conclusions. >> I'm Andrew Imbens, the spelling of my first name is A-N-D-R-E-W. I'm going to assume my last name is- >> [LAUGH] >> [LAUGH] It's the same as mine. >> It's okay. >> Congratulations. >> Thank you, thank you. It was a very exciting morning. >> Yes. >> [LAUGH] >> Yeah, so I was wondering about some of the applications of what you've been doing. Because we've talked about what it is, but what might you use it for? >> Yes, so in general, if you're interested in social policy, it's important to know what would happen if you give everybody some guaranteed income. What would that do to society? Would people still try to look for a job, or would they just be happy to sit at home?
Because you can't easily do an experiment there, but you can look at people who play the lottery, because some of those people are going to win some big amount of money, and it's kind of like having a basic income. In fact, in the lottery we looked at, in Massachusetts, if you won half a million dollars, you wouldn't get a check for half a million dollars. You would get a check for $25,000 every year for 20 years. So that's very much like having a guaranteed income. We looked at what happened to those people: did they stop working? Did they retire early, or did they keep working? It turned out most of the people really just kept working. It was very nice for them to win the lottery, but it didn't really change what they did. That helps inform public policy. And it gives you a credible way of looking at what the effect is of having some income, which would be very hard to do otherwise. >> All right, so I'm a little interested in making sure that I understand this. If I have to explain it, I know my stuff. Let's say I want to understand whether not having homework for math, specifically, makes people enjoy it more. And so I've got my school. >> You can well imagine that would be the case. >> Right, yeah, exactly. I don't like homework, I do like math; I don't have math homework, maybe they're correlated. So my school advertises itself for kids who love math, and they don't have math homework. Since we need more data than just that, we'll look at the other schools in my area. Palo Alto High School and Gunn High School both also take from the same batch of students in the Bay Area, and they do have math homework. What you're saying is that I can't just take a bunch of students from Proof School and a bunch of students from Gunn and ask who enjoys math more; we need a better test group. So what you're saying is that I should go to Gunn and Palo Alto High School, and I should find students who not only applied to Proof, but got into Proof.
They're clearly kids who could have gone to Proof School, but didn't, and now have math homework. >> Yeah, and you could also look at kids who didn't go to Proof but were really interested in math, for whom it was just too long a commute. Say they live far away from the train station, and it would have been a long bike ride or drive to the train station, but otherwise they would have gone. And obviously, living far away from the train station is probably not really correlated; it has nothing to do with liking math or not. >> You wouldn't expect it to. >> Yeah, and more generally, there are a lot of cases where you want to compare these two groups, but you're worried about them being different in other ways. You can look for these small things that change the incentives a little bit, but that do not have anything to do with who they are or what their preferences are. >> Right, so it all sort of boils down to: if you want to know whether thing A has an effect on people, you compare really similar people. You want to make them as similar as possible in all the ways except for thing A. >> Yeah, and you use these instruments that change the incentives to be in one group rather than the other. >> Things that people don't have a choice over, so that- >> Yes, exactly. >> So it's like a randomized study. >> Yeah, exactly, and so for that small group, it becomes like a randomized study. It becomes like an experiment. You get the benefits of an experiment without actually having to run an experiment. >> Incredible stuff, clearly. >> [APPLAUSE] >> Thanks. >> [APPLAUSE] >> So now, what is an example of one of these natural experiments that we got the prize for? I alluded in the video to one study I did when I was at Harvard with a colleague of mine, Don Rubin, and a student, Bruce Sacerdote, where we were interested in the effect of unearned income.
What would be the effect of having a basic income on a variety of outcomes? Initially, we were actually interested in the effect on children, but there were not enough people in the sample in the end to do much. So instead we looked at the effect on people's labor supply, how much they work. It's clear that this is a question that's important for deciding on social policy, but it's difficult to get at directly. What economists had done in the past was things like looking at the effect of spousal income on individuals' labor supply. But that doesn't really work very well. If you were to compare, say, men whose wives make 100,000 with men whose wives make 50,000, you don't see that the men with the higher-earning spouses work less than the men with lower-earning spouses, because they're very different. There's a lot of assortative matching in marriage these days, and there's actually a lot of research on that; Raj Chetty at Harvard is looking at these things, where the correlation in characteristics between spouses has gotten much higher over the years. So that doesn't really work very well. People have also looked at the effect of inheritances. But again, that's not even remotely random; it's something people often anticipate, and there may be other transfers prior to it. So what we did in this study was look at people who won the lottery, and in this particular lottery, you would get paid over 20 years. So you would actually get regular payments for a long time, completely unrelated to your own income and your own decisions. Now, it turned out there were still some challenges. The people who won large amounts of money were fairly different to begin with, and there were non-response-type issues. But in the end, that allowed us to get very precise and very credible estimates of how much less people would work per dollar they would actually receive.
And it's on the order of 5 to 10 cents less per dollar; in some sense, they consume leisure. They use some of that money to reduce their hours and have more free time. So this is a case where earlier work had a very hard time coming up with credible ways of identifying that, and doing an experiment clearly was not feasible; that would be incredibly expensive and take an incredibly long time. But data out there allowed us to cleverly get a handle on these issues. Now, for the last part, I want to talk a little bit about the work I do at some of the tech companies. I started doing that when I moved here to the GSB, and it's been a great learning experience. It's been very inspiring for my research, and I've learned a great deal. At one point, I was in a meeting where a decision was going to be made about either implementing some change in an algorithm or not. You could imagine, at Airbnb, deciding to change the way you show the properties: you put videos up there instead of just pictures, or you put up more detailed descriptions. They wanted to know what the effect of doing that was going to be on rentals, and so they ran a randomized experiment. Now suppose, and I'm making up these numbers, suppose that the estimate was about $3 per unit per year or something. That's not particularly big, but multiplied by a lot of customers, it was a big number. But the standard error was $2. What does that mean? It means we're not really so sure about this $3. In fact, if there was nothing there, if this change made no difference whatsoever, you could easily get an estimate as big as three when the standard error is two. The technical way of saying that is that the p-value, the probability of getting an estimate that big if there's nothing there, is about 0.13. So that's fairly big.
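For readers who want to check the arithmetic, the p-value in this story can be computed directly. This is a small sketch under the usual normal approximation; the estimate of 3 and standard error of 2 come from the talk, while the function name is my own:

```python
from math import erf, sqrt

def two_sided_p(estimate, se):
    """Two-sided p-value for a normally distributed estimate,
    under the null hypothesis that the true effect is zero."""
    z = abs(estimate / se)
    # Standard normal CDF, Phi(z), via the error function.
    phi = 0.5 * (1 + erf(z / sqrt(2)))
    return 2 * (1 - phi)

p = two_sided_p(3, 2)   # the numbers from the meeting
```

With these numbers, the two-sided p-value comes out to about 0.134, matching the "about 0.13" in the talk.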
In fact, it's so big that the technical term is that we say it's not statistically significant. So the question is, what do we actually do with that? There's a tendency in the academic literature, and also outside of it, to take that very seriously. Often, in tables, we make clear which estimates are significant: we put stars on them. So this is where I was reminded of the story by Dr. Seuss: those stars weren't so big, they were really so small; you might think such a thing wouldn't matter at all. But because they had stars, the Star-Belly Sneetches would brag, we are the best kind of Sneetches on the beaches. So these things make a big difference. They make a difference in academics, where there's often a tendency not to publish papers that don't have significant results. But even outside academics, it makes a big difference, for example for drug companies when they do their trials. If results are not significant, that can have a huge impact on their valuation. In academics, what you see is that people try to change the specification, do some additional analyses, and find some way of getting to the stars. You can see that by looking at this figure showing the distribution of all these p-values. You see that people clearly try to find some way of getting to the promised land where they can put stars on these tables. In fact, many journals now don't allow people to put stars in these tables, to emphasize the fact that these p-values are not quite that important. And there's a lot of discussion in the academic world on that. But also outside of academics, there was a court case where a pharmaceutical executive was in the end convicted for manipulating p-values.
What he did was not actually calculate them incorrectly, but he took a sample where the drug didn't seem to have much evidence of any efficacy; the p-value was too big. And he then said, well, let's look at a slightly different sample where we throw out the people who are really sick, because maybe the drug doesn't do anything for them. And let's throw out some of the people who are doing really well, because the drug doesn't do anything for them either. And then for this middle group, carefully chosen, there was an effect with a p-value of 0.004, and he got into big trouble for that. So back to the question in this discussion. We have this point estimate of three, standard error of two; it's not significant. Does that actually matter? At some level, it doesn't. And the person in the room who was actually in charge of making decisions said, well, okay, we have an estimate of three, so that's good. It seems more likely than not that this is actually doing something in the right direction. So why do we really care about the standard error? That was a very astute comment. Why do we actually care at that point, and in what way should the standard error really affect the decision we're going to make? One way to think about it is that, first of all, clearly we do care about standard errors. If we didn't care at all, we could just do really tiny experiments. We could just take two people, randomize them, have one of them see the videos and the other one see the pictures, and then make a decision based on that. And clearly that would not be a sensible thing to do. But why do we care about the uncertainty there? Now, one thing you could argue is that in some way we are risk-averse. We get a point estimate of three, but maybe it could just as well be seven as minus one. And maybe at seven this would be great, and this would become the biggest company in the world.
But at minus one, they would be bankrupt, so maybe they really don't want to take that risk. And so if there's a lot of uncertainty, they wouldn't make the decision to go ahead, even if on balance it looks like it would be good. But in many cases, that's not really the real reason. In many cases, these decisions are actually not that big, so you're not really risking the whole company by making them. So what's really the reason for being concerned about the uncertainty? At some level, it's because I think, in many cases, the ideas we're trying out are bad ideas. Most of them are probably bad ideas, and the whole reason we're doing experimentation is to figure out which are the good ideas and which are the bad ideas. But most of the ideas, especially in these more mature companies, have got to be bad ideas, because the companies are already reasonably close to doing things well and being effective. And you can actually look at that. Here's a figure from a paper by a statistician here at Stanford, Brad Efron. This is not actually from the tech companies; this is from medical experiments. He looks at a huge number of medical experiments and looks at the ratio of the point estimate to the standard error, the z-statistic. You see that most of them are close to zero. Now, all these estimates have some uncertainty in them. So he can actually decompose that and get an estimate of the distribution of what is essentially the true value of all these ideas. You can think of it as: every idea has some true intrinsic value, and then we do an experiment to estimate that value. And what you see is the distribution of these true values. And this is for medical experiments, for drug trials, where the costs are very high, so they try to really make sure that these ideas are good before they do any experiments. And still, most of these ideas are bad in the end.
And probably in the experimentation at the tech companies, it's even worse; most of these ideas are bad. But that's not bad in itself, because the experimentation helps us figure out which ones are the good ones. In order to do that, though, you can't just take the point estimate. You need to combine it with what you think is the distribution of the true value of these ideas. You can think of it as: every idea comes from this distribution, and now you learn how good that particular idea is. But you don't just want to take the experiment; you want to combine it with what you thought beforehand. And so it takes a fair amount of evidence before you should be convinced that a positive estimate is actually indicative of a true positive effect. You should shave it back, shrink it back towards what you thought before you did the experiment. And if you think beforehand, and all the evidence is there, that most of the ideas are actually bad, it should take a lot of evidence, a lot of precision in the experiment, before you're convinced that something is actually a useful thing to implement, before you should make the decision to go ahead. So the reason we care about uncertainty is not that we intrinsically care about p-values. What we care about is how the evidence changes what we thought before we did the experiment. Before the experiment, we were fairly skeptical that any of these ideas were very good, and now it takes, well, it should take, a fair amount of evidence to convince you that this new idea is actually a good one. If the standard error is very small, that evidence is there. If the standard error is very big, that evidence is not there, and you shouldn't be convinced by that new idea. So formally, the way to think about it is from a Bayesian perspective. We're making decisions.
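The shrinkage idea described here can be made concrete with the standard normal-normal Bayesian update. This sketch is mine, not from the talk: the prior (mean 0, standard deviation 1) is a made-up stand-in for "most ideas do roughly nothing", while the estimates and standard errors echo the $3 example:

```python
def posterior_mean_sd(estimate, se, prior_mean=0.0, prior_sd=1.0):
    """Normal-normal Bayesian update: combine an experiment's estimate
    (with its standard error) with a prior over the true values of ideas.
    The default prior is a hypothetical 'most ideas do roughly nothing'."""
    prec = 1 / se**2 + 1 / prior_sd**2                      # posterior precision
    mean = (estimate / se**2 + prior_mean / prior_sd**2) / prec
    return mean, prec ** -0.5

# The noisy estimate of 3 with standard error 2 shrinks most of the way back:
m_noisy, _ = posterior_mean_sd(3, 2)     # -> 0.6
# The same estimate measured precisely (standard error 0.5) keeps most of it:
m_precise, _ = posterior_mean_sd(3, 0.5) # -> 2.4
```

This is exactly why the standard error matters for the decision even when nobody cares about significance stars: it determines how far the prior pulls the estimate back toward "probably a bad idea".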
We're not trying to test hypotheses about what the truth is, which is what these p-values were originally designed for, and they're still very useful for that. It's still the case that in high-energy physics, they use p-values to decide whether they've found a new particle or not, and they're very careful; they're looking for a relatively absolute version of the truth there. But that's not the way experimentation is used in many other settings. There, we're pretty sure the things do something; we're just not sure how much they're doing, and we want to update our beliefs to make sure that we're making effective decisions. And so in order for the students here to be well prepared to participate in those decisions, and to understand how you actually use data and what you're trying to get out of it, we need to make sure they have a fair amount of sophistication. And again, it's not about the technical part of actually doing the analysis; it's about understanding what the statistics can actually deliver, what we're trying to get out of these experiments, how we design these experiments, and what types of problems come up in them. So that people can ask the right questions and not get bamboozled by the analysts who are looking at this data and might just say, well, it's not significant, so you shouldn't do these things. You need to learn how to ask the right questions, and that's what we're trying to teach them here. So let me stop there and open it up for questions. >> In tech companies, where they're trying to decide whether to change their marketing approach, or to change something, features that the customers will or will not like, can you talk a little bit about how you set up appropriate experiments like that? >> Yeah, so that's an incredibly interesting question. It's an incredibly interesting area these days.
For a long time, all the experimentation was really based on the work that had been done in statistics in the 1920s and 1930s by people like Fisher. That was the basis of how drug trials were done, and it was initially the basis for how experimentation was done in the tech companies as well. But in many cases, the settings are actually very different. One big thing is that while in drug trials it's very often very reasonable to assume that there is no interaction, no interference, between the different individuals, that doesn't work in many of the social science settings or in the tech company settings. So in a drug trial, if I take aspirin, that doesn't affect whether you have a headache or not, unless I talk too much. But if you are in a different room, you're fine. It does matter for infectious diseases and for vaccines and such, but in many cases we can separate things and look at units that don't interact with each other. But at, say, Airbnb or the rideshare companies, interactions are intrinsic to the whole company. If I try to get a ride from Uber or Lyft, that takes a particular driver off the market for a while, and so that affects all the other potential riders. So if we do an experiment, and we make it more attractive for a bunch of individuals to go get rides, that immediately affects all the other riders, all the other customers in the market as well. And that's true at Airbnb; that's true for all these companies that are based on individuals interacting in a marketplace. But these interactions are very structured, and so that leads to a whole different way of doing experimentation. And so there's a lot of work, a bunch of people here at the GSB working with these companies and working on academic research, trying to come up with more effective ways of doing the experimentation there that just look very different from what people used to do in medical settings. 
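The rideshare interference problem can be illustrated with a toy calculation. In the sketch below, a fixed driver pool is shared by treated and control riders, so treating half the riders congests the market the control riders depend on, and the naive treated-minus-control difference badly overstates the effect of rolling the treatment out to everyone. All the numbers (rider counts, request probabilities, supply) are made up for illustration:

```python
# Toy marketplace with interference: a fixed driver supply is shared by
# treated and control riders, so a naive 50/50 A/B comparison overstates
# the global effect of the treatment. All parameters are illustrative.

N, SUPPLY = 1000, 400            # riders, and rides the driver pool can serve
P_CONTROL, P_TREATED = 0.5, 0.8  # per-rider request probabilities (in expectation)

def fill_rate(share_treated):
    """Expected fraction of ride requests that find a driver,
    given the share of riders assigned to treatment."""
    requests = N * (share_treated * P_TREATED + (1 - share_treated) * P_CONTROL)
    return min(1.0, SUPPLY / requests)

# Global effect: everyone treated vs nobody treated (two separate markets).
global_effect = P_TREATED * fill_rate(1.0) - P_CONTROL * fill_rate(0.0)

# Naive 50/50 A/B estimate: both arms compete in the same constrained market.
fill = fill_rate(0.5)
naive_effect = P_TREATED * fill - P_CONTROL * fill

print(global_effect, naive_effect)
```

With these parameters the supply constraint binds in both counterfactual worlds, so the true global effect on completed rides is zero, yet the within-experiment comparison shows a large positive difference, because treated riders grab drivers away from control riders. This is the kind of structured interaction that marketplace experimental designs have to account for.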
>> This is sort of similar to the old adage about military plans, where they say military plans are worth nothing when the shooting starts, but the planning is everything. So it sounds to me like a lot of this is a structure for how you could go about thinking about all this data, and a plan for trying to get a better idea of where you ought to be going. Am I misunderstanding, or is that kind of close? >> Yes, some of it. Some of it is trying to figure out how we can learn about the things we're interested in in settings where we can't do experimentation. Some of it is figuring out what type of data we really need, and understanding that it may, in some cases, be very particular data that just tell us about the things we're interested in. Again, in the video, in one of the discussions with my kids, I referred to this paper by Angrist and Krueger. They were interested in the effect of education on earnings. Just comparing people who go to college with people who don't go to college is fraught with difficulties; these people are different in many ways, so it's hard to get causal effects out of that. But Angrist and Krueger figured out some small wrinkle that changed how people made that decision, namely compulsory schooling laws. Other work looked at the distance to college. Again, that kind of thing just gives you a small wrinkle that may affect how people make decisions. So it's somewhat of a combination of being opportunistic and trying to look for the right data. But then in other places, the planning is a much bigger thing. For the experimentation, thinking through how these interactions take place, the structure of these interactions, whether people interact purely anonymously through prices or through other mechanisms, can help you design these experiments effectively. 
Yes? >> A question. I have read in the press about a study that was published in Nature, in one of its subsidiary journals; it was either something-in-evolution or evolution-in-something, I don't remember which. A group in New England decided that there were 61 genes that determine the probability of your living to 100. Have you seen that? And does it follow your rules? [LAUGH] >> No, I have not seen that. In that type of setting, it's, again, very hard to figure out how to separate correlation from causality. Now, there are some things you can do. Sometimes you can exploit the fact that some of the allocation of DNA is random between kids and parents, and so there's been some work trying to separate out some of the genetic effects from some of the environmental effects. So I'm not familiar with that particular study, but I think that kind of thing is a little shaky. I mean, there's also a paper suggesting that winning a Nobel prize makes people live two years longer, and this is based on data from a while ago, because at some point they released the information about who else was nominated for the prize, and that seemed a little shaky too. Yes? >> On the one hand, you want to answer these important questions, and you have limitations on the data or on the natural experiments that you can actually use, as you were saying, to find these little wrinkles that can help you answer those questions. Now, once you find those wrinkles, say in the lottery study in Massachusetts, where it's a particular lottery where people, if I understood correctly, don't have the choice between getting yearly installments and getting a full payment. Then, given that you start having all these constraints, how do you deal with the limitations on whether your results can be applied to other settings? How much of the question are we answering, or is it just a result for a very limited setting? >> Yeah, that's a very good question. 
And so, in that particular case, clearly we're not really that interested in that particular population itself. This was people who played the lottery in Massachusetts, and we're not interested in the people who play the lottery per se, or in people in Massachusetts; we're interested in the general US population. So that clearly comes with some limitations, but there are some things we can do. We can see how different these people who played the lottery are from the general population, because for these people we had their social security earnings records. So we can actually match them to the rest of the population and see if they're very different. Are they different in terms of the relationship between education and earnings? Do these look like really very different people in the way they respond? In that particular case, there was not much evidence that these people were very different, not even in terms of earnings. I would have thought there would be stronger correlations there, but in general, that is a big issue, and so we do worry about the external validity of the studies. At the same time, there is in the profession, and I think rightly so, a sense that we want to focus first of all on internal validity. We want to make sure we get credible estimates, at least for some well-defined subpopulation. And then second, we worry about how much we can generalize that. And sometimes we go too far in that. There were experiments done in the 60s for Head Start-type programs, for early childhood interventions. These were done in very small communities, some in Michigan, a long time ago, and those programs were quite different from what Head Start looks like now. And they were very small; there were 200 or so kids. So the evidence from that is clearly very limited in terms of its relevance for decisions that are being made today. 
And still, people keep bringing up the results from those programs, because that's one of the few times when we did actually randomize, and we know that at least there it mattered to have these early childhood intervention programs. >> Boy, these reunions are amazing. This is just magical, holy mackerel. On decision-making culture and the tools that we use: I was wondering if you could share any engagements you've had regarding the climate problem we have, and how these approaches are being used to find a way to navigate out of this life-threatening issue of climate change and things like that. Thanks. >> Yes, I don't really have any direct insights, and one of the things that is actually most challenging after winning the Nobel prize is that you get asked a lot of things that are outside my direct area of expertise. It's clearly a huge problem. A lot of the methods I work on are used in some of these settings, trying to figure out what is the causal effect of changing things here, on what agriculture is going to look like, what human development is going to look like. We can't do experiments in these cases, so we're going to need to use observational methods to shed light on these questions. But I haven't done any direct research on that, so I'm hesitant to go much beyond that. [APPLAUSE] [MUSIC]