The Future of Work Conference 2018: Intuition, Expertise, Learning, Humans and Machines

Video Statistics and Information

Captions
So for our next fireside chat I am very pleased to welcome IDE director Professor Erik Brynjolfsson back to the stage, joined by Daniel Kahneman, professor emeritus at Princeton and Nobel laureate. Among his many prestigious affiliations, Dr. Kahneman is a member of the National Academy of Sciences, the American Philosophical Society, and the American Academy of Arts and Sciences, and he is a fellow of the American Psychological Association. Please help me welcome them to the stage.

I was a little worried there that she was going to read all of your awards and there would be no time left for our panel — but welcome to the stage, Danny, we're so glad you could join us. We had a good time just now listening to the last panel, and I think that's a good place for us to pick up: it was about the biases of humans and machines. You are, I think it's fair to say, the world expert on human biases, having more or less developed the field. A lot of economists are very jealous, because while you got your Nobel Prize in economics, you were just telling me back there that you never bothered to take an economics course.

That was probably an advantage.

So one question we can dive into, picking up on that fascinating panel and all the insights from it: we heard about algorithmic bias and a little bit about human biases. You know something about both — in fact, I know you're writing a book on some of this. What do you see, let me put it this way, as the bigger risk: the human biases or the algorithmic biases?

Well, I think it's pretty obvious that it would be the human biases, in the sense that what happens with algorithms you can trace and analyze much better than you can the decisions of humans. So I would say human biases are the real problem. There was a very interesting issue raised here about a firm that is very sexist: if you're predicting success in that firm, you're going to end up penalizing women, and that is undoubtedly true. Now, what should be done? If you use an algorithmic system, and if you use a valid system — that is, a system designed to be as predictively accurate as possible — you are going to penalize women, because in fact they are penalized by the organization. The problem is really not the selection, I think; the problem is the organization. So something has to be done to make the organization less sexist, and as part of doing that you would want to retrain your algorithm, but you certainly wouldn't want just to retrain the algorithm and keep the organization as it is. The key problem is the organization.

Yeah, so humans have a lot of biases as well, and we want to get deeper into addressing those too. Right now you're actually working on a new book — on noise, that's what it's called — and you're talking a little bit about the different kinds of mistakes people can make: there are biases and there's also noise. Help us understand that a little bit.

Well, the motivating anecdote that really started this book going was a consulting job I did with an insurance company, where we measured what technically is really called noise, and we did it in the following way. We had them construct — and this was really done by the underwriting group itself — a series of six completely realistic cases.
These were then given to 50 of their underwriters, who were asked to treat them as they would an underwriting job in their regular routine, and they evaluated them. So we had a lot of evaluations of the same six cases. Now, the statistic that we computed is very simple: you take all possible pairs of underwriters — we had, say, 50 — and for each pair you compute the average of the dollar amounts that they gave, you compute the difference, and you divide the difference by the average. So, in short: in percentage terms, how much variability is there? Now, I will ask you to think about what you would expect that number to be in a well-run organization — and that was a well-run organization. The number you get when you ask that question is somewhere around 10 or 15 percent; 15 percent looks a little large, so 10-plus, because we expect people not to agree completely on matters of judgment. That's the definition of a matter of judgment: people are allowed to disagree. But how much are they allowed to disagree? The executives there were saying about 10 percent. The truth is 56 percent. That's the amount of noise — it was 56 percent among the underwriters, and the figure for the claims adjusters was comparable. That's a lot of noise. And it turns out there are a lot of occupations where a single person makes decisions on behalf of the organization — like a triage nurse in the emergency room, she makes the decision — and if you have a lot of noise, that sets a ceiling on how accurate you can be.

So noise is a mistake. If you think of shooting at a target, there are really two kinds of errors you can make: you can be biased, so that you're shooting, say, northeast of the target, or you can just be scattering your shots around the center — that's noise. Those are two very different types of error.

But are there things we can do to reduce both of them?

Well, it turns out that noise is much easier to measure, because you can measure noise without knowing the correct answer. You can't measure bias without knowing the correct answer, or at least some completely unbiased, if noisy, estimate of it. With the underwriters we had no idea what the correct answer was; all we know is that that variability is extremely costly to the organization.
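To make the statistic concrete, here is a minimal Python sketch — my own illustration, not code from the talk, with invented premium figures — of the pairwise noise index just described.

from itertools import combinations
from statistics import mean
def noise_index(judgments):
    # judgments: dollar amounts given by different underwriters for ONE case;
    # for every pair, take |difference| divided by the pair's average
    return mean(abs(a - b) / ((a + b) / 2) for a, b in combinations(judgments, 2))
# hypothetical premiums quoted by five underwriters for the same case
case_premiums = [9500, 13000, 16000, 8000, 12500]
print(f"noise index: {noise_index(case_premiums):.0%}")
# note: no "correct" premium is needed anywhere -- noise is measurable
# without knowing the right answer, which is not true of bias

In the insurance study Kahneman describes, this number came out around 56 percent rather than the roughly 10 percent the executives expected.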
So reducing noise is much easier. The first thing you can do is ask whether you need those people at all, and whether in fact an algorithm could do better — and in at least some of these cases an algorithm will do better. An algorithm has the big advantage over humans that it doesn't have noise: you present it the same problem twice, you get the same output. That's just not true of people; even when you present the same problem twice — if you manage to set it up so that they forget they have already seen it — they'll give you a different answer.

But you want to train that algorithm on lots of data points.

Oh no, not even that, because it turns out you can build algorithms that are equal to or better than humans without collecting any data at all. There is a kind of predictive algorithm called a unit-weight algorithm: you take six, seven, eight dimensions that you are pretty sure are correlated with the outcome, and you give them equal weight. That can be done — what you need to know is the mean and the standard deviation of each dimension, so that you can equate the weights. It turns out that formulas constructed in this way are about as accurate as multiple regression formulas.

Even though some of those dimensions may be relatively unimportant?

Well, you select them to be as important as possible; they're not equally important. I call them reasoned rules. You don't know exactly what the correct weights are — they depend in very complicated ways on the correlations among the variables — so unit weighting turns out to be a remarkably robust kind of algorithm, and the comparison between those formulas and multiple regression is, I think, basically my favorite result.
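As a rough illustration of such a unit-weight ("reasoned rules") formula — again my own hedged sketch with invented predictors, not code from the talk — each chosen dimension is standardized using its mean and standard deviation, and the z-scores are simply added with equal weight, flipping the sign of any dimension assumed to work against the outcome.

import numpy as np
def unit_weight_score(X, signs):
    # X: (n_cases, n_dims) predictors chosen by judgment; signs: +1/-1 per dimension,
    # the assumed direction of its effect on the outcome (the only judgment needed)
    z = (X - X.mean(axis=0)) / X.std(axis=0)   # knowing mean and sd equates the weights
    return z @ np.asarray(signs, dtype=float)  # equal-weight sum, no regression fitting
# hypothetical loan applicants: income, years employed, prior defaults
X = np.array([[40_000.0, 2, 1], [85_000.0, 10, 0], [60_000.0, 5, 0]])
print(unit_weight_score(X, signs=[+1, +1, -1]))  # higher score = better predicted outcome

Because nothing is estimated from outcome data, such a formula needs no training set at all, which is Kahneman's point; the finding he refers to is that it nevertheless tends to predict about as well as a fitted multiple regression.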
And this also suggests that you may be able to combine humans and machines in different ways to get better outcomes.

Well, yes, I think you can combine humans and machines, provided the machine has the last word. What we do know is that humans have a lot of valuable inputs to provide to a predictive formula: they have impressions, they have judgments. But what humans are not very good at is integrating information in a reliable and robust way, and that's what algorithms are designed to do. What you want to reserve for humans, when you're using the algorithm, is the possibility of overriding the algorithm when something obviously relevant has happened. So if there is an algorithm that offers a loan to someone, and then the banker learns that that person has just been arrested for fraud — something that isn't in the algorithm's data set — the human overrides it in that case. But those are rare cases. In general, if you allow people to override the algorithm, you lose validity, because they override too often, and they override on the basis of their impressions, which are biased, inaccurate, and noisy — so that, depending on their mood at the moment and so on, they will give you different answers.

You're not a big fan of human decision making.

Well, compared to what? I would say that for a job like underwriting it shouldn't be surprising that a simple algorithm can do just as well: you're predicting things about the future, your information has limited validity, and there are many decisions that have to be made by the organization anyway, in the sense of deciding what percentage of cases to accept, which risks to accept, and how to translate judgments and evaluations into action.

There is all this concern we heard, especially in the last panel — and people have been talking about it for some time now — that some of these algorithms may have hidden biases built into them that we don't even realize are there, and they may be making decisions not just for one person or a few dozen people but for thousands or millions of people, with these hidden biases built in.

Well, if you have an algorithm that has been constructed by machine learning, then it will have the problem we were talking about earlier: if the success it is defined to optimize is a bad definition of success, you're going to get disastrous outcomes. That's clear; we were talking earlier about disastrous outcomes.

So why are you such a fan of them?

Because what's the alternative? If you get a disastrous outcome — this is what we were saying earlier — the problem is upstream from the algorithm; the problem is with the training data, which comes from humans. Throwing it back to the humans just takes you back to the same problem.

Is there anything we can do to reduce... you seem to think that we could do better with the algorithms, though.

Oh, I think so. We're going back to the problem we were discussing earlier: if you have an organization that decides to reform itself, to reduce its biases, then you will want to change the algorithm, but you will make that change deliberately. You will know what you are doing, and you will know that for a time you're losing predictive validity.

But the algorithms — often some of these very large neural nets we were talking about this morning — are very hard to disentangle in terms of how they make their decisions. They may have thousands or millions of weights, so they're not very explainable to the person trying to diagnose what's going wrong.

Yeah, the problem of algorithms explaining themselves is a very hard problem, and I can't speak to the technical details.

There's some work we touched on briefly about explainable AI; that's one area, though it's nascent. The other thing you touched on earlier is that the algorithms can be rerun: you can test them against various specific problems that you're worried about, see whether they have those kinds of problems, address them, and then see whether the problem continues to emerge after you've addressed it.

You know, the general problem really is with the criterion by which you measure success, the thing the algorithm is trying to optimize — and the conversation we had about YouTube is really to that point. There was that article recently in the New York Times about the recommendation engine that YouTube has — and by the way, Cathy O'Neil was making a similar point about Facebook — that the recommendation to somebody who has shown interest in a right-wing message of some kind will be a more extreme right-wing message. And that is done with purely commercial intentions. It's not just right-wing: whether it's wine or whatever, you show some interest and they show you more extreme versions of it.

Though with wine I wouldn't know what "more extreme" means — but politically...

Oh, but you do know. So YouTube is polarizing — which I hadn't thought of earlier — and that's built in, because of what it's trying to maximize. If you're trying to maximize the time that people spend on YouTube, this is going to be the result.

So this is not a conscious desire to polarize the American people; this is the desire to drive engagement, and the algorithm therefore picks things that are more extreme, because we tend to be drawn to them — we are more interested in more extreme versions of what we already think. If you are on the left you are interested in more extreme left positions, much more than you really want to expose yourself to opinions you completely disagree with. So that's built in.

Yeah. I was looking again at your book Thinking, Fast and Slow. Your bio said it had sold 1.5 million copies, which is completely wrong — apparently it's six million copies — so congratulations on that. And this basic concept of System 1 and System 2 — some things we can decide very quickly, some things require more deliberation. In the context of a conference where we're talking about machine learning, I'm reminded of Andrew Ng's one-second rule, and I think you've talked to Andrew a little bit about this.
Andrew Ng, a great machine learning expert, tried to develop some criteria at his previous company for which kinds of projects would be suitable for machine learning, and one of the rules of thumb he gave people was: if a human can do it in less than one second, there's a good chance we can design a machine learning system to do it. Do you think his one-second rule for machine learning maps well onto thinking fast, that System 1 part of our brain?

It maps well, because there is an operation of the mind — associative memory — so that when something happens, a stimulus is registered and it evokes...

What are examples of things that are in System 1, that we think quickly about?

Oh, you very quickly think of causal stories, for example; that would be part of System 1. So if I tell you that somebody spent the day in New York, enjoyed herself in the crowds and so on, and then she came home and she didn't have her wallet — when people are tested later, they think the word "pickpocket" was in that sentence. It wasn't, but it was filled in.

And that actually cuts back to our earlier discussion: a machine learning device will fill in some of these assumptions, this bias.

It has to.

Okay, we only have a few minutes left before I go to questions from the audience — I would love to get some of your questions — but one of the other things we talked a little bit about was this idea of habituation. Actually, let me come back to that later... no, let's do it now. Let's talk about the well-being and habituation issues, and whether or not there is a good way to measure our happiness and our well-being through some of these surveys, and whether they're capturing the essence of what makes for well-being.

Yeah, we were talking about that. I think Erik has shown very convincingly that there is a huge, very significant consumer surplus from digital goods. Facebook, for instance: we would need to be paid a lot to give it up, much more than it actually costs, and that's the definition of consumer surplus. But actually that's true of heroin as well. What's common to both of these is that this was a need that didn't exist fifty years ago; the need has been created and now we're fulfilling it. But the basic question is: are people really better off? They're better off economically, and there are certain aspects of it — things people are willing to pay a lot for, and so on — that are undoubtedly good. But are people really happier than they were? This is not at all obvious, and the reason it isn't is that people adapt and habituate to most of what they have. So the interesting question, I think, when we think about well-being and about providing various goods to people, is whether they're going to get used to having those goods, so that after a while the goods do nothing for them, or whether those goods continue to give pleasure. There is really a difference between goods that you adapt to and those that you don't, and it turns out that many of the ones you don't adapt to involve human contact: friendship is a good that doesn't adapt, it just stays good. A weekly poker game just stays good — you can do it for years, and if anything it gets better. So there is a difference between that and things you do adapt to, and if we want to improve human welfare or human well-being, thinking about providing more — or looking for ways for people to get more — of the goods that they will not adapt to is really a worthwhile objective.
It's a really interesting point. So it's not enough to measure GDP, and it's not even necessarily enough to measure consumer surplus, what people are willing to pay for; we have to think at this next level about the kinds of things that people crave but that aren't really giving them happiness, versus the things that they truly value.

Yeah, and really the essential thing here is this adaptation. I grew up in Israel, and I grew up without air conditioning, and today I can't imagine life without air conditioning. It must have been horrible — in fact, I think I had a pretty good time. So this is a good that creates a need that you adapt to, and not much changes in your overall welfare.

But I don't think you'd go as far as to say let's just pull out all the air conditioners.

Well, no, you can't, because it has happened: we have the air conditioning, and it is immediately an improvement. The problem is just that you get used to it. So we are now used to a lot of things that we didn't have. Now, some of the new goods are, I think, important to things that really matter, like health and longevity. But for those that don't serve those obviously important objectives, that we get used to and that then stop giving us anything, it's not obvious how valuable they are — although they could be valuable because they keep the economy going: people are willing to pay for them, people have jobs producing them, and perhaps that is the real source of the happiness, that people are working.

That's an interesting perspective. Let's get some questions from the audience. I see there's one hand there and one over there, so I guess we'll start right here when the light comes up.

Okay, you made an interesting point about people naturally adapting and habituating to the things that they have, and you mentioned the air conditioner. Do you think it's inevitable that as we continue to progress, our expectations keep getting higher and perhaps become unrealistic? You can't imagine living without an air conditioner now, because now you know what it's like to have one.

So the question is: are we stuck always being unsatisfied once we get used to things like air conditioning? I think there's a term for this, the hedonic treadmill — are we sort of doomed to never being satisfied?

Well, adaptation is certainly something that's built in, and there are those classic results that, by and large, there is economic progress but the survey questions don't track it — by and large, survey measures show happiness remaining pretty much constant. So even though GDP has gone up, we haven't gotten happier.

And yet there is a correlation across countries between GDP and reported well-being, right? Richer countries are on average happier — although a lot of that happens at the very low end, the poor end of the spectrum, where people are getting their basic needs met — food, clothing, shelter — and it seems to level off. There's been a bit of a debate as to whether, and how much, it continues to go up after a certain point.

Yeah, and that hasn't really held up at the level of countries.

But over time — we spend a lot of our time and energy trying to improve our well-being, to make ourselves better off. Is that a futile exercise, or do you think we are able to keep improving it?
Well, the point that I was making earlier: certainly when you get air conditioning, if you haven't had it, it's a major improvement in your standard of living, and for a while you're just delighted with it. We did a study once to answer a question — there was a debate between my late wife and myself about whether people are happier in California; she didn't want to leave California and I wanted to move to Princeton, so we had that debate.

And so you actually did some research on it.

I did some research on that, and we compared students in Ohio and in Los Angeles, and it turns out that people are not happier in California — but everybody thinks they are, including the Californians and the Ohioans.

The people in Ohio thought so too?

Oh, absolutely. So you could imagine somebody from Ohio deciding to make himself happier by moving to California; he probably won't be happier, but he'll think he is, because he will remember Ohio, and that will keep the contrast going.

There are real subtleties there — you're twisting my mind here. But let's get some questions over here.

You started the talk by discussing the dangers, the badness, of noise, but Arianna mentioned in her session that she got a loan because she walked into a bank, and that there is no way an algorithm would have given her that loan. So I'd just like to hear your thoughts on that.

So in the previous panel Arianna Huffington described how she had been rejected for many loans, perhaps through rules or algorithms, and then she walked into a bank and that person gave her a loan — she implied maybe he wasn't following the rules, or at least wouldn't normally have given it to her — but it worked out for her, and look at her now. And it worked out as a good decision for that bank too, I think.

Well, there will always be cases like that; there will be exceptions. When you are turning down people for loans, some of them should have gotten the loan. But what noisy individuals do, in effect, is take the best prediction, the optimal prediction, and then flip some of the optimal predictions at random to make them more random. That is noise — that is what noise is — and it cannot make things better.

Hmm — and could it make us better off, flipping some of them?

There's only one condition under which a lot of variability is a good thing, and that is if it helps selection. If there is feedback and selection, that's how evolution works — it's like a learning system: you have mutations and then they get selected in or selected out, and for that you want a lot of mutations. If you keep choosing the same kind of people over and over, maybe you would have been better off trying some new people who would have worked out — but only if you give yourself a chance to recognize it. So in fact that is a very important thing for organizations that are selecting personnel: to keep trying to hire some people that they would normally reject, in order to learn about the validity of their selection.

So that may be dynamically optimal even if it's not statically optimal.

Yeah.
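A small hedged sketch of that exploration idea — my own illustration, not from the talk: an organization that occasionally accepts candidates its screening rule would reject can later compare their outcomes with the rule's predictions and estimate how valid the rule really is. The threshold and exploration rate below are arbitrary assumptions.

import random
EXPLORE_RATE = 0.05  # assumed fraction of normally-rejected candidates to try anyway
def admit(score, threshold=0.7):
    # returns (admitted, was_exploration) for one candidate with a predicted score
    if score >= threshold:
        return True, False    # normal acceptance by the rule
    if random.random() < EXPLORE_RATE:
        return True, True     # deliberate exception, made in order to learn
    return False, False
admitted, explored = admit(score=0.55)  # usually rejected, occasionally an exploratory hire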
So I think we have maybe time for another couple of questions — I guess there's one back here.

You're both social scientists. Could you talk about how you see machine learning potentially changing the process of doing science, and whether it will extend to machine learning and artificial intelligence generating hypotheses, rather than just being a methodology that researchers use within the current process? Will we discover fundamentally new things, and will AI actually take on more parts of that process in science?

Really interesting questions. So, he points out that we're both social scientists, and the question is how machine learning will change social science, and whether, do you think, it will be able to help generate hypotheses and formulate problems, not just be used to assess them.

Clearly something that is going to happen is that there is more and more big data being accumulated, it will be analyzed, and there will be a lot of learning. I want to react to something that was said on the previous panel — or maybe it was said in the green room and not on the panel — about the ability of AI to decode human interaction. My guess is that AI will be very, very good at decoding human interactions and human expressions, and the reason is this: if you imagine a robot that sees you at home and sees your interactions with your spouse over time, that robot will be learning — and what matters is that what any robot learns will be learned by all of them. This is like self-driving cars: it's not the experience of a single individual self-driving car. So the accumulation of emotional intelligence could be very, very rapid once we start.

You can learn from multiple people. There was an interesting study about trying to predict how well people would like a cruise or other vacation options, and one of the best predictors was not just what you had done yourself but what other people like you had done. And if a machine system has the ability to draw on that other information, in terms of building a prediction...

What is going to happen is that those robots inside the homes are going to collect big data very quickly; there is going to be a massive amount of it — they will have a lot more data than we do.

So that might help with making some of these decisions. I still think, today, I would be on the side that humans mostly have a huge advantage in problem formulation and hypothesis generation, and the machines can be very useful in assessing them. Whether that will someday change — you can imagine it evolving — but I think right now that's a pretty good division of labor.

Actually, the reason I'm talking about emotional intelligence is that I think a lot of it falls into the one-second category, into thinking fast. I'm not talking about figuring out what the other person's motives are; it's just the kind of intuitive understanding that we have. Those machines are going to have a better intuitive understanding than we do.

So that's another thing we can put in the column for the machines. I think we have time for another question — right over here somewhere... there we go.

Okay, thank you very much. Underlying everything that we've heard today is, for me, a question about the integrity, the validity, the reliability, and the generalizability of the data that we're doing analytics on — and we know from the last election... OK, the question is short: what can we do to ensure, when we look at data — like in the last election polls, where we all thought the result was going to be different, and the data is almost self-selected —
how do we control for the biases in the way that we're collecting data?

So this is coming back a little bit to the thing we talked about earlier, about the biases in the data — is that right? It's how we collect it, whether it's representative. OK, so what can we do to address the fact that sometimes data is collected in a biased way and is not representative, and to address those kinds of concerns? Is there anything we can do systematically to address it?

No — you are going to do that the way you do science in general: you are going to try to get data that is as good as you can make it. You are deciding what data to collect all the time. Clearly, some areas where people exercise judgment today will be taken over by algorithms; how far that will go and so on remains to be seen.

But you don't seem very worried that these algorithms will perpetuate biases, or even amplify them.

No.

So let's say, biases or not, what is your big worry about algorithms?

Well, what this meeting is about: what it will do to people. It may create superfluous people, it may destroy good jobs, and so on. Those are the big worries.

Well, I don't want to leave this on an unhappy note, because you've always got a smile on your face, so let me just wrap up. I don't think you're going to like this question, because I've asked it before, but do you have any advice on happiness? You're a person who has studied this for a long time, you're in your 80s, and we're all curious about how we can live happier lives, and you've thought about it a lot. So what wisdom can you share on that?

Well, I've said one thing already, which is: really think in terms of goods that you will not adapt to, and look for those — and a lot of that is social. Another aspect is that, if you look at your life, the finite resource is time, so thinking about how you spend your time — because that's what you've got — and optimizing your use of time in some way is very important too. If I have to give some advice about happiness, those really would be the two pieces.

Well, this has been an optimal use of time, and I'm very grateful that you shared it with us. Thank you very much.
Info
Channel: MIT Sloan Alumni
Views: 13,497
Rating: 4.8596492 out of 5
Keywords: MIT, MIT Sloan, MIT Sloan Alumni, MIT IDE, IDE, Digital Economy, Future of Work, Daniel Kahneman, Erik Brynjolfsson
Id: TGdfMKzyN88
Length: 35min 59sec (2159 seconds)
Published: Thu May 17 2018